ECAI-2000 Conference Paper

Empirical Comparison of Probabilistic and Possibilistic Markov Decision Processes Algorithms

Régis Sabbadin

Classical stochastic Markov Decision Processes (MDPs) and possibilistic MDPs aim at solving the same kind of problems, involving sequential decision under uncertainty. The underlying uncertainty model (probabilistic / possibilistic) and preference model (reward / satisfaction degree) differ, but the algorithms, based on dynamic programming, are similar. So, a question may be raised about when to prefer one model to the other, and for which reasons. The answer may seem obvious when the uncertainty is of an objective nature (symmetry of the problem, frequentist information) and when the problem is faced repeatedly and rewards accumulate. It is less clear when uncertainty and preferences are qualitative and purely subjective, and when the problem is faced only once. In this paper we carry out an empirical comparison of both types of algorithms (stochastic and possibilistic), in terms of "quality" of the solutions and time needed to compute them.
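The structural similarity mentioned in the abstract can be illustrated by the two dynamic-programming backups on a toy problem. The sketch below is illustrative only, not the paper's experimental setup: all transition probabilities, possibility degrees, rewards, and satisfaction degrees are invented, and the possibilistic backup shown is the standard optimistic qualitative criterion, U(s) = max_a max_s' min(Pi(s'|s,a), U(s')).

```python
GAMMA = 0.9  # discount factor for the probabilistic model

# Probabilistic model (illustrative numbers): transition probabilities
# P[s][a] = [p(s'=0), p(s'=1)] and immediate rewards R[s][a].
P = {0: {'a': [0.8, 0.2], 'b': [0.5, 0.5]},
     1: {'a': [0.2, 0.8], 'b': [0.3, 0.7]}}
R = {0: {'a': 1.0, 'b': 0.0}, 1: {'a': 0.0, 'b': 2.0}}

def prob_backup(V):
    """One Bellman backup: V(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]."""
    return [max(R[s][a] + GAMMA * sum(p * V[t] for t, p in enumerate(P[s][a]))
                for a in P[s]) for s in sorted(P)]

# Possibilistic model (illustrative numbers): possibility degrees
# Pi[s][a] = [pi(s'=0), pi(s'=1)] and satisfaction degrees Mu[s'],
# all taken on a qualitative scale inside [0, 1].
Pi = {0: {'a': [1.0, 0.4], 'b': [0.6, 0.9]},
      1: {'a': [0.2, 1.0], 'b': [1.0, 0.7]}}
Mu = [0.3, 0.8]

def poss_backup(U):
    """One optimistic qualitative backup: U(s) = max_a max_s' min(Pi(s'|s,a), U(s'))."""
    return [max(max(min(Pi[s][a][t], U[t]) for t in range(len(U)))
                for a in Pi[s]) for s in sorted(Pi)]

# Both solvers iterate the same backup shape; only (sum, *) vs (max, min)
# and (reward, discount) vs (satisfaction degree) change.
V = [0.0, 0.0]
for _ in range(200):
    V = prob_backup(V)

U = list(Mu)           # horizon-0 values are the satisfaction degrees
for _ in range(10):
    U = poss_backup(U)
```

Note the contrast the paper's comparison rests on: the probabilistic backup combines quantities arithmetically (expected discounted reward), while the possibilistic one uses only min and max over a qualitative scale, so it converges to its fixpoint in very few iterations.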

Keywords: Uncertainty in AI

Citation: Régis Sabbadin: Empirical Comparison of Probabilistic and Possibilistic Markov Decision Processes Algorithms. In W. Horn (ed.): ECAI 2000, Proceedings of the 14th European Conference on Artificial Intelligence, IOS Press, Amsterdam, 2000, pp. 586-590.


ECAI-2000 is organised by the European Coordinating Committee for Artificial Intelligence (ECCAI) and hosted by the Humboldt University on behalf of Gesellschaft für Informatik.