15th European Conference on Artificial Intelligence
July 21-26, 2002, Lyon, France
How Situated Agents can Learn to Cooperate by Monitoring their Neighbors' Satisfaction
Jerome Chapelle, Olivier Simonin, Jacques Ferber
This paper addresses the problem of cooperation between learning situated agents. We present an agent architecture based on a satisfaction measure that ensures altruistic behavior in the system. These cooperative behaviors are obtained by reacting to local signals that agents emit according to their satisfaction. We then introduce into this architecture a reinforcement-learning module to improve individual and collective behaviors. The satisfaction model and the local signals are used to define a compact representation of agents' interactions and to compute rewards for the behaviors. Agents thus learn to select behaviors that are well adapted to their neighbors' activities. Simulations of heterogeneous robots working on a foraging problem demonstrate the benefits of the approach.
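The mechanism described above can be sketched in a few lines: each agent broadcasts a local satisfaction signal to its neighbors, and a reinforcement-learning update rewards behaviors that raise both its own and its neighbors' satisfaction. This is an illustrative sketch only; the behavior set, the reward formula, and the learning rule below are assumptions, not the paper's actual model.

```python
import random

# Hypothetical behavior repertoire (an assumption, not from the paper).
BEHAVIORS = ["work", "help_neighbor"]

class Agent:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.satisfaction = 0.5                    # personal satisfaction in [0, 1]
        self.q = {b: 0.0 for b in BEHAVIORS}       # value estimate per behavior
        self.alpha, self.epsilon = alpha, epsilon  # learning rate, exploration rate

    def signal(self):
        # Local signal emitted to neighbors: here simply the agent's
        # current satisfaction (an assumed encoding).
        return self.satisfaction

    def select(self):
        # Epsilon-greedy behavior selection.
        if random.random() < self.epsilon:
            return random.choice(BEHAVIORS)
        return max(self.q, key=self.q.get)

    def learn(self, behavior, neighbor_signals):
        # Reward mixes the agent's own satisfaction with the mean of its
        # neighbors' signals, so behaviors that raise neighbor satisfaction
        # are reinforced (altruism). The equal weighting is an assumption.
        mean_neighbor = sum(neighbor_signals) / max(len(neighbor_signals), 1)
        reward = self.satisfaction + mean_neighbor
        self.q[behavior] += self.alpha * (reward - self.q[behavior])
```

In this sketch the neighbors' signals enter the reward directly, which is what lets an agent learn cooperative behavior without a global critic: all the information it needs is local.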
Keywords: Autonomous Agents, Reinforcement Learning, Multi-Agent Systems, Robotics
Citation: Jerome Chapelle, Olivier Simonin, Jacques Ferber: How Situated Agents can Learn to Cooperate by Monitoring their Neighbors' Satisfaction. In F. van Harmelen (ed.): ECAI 2002, Proceedings of the 15th European Conference on Artificial Intelligence, IOS Press, Amsterdam, 2002, pp. 68-72.