
2. Autonomous Agent Learning

Multi-agent learning can be seen as an extension of autonomous agent learning. But can the study of multi-agent learning really benefit from the results that have been achieved in autonomous agent learning?

The application of symbolic AI to robotics reveals one of its major weaknesses, namely that low-level processes are taken for granted. Much work on learning robots has therefore concentrated on learning the details of action execution and its effects, and on learning the semantic relation between symbolic representations and the reality they represent [1]. Most of this work has been concerned with a single agent. There are some exceptions, however. For example, Alberto Segre's ARMS system learns how to plan from observations of a teacher agent's plan executions. Furthermore, John Laird's Robo-Soar, apart from being an application of SOAR to a real-world object-manipulation task, can accept advice from another agent. In the World Modellers Project the goal was to experiment with learning from observations of another agent. The other agent appears solely in the role of a teacher, and hence communication between the agents is of a somewhat special kind. Although research on learning robots has addressed a number of important issues, the questions related to communication, cooperation and goal definition have, in general, been left aside. This does not mean, however, that the work is without potential interest for multi-agent learning.
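
All of these systems cast the second agent as a teacher whose behaviour the learner observes. As a minimal sketch of that idea (only an illustration under loose assumptions; none of the names below come from ARMS, Robo-Soar or the World Modellers Project), the following Python fragment records the teacher's situation-action pairs and later imitates the most frequently observed action:

    from collections import Counter, defaultdict

    class ObservationLearner:
        """Learner that imitates a teacher it can only observe."""

        def __init__(self):
            # situation -> counts of actions observed from the teacher
            self.observations = defaultdict(Counter)

        def observe(self, situation, teacher_action):
            """Record one step of the teacher's behaviour."""
            self.observations[situation][teacher_action] += 1

        def act(self, situation, default="explore"):
            """Imitate the teacher's most frequent action, if any was observed."""
            counts = self.observations.get(situation)
            return counts.most_common(1)[0][0] if counts else default

    learner = ObservationLearner()
    learner.observe("door-closed", "open-door")
    learner.observe("door-closed", "open-door")
    learner.act("door-closed")   # -> "open-door"
    learner.act("corridor")      # -> "explore" (never observed)

Note that the learner here never queries the teacher; a system such as Robo-Soar, which can accept explicit advice, relaxes exactly this restriction.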

Mitchell (1990) describes an autonomous agent (THEO) as having three learning goals: becoming more perceptive, more correct and more reactive. There is, at present, no consensus on how to map these learning goals to learning methods. Future work could provide some of the answers not only to the questions we have mentioned, but also to the following related issues: When is it better to reason, and when to act? When should the system initiate learning, and for how long should it continue?
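
One way THEO-like systems are reported to pursue the reactivity goal is by compiling the results of deliberation into cached stimulus-response rules. The Python sketch below is a loose illustration of that idea under our own assumptions, not THEO's actual architecture: the agent plans slowly the first time a situation is met, caches the chosen action, and thereafter reacts from the cache, which is one simple answer to the reason-versus-act question.

    from typing import Callable, Dict, Hashable

    Situation = Hashable   # e.g. a discretized sensor reading
    Action = str

    class CachingAgent:
        """Agent that becomes more reactive by caching deliberation results."""

        def __init__(self, planner: Callable[[Situation], Action]):
            self.planner = planner                    # slow, deliberative component
            self.rules: Dict[Situation, Action] = {}  # learned stimulus-response rules

        def step(self, situation: Situation) -> Action:
            if situation in self.rules:               # react: a cached rule applies
                return self.rules[situation]
            action = self.planner(situation)          # reason: no rule yet, so plan
            self.rules[situation] = action            # learn: cache for next time
            return action

Under this policy the agent initiates learning exactly when deliberation was needed, and stops refining a rule as soon as one exists; richer schemes would also revise cached rules when they fail.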

