
5.1 Utility of Learning

Learning in distributed systems opens new horizons: it raises issues that machine learning has not previously had to address. For example, it forces us to ask why a particular agent, in a community of agents, should want to learn at all. Designing agents that indiscriminately learn about everything in the world goes against the basic philosophy of distributed AI.

We therefore believe it is necessary to reason about the utility of learning. In most general architectures of intelligence (SOAR, THEO, PRODIGY, ICARUS), this issue has received little attention, which may explain why some systems are ill-behaved: the more they learn, the slower they perform. We believe that addressing this point in the context of DAI will make it easier to find the appropriate answer(s).
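The slowdown mentioned above is often called the utility problem: each learned rule speeds up the problems it applies to, but adds matching overhead to every decision cycle. One way to reason about the utility of learning, sketched below under a simple assumed cost model (all names and numbers are illustrative, not from any of the cited architectures), is to retain a learned rule only when its expected benefit exceeds its cost.

```python
from dataclasses import dataclass

@dataclass
class LearnedRule:
    # Hypothetical cost model for a learned rule.
    name: str
    application_freq: float   # fraction of cycles in which the rule fires
    time_saved: float         # ms saved each time the rule applies
    match_cost: float         # ms of matching overhead added to every cycle

def expected_utility(rule: LearnedRule) -> float:
    """Net per-cycle runtime change if the rule is retained."""
    return rule.application_freq * rule.time_saved - rule.match_cost

def retain(rules):
    """Keep only rules whose expected utility is positive."""
    return [r for r in rules if expected_utility(r) > 0]

rules = [
    LearnedRule("macro-op-A", application_freq=0.30, time_saved=5.0, match_cost=0.4),
    LearnedRule("macro-op-B", application_freq=0.01, time_saved=2.0, match_cost=0.5),
]
kept = retain(rules)
# macro-op-A: 0.30 * 5.0 - 0.4 = 1.1 > 0  -> retained
# macro-op-B: 0.01 * 2.0 - 0.5 < 0        -> discarded
```

An agent applying such a filter learns selectively rather than indiscriminately, which is precisely the question the utility of learning asks it to settle.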
