This paper reports research aimed at designing a confidence measure for an agent who learns and uses probabilistic models of other agents' behavior. If the agent has several alternative models of a particular opponent, she can use the confidence values to combine the models or to choose among them. The literature on probabilistic reasoning suggests that such a measure can be based on the variance of the belief-evolution process. Accordingly, this paper proposes a confidence measure for models of this class that relies on self-observation: the agent computes the aggregate (decayed) variance of her own learning process. As the variance-based measure turns out to have serious disadvantages, a second measure is proposed and studied as well. This second measure is inspired by research on universal prediction and is based on the self-information loss function. Both measures are verified through simple experiments with simulated software agents.
Keywords: multiagent systems, confidence measure, machine learning, user modeling.
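The two measures summarized above can be illustrated with a short sketch. The function names, the forgetting factor `decay`, and the exact update rules are illustrative assumptions, not the paper's definitions: the first function tracks an exponentially decayed variance of a stream of belief values, and the second computes the average self-information (logarithmic) loss of the probabilities a model assigned to outcomes that actually occurred.

```python
import math

def decayed_variance(samples, decay=0.9):
    """Exponentially decayed variance of a stream of values.

    Sketch of the variance-based confidence idea: measure how much the
    model's beliefs (e.g. predicted probabilities) fluctuate over time.
    `decay` is a hypothetical forgetting factor, not taken from the paper.
    With decay=1.0 this reduces to the ordinary (population) variance.
    """
    mean, var, weight = 0.0, 0.0, 0.0
    for x in samples:
        weight = decay * weight + 1.0       # decayed effective sample size
        delta = x - mean
        mean += delta / weight              # decayed running mean
        var = decay * var + delta * (x - mean)  # decayed sum of squared deviations
    return var / weight if weight else 0.0

def self_information_loss(probs_assigned):
    """Average self-information loss: -log p for the probability the model
    assigned to each outcome that actually occurred.
    Lower loss suggests higher confidence in the model.
    """
    return sum(-math.log(p) for p in probs_assigned) / len(probs_assigned)
```

A stable belief stream yields a low decayed variance, and a model that consistently assigns high probability to the observed outcomes yields a low self-information loss; either quantity can then be mapped to a confidence value for weighting or selecting among models.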
Computational Intelligence Group, Technical University of Clausthal
Human Media Interaction Group, University of Twente
Computer Science Group, University of Gdansk

Last modified 2002-12-18