The design of MOO agents: Implications from a study on multi-modal collaborative problem solving.

1. Introduction: Types of Agents

The notion of agent has been present, more or less explicitly, in Intelligent Learning Environments for some time. Similar types of agents are now being investigated in Internet-based systems. We classify several types of Internet agents and present several aspects of interaction in task-based collaboration to which a participant must be attuned. These aspects emerge from an empirical study of multi-modal human collaboration. Attending to these aspects will be crucial not only for agents participating in a collaboration, but also for other types of agents. We propose a new type of agent, an observer, that computes statistics regarding collaboration.

Sub-agents. These autonomous software entities, which "carry out tasks for the user", are deemed by many to be the next revolution in computing. The World Wide Web (WWW) provides an evolving environment of voluminous amounts of unorganized information. Information sources appear and disappear, making information filtering a crucial capability of any Internet-based information gathering system. To date, most research on Intelligent Information Agents has dealt with a user interacting with a single agent that has general knowledge and is capable of performing a variety of user-delegated information finding tasks (e.g., Etzioni & Weld, 1994; Lieberman, 1995). Other examples of tasks accomplished by such agents are e-mail filtering (Lashkari et al., 1995), room reservation and calendar management (Bocionek, 1995), or finding people sharing the same interests (Foner, 1996). Many of these systems use machine learning techniques to non-intrusively infer user preferences. Distributed multi-agent systems have been used to overcome the limits of single-agent approaches: (1) a single general agent needs an enormous amount of knowledge to deal effectively with user information requests that cover a variety of tasks; (2) a centralized information agent constitutes a processing bottleneck: the required processing would overwhelm a single agent, which would also constitute a "single point of failure"; (3) a single agent needs considerable reprogramming to deal with the appearance of new agents and information sources in the environment.

Co-agents. In symmetrical systems (Dillenbourg & Baker, 1996), human and artificial agents can perform the same actions. The idea of a co-learner, originally introduced as an alternative to learner modelling (Self, 1986), has been re-used within various learning paradigms: collaborative learning (Chan & Baskin, 1988; Dillenbourg & Self, 1992), competitive activities, reciprocal tutoring[1] (Chan, 1996), learning by teaching, and teacher training (Ur & VanLehn, 1995).

Super-agents. Most intelligent learning environments include agents (coach, tutor, expert, ...) which provide solutions and monitor the actions of users. In multi-user learning environments, super-agents have to monitor the interactions among users. They can be teachers who analyze interactions to detect when they should intervene, judges or referees who intervene only in case of conflict, and so on. For instance, the Belvedere system (Suthers et al., 1995) is dedicated to supporting critical discussions of science issues. Students build a diagrammatic representation of arguments. They can invoke an advisor who points to specific parts of the diagram and proposes ways of extending or revising it. Some argument structures are expected to be more effective than others in stimulating critical discussions among students. In COSOFT (Hoppe, 1995), a super-agent compares its models of on-line students and invites students who lack specific skills to collaborate with students who possess them. Critic systems can also be viewed as super-agents, since they evaluate the user's work, but without a pedagogical perspective: "Critics do not necessarily solve problems for users. The core task of critics is the recognition and communication of deficiencies in a product to the user." (Fischer et al., 1991).

Observers. We propose a fourth category of agents, observers, i.e. agents who collect information regarding users' interactions, aggregate these observations into high-level indicators, and display these indicators to a human coach or to the users themselves. One cannot a priori set up collaborative settings which guarantee effective collaboration; hence a coach must monitor the interactions (Dillenbourg et al., 1995). Observers would support this monitoring process for both human and artificial coaches. These indicators could also be shown to the subjects: we intend to study how a pair could use them to regulate its interactions. This brings us close to another approach, in which shared external representations are designed to replace intelligent system advice. For instance, Ploetzner et al. (1996) study problem solving in physics and use a computerized tool which allows subjects to collaboratively build concept maps. These external representations have been designed to help students link qualitative and quantitative knowledge.
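As a minimal sketch of what such an observer might compute, the fragment below aggregates a log of interaction events into one high-level indicator, a participation-balance score. The event structure and the indicator itself are illustrative assumptions for this sketch, not part of the study:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    # One logged action in the shared environment (fields are hypothetical)
    actor: str   # which participant acted, e.g. "A" or "B"
    mode: str    # modality of the action, e.g. "dialogue" or "whiteboard"

def participation_balance(events):
    """Aggregate raw events into a high-level indicator:
    1.0 means both partners contribute equally,
    0.0 means a single partner does everything."""
    counts = Counter(e.actor for e in events)
    if len(counts) < 2:
        return 0.0
    return min(counts.values()) / max(counts.values())

# Example log: three events by A, one by B -> balance of 1/3
log = [InteractionEvent("A", "dialogue"), InteractionEvent("B", "whiteboard"),
       InteractionEvent("A", "dialogue"), InteractionEvent("A", "whiteboard")]
print(round(participation_balance(log), 2))  # -> 0.33
```

An observer of this kind would recompute such indicators as events arrive and display them to the coach or to the pair; the point is only that raw interaction data is reduced to a small number of interpretable values.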


[1] In reciprocal tutoring, the learners alternately play the role of tutor or tutee (Palincsar & Brown, 1984).

The design of MOO agents: Implications from a study on multi-modal collaborative problem solving - 21 MARCH 1997
