A consideration of the relationship between miscommunication and learning leads to a perspective very different from one concerned with communicative efficiency alone. If one simply measures the effort expended by two actors to successfully communicate a message, then the presence of noise can only be detrimental: the necessity for repair acts simply increases the cost of communication, and the ideal communication process is free of noise. While the principle of least collaborative effort already provides a rationale for why some repair acts may be more efficient than spending the resources to perform only perfectly interpretable communicative acts, in a learning context the goal is not to minimize cognitive effort at all, but to maximize learning.
When miscommunication forces an agent to rephrase, explain, or justify, the agent also performs more processing of her own knowledge. Studies of the self-explanation effect have shown that this extra processing leads to improved knowledge [Chi et al. 1989]. Most attempts to understand the effects of collaborative learning focus on the various mechanisms of knowledge elicitation [Dillenbourg et al. 1995]. Our long-term research goal is to transpose these psychological observations into useful design principles for AI systems. This transfer cannot be done within traditional architectures which strictly separate a dialogue interface layer from the core reasoning engine. In order to understand why grounding mechanisms have cognitive effects, we have to model how grounding acts impact pure cognition [Dillenbourg, to appear]. For example, rephrasing is rarely completely neutral: it often induces a slight change in the claim or hypothesis being made. The same holds for rephrasing requests, as in the notebooks example in Table 4.
Consequently, when designing artifacts to support collaborative learning, the aim is not to suppress any chance of miscommunication (even if that were possible), but to provide agents with the resources necessary to benefit from communication repairs. These resources can be external representations to which both agents can refer and use to check the extent to which they really agree [Roschelle 1990], or structured communicative interfaces which support negotiation [Baker and Lund 1996]. We also consider how the resources in our domain (the shared whiteboard systems and various advanced MOO commands, as described in section 4) are used by the agents to repair dialogue in collaborative problem solving. However, providing external resources (such as the notebook or the whiteboard) does not eliminate miscommunication, since subjects may also misunderstand the information available in those resources.
Future work will move towards both objectives: explicating the motivating factors involved in grounding, and investigating the pedagogical utility of miscommunication and the grounding process. We will proceed towards these goals from two directions. First, in the short term, we are starting several sets of more focused experiments to analyze the importance of certain of these aspects (such as the grounding of MOO position or of whiteboard representation codes), in order to get a better sense of the costs and benefits of some of the information types. We also hope to investigate the role of representational tools in making collaboration more efficient.
Second, we plan to design an agent that can collaborate in a similarly multi-modal fashion, using grounding mechanisms functionally equivalent to those observed here. Such an agent will allow us to experiment more directly with arbitration strategies and with the relative importance of grounding and other actions in collaboration.