
7. Experiments

We have conducted preliminary experiments with a few subjects. These experiments were too preliminary to support conclusions about the efficiency of MEMOLAB. Before extending the experiments to samples large enough for statistical inference, we must first incorporate the lessons of these preliminary results into the system. Experimentation nevertheless revealed several difficulties in the learner-expert interaction, and these matter because this interaction is a key factor in the distribution of roles among ETOILE's agents.

When the learner creates an event on MEMOLAB's workbench, the expert tries to map this event onto the task goals of his own problem-solving activity. In other words, the expert attempts to understand what the learner is doing. This interaction is illustrated in Figure 9. A bijective link between objects on the screen and task goals was implemented to keep the expert efficient, but it raises problems when an event could be attached to several task goals. If the expert has planned that Group-1 will wait for 10 seconds and Group-2 for 40 seconds, the learner may very well have in mind that Group-1 will wait for 40 seconds and Group-2 for 10. We use the term `goal commutativity' to express the fact that the allocation of parameters to experimental samples is arbitrary, and hence that the expert must be able to permute his goals, as the sketch below illustrates.
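The following Python sketch illustrates goal commutativity as we have described it. All names here (Goal, Event, match_with_commutativity) are our own invented illustrations, not MEMOLAB's actual implementation, which is not shown in this chapter.

    from dataclasses import dataclass
    from itertools import permutations

    @dataclass(frozen=True)
    class Goal:
        group: str   # experimental sample, e.g. "Group-1"
        delay: int   # retention interval in seconds

    Event = Goal     # in this toy model a learner event has the same shape

    def match_with_commutativity(event, expert_goals):
        """Attach a learner event to one of the expert's task goals.
        Since the allocation of delays to groups is arbitrary, the expert
        permutes the delays across his goals before giving up the match."""
        delays = [g.delay for g in expert_goals]
        for permuted in permutations(delays):
            candidates = [Goal(g.group, d) for g, d in zip(expert_goals, permuted)]
            for goal in candidates:
                if goal == event:
                    return goal    # event understood under this permutation
        return None                # no mapping found: ask the learner (Figure 9)

    # The situation described above: the expert planned 10s/40s, while
    # the learner had the opposite allocation in mind.
    plan = [Goal("Group-1", 10), Goal("Group-2", 40)]
    print(match_with_commutativity(Event("Group-1", 40), plan))
    # -> Goal(group='Group-1', delay=40), found by permuting the delays

Under the identity permutation the event does not fit the plan; the match succeeds only once the two delays have been swapped, which is exactly the permutation of goals that commutativity requires.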

However, mapping student actions onto task goals raised several issues. Firstly, learners do not necessarily have goals in mind; rather, they start to think about goals when goals are proposed to them. This is not necessarily a drawback. The second issue concerns the way in which the expert describes his goals to the learner. For instance, in Figure 9, the expert's goal behind `choice 1' is actually that Group-2 waits for 40 seconds, but it would make no sense to propose a goal concerning Group-2 when referring to an event concerning Group-1. Hence, the expert generalizes his specific goal in order to cover the learner's intentions. This led to ambiguous situations in which the expert suggested different goals that were actually expressed by the same sentence, generating misunderstanding between the learner and the expert; the sketch after this paragraph illustrates the problem.
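Continuing the hypothetical sketch above (generalize is an invented helper, and the toy Goal is repeated for completeness), goal generalization can be pictured as abstracting the group away from a specific goal; the ambiguity arises because two distinct goals are then expressed by the same sentence:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Goal:              # same toy Goal as in the previous sketch
        group: str
        delay: int

    def generalize(goal):
        """Drop the group, so that the resulting sentence can also refer
        to an event concerning another group."""
        return f"wait for {goal.delay} seconds"

    # Two distinct specific goals...
    g1 = Goal("Group-1", 40)
    g2 = Goal("Group-2", 40)

    # ...are expressed by the same sentence once the group is dropped,
    # which is the source of misunderstanding described above.
    assert generalize(g1) == generalize(g2)
    print(generalize(g1))    # -> wait for 40 seconds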

Figure 9: When the expert cannot really attribute the learner's actions to one of his own goals, he asks the learner what his goal was.

However, these difficulties gave us the opportunity to observe instances of `social grounding'. Social grounding is the mechanism through which each participant in a dialogue elaborates the belief that his partner has understood what he said, to a level sufficient for continuing the conversation (Clark & Brennan, 1991). In the protocols we encountered utterances (produced in French and translated by us) showing that the learner monitors the system's understanding of him:

"He supposes that I wanted the subjects to do something during 40 seconds. I wanted the subjects to do noth
ing."
"He does not understand why I did that? "
"Precisely, what I want is that the subjects do nothing, I think that it is what he thinks"
"I am sure he will tell me again that I am wrong"
"He will ask me what I wanted to do. And the..., since... he'll base himself on wrong things since this is not what 
I want to do."
In other words, the learner-expert interaction is not simply based on one-way diagnosis but on mutual diagnosis, with nested levels of beliefs: the expert diagnoses the learner, the learner diagnoses the expert, the learner diagnoses the expert's diagnosis, and so on. From a theoretical viewpoint, this regression could continue indefinitely: the diagnosis of the diagnosis of the diagnosis... In human-human dialogue, however, the cascade remains short, because human conversation is rich in resources for detecting (e.g. facial expressions) and repairing (e.g. pointing) communication breakdowns. As soon as the level of `diagnosis of a diagnosis' is reached, one speaker can repair the dialogue. In the fictitious example below, speaker 1 repairs speaker 2's understanding of his first utterance.

Speaker 1:    "Numbers ending with 4 or 6 are even numbers"
Speaker 2:    "778 is also an even number"
Speaker 1:    "I did not say that those ending with 8 are not even"
When a speaker refines a previous statement to correct a misunderstanding, quite often he does not simply repeat what he said, but reinterprets it from the point of view of his partner. The fact that the learner reinterprets his own actions within the expert's conceptual framework corresponds to a learning mechanism emphasized in the socio-cultural approach: participation changes understanding (Newman, 1989; Rogoff, 1991). The benefits of collaborative work seem to be grounded in the mechanisms engaged to maintain a shared understanding of the problem (Roschelle, 1990).

Through these results, cognitive diagnosis appears as a mutual and reciprocal process. Instead of one partner building a diagnosis of the other, we have two collaborators who try to build a mutual understanding (Bull, Pain & Brna, 1993). A similar evolution can be observed in work on explanation: an explanation is no longer viewed as a structure built by one partner and delivered to the other, but as a new structure jointly constructed by the two partners during their interaction (Baker, 1993; Cawsey, 1993). This evolution of diagnosis and explanation techniques accompanies the evolution of cognitive science towards the idea of distributed cognition (Resnick et al., 1991; Schrage, 1991; Dillenbourg, to appear).
