I was an elementary school teacher, then decided to explore the wider world: I studied educational and psychological sciences in Belgium (Mons) and did my PhD in artificial intelligence in the UK (Lancaster). I am now something like an assistant professor at TECFA. My research mainly concerns the use of AI techniques in educational software. Over the last five years, I have mainly worked on collaborative learning, both human-machine collaboration and machine-mediated collaboration...

Dr. Pierre Dillenbourg

Maître d'enseignement et de recherche
Curriculum Vitae (postscript)
Current Research
Social Grounding in Computer Supported Collaborative Problem Solving
Intelligent Learning Environments
Collaborative learning: humans and machines
(ESF LHM Programme)
Old papers
Current work
Réalisation de logiciels interactifs
UVE7.2a / UV21
Interaction Personne-Machine
Réalisation de Logiciels Educatifs

(Some) Publications

  • Pierre Dillenbourg (1993), Multimédia et Formation: toujours les mêmes erreurs, CBT Forum, 2/93.

  • Pierre Dillenbourg and John A. Self (1992), A computational approach to socially distributed cognition, European Journal of Psychology of Education, vol. VII, no. 4, 252-373.

    Abstract. In most Interactive Learning Environments, the human learner interacts with an expert in the domain to be taught. We explored a different approach: the system does not know more than the learner, but learns by interacting with him. A human-computer collaborative learning (HCCL) system includes a micro-world in which two learners, the human learner and a computerized co-learner, jointly try to solve problems and learn. This paper presents the foundations of this artificial co-learner. The collaboration between learners is modelled as 'socially distributed cognition' (SDC). The SDC model connects three ideas: i) a group is a cognitive system, ii) reflection is a dialogue with oneself, iii) social processes are internalised. The key has been to find a computational connection between those ideas. The domain chosen for illustration is argumentation concerning how changes to an electoral system affect the results of elections. This argumentation involves a sequence of arguments and their refutations. The basic principle is that the learners 'store' a trace of this argumentation (a dialogue pattern) and 'replay' it individually later on. The verbs 'store' and 'replay' do not refer to a simple 'record and retrieve' process. Storage is implemented as the incremental and parameterised evolution of a network of arguments, here called a 'dialogue pattern'. The learning outcome is a structuration of knowledge (rules) into situation-specific models, used to guide reasoning. We conducted experiments in two settings: with a human and an artificial learner, or with two artificial learners. The common finding of these two experiments is that the SDC model generates learning effects provided that the discussion is intensive, i.e. that many arguments are brought into the dialogue. The importance of this variable also appears in Hutchins' (1991) modelling of the evolution of the confirmation bias in groups.
    It is argued that computational models are heuristic tools, allowing researchers to isolate variables for designing empirical studies with human subjects.

  • Pierre Dillenbourg and John A. Self (1992), A Framework for Learner Modelling, Interactive Learning Environments, 2 (2), 111-137.

    Abstract This paper presents a comprehensive conceptual framework and notation for learner modelling in intelligent tutoring systems. The framework is based upon the computational distinction between behaviour, behavioural knowledge, and conceptual knowledge (in a 'vertical' dimension) and between the system, the learner, and the system's representation of the learner (in a 'horizontal' dimension). All existing techniques for learner modelling are placed within this framework. Methods for establishing the search space for learner models and for carrying out the search process are reviewed. The framework makes clear where particular learner modelling techniques are focussed and shows that they are often complementary since they address different parts of the framework.

  • Pierre Dillenbourg (1989), The Design of a Self-Improving Tutor: Proto-TEG, Instructional Science, Vol 18, no.3, 193-216.

    Abstract. This paper presents the principles and the architecture of PROTO-TEG, a self-improving tutor in geometry. This system is able to discover the criteria useful for selecting the didactic strategies it has at its disposal. These criteria are expressed as characteristics of the student model. They are elaborated by comparing student model states recorded when a strategy was effective and those recorded when the same strategy was not effective. This comparison is performed by machine learning methods, more precisely by learning concepts from examples. An empirical experiment was performed in order to assess the designed self-improving functions and conditions were discovered for five of the nine didactic strategies. However, this new knowledge did not lead to PROTO-TEG being more efficient in terms of student performance.

  • Pierre Dillenbourg (in press), Distributing cognition over humans and machines. To appear in: De Corte, Mandl, Glaser & Vosniadou (Eds.), International Perspectives on the Psychological and Educational Foundations of Technology-Based Learning Environments (ASI Series).

    Abstract This chapter considers computer-based learning environments from a socio-cultural perspective. It relates several concepts from this approach with design principles and techniques specific to learning environments. We propose a metaphor intended to help designers of learning environments to make sense of system features within the socio-cultural perspective. This metaphor considers the software and the learner as a single cognitive system, variably distributed over a human and a machine.

  • Pierre Dillenbourg (1992), The Language Shift: a mechanism for triggering metacognitive activities, in P. Winne and M. Jones (Eds.), Adaptive Learning Environments, Berlin: Springer Verlag, 287-315.

    Abstract This chapter presents a metaphor for designing educational computing systems (ECSs) that progressively transfer to the learner an increasing amount of control in the problem solving process. The continuous variation of the learner's control is segmented into a few levels. The transition from some level i to the next higher level i+1 results from the internalization of the concepts necessary to control the activities at level i. The use of reflection tools is proposed for supporting the internalization process. These reflection tools reify the control features of the learner's activities, i.e. they make concrete some abstract features of her behaviour. The next level is reached when the learner is able to use aspects reified at level i to interact with the system at level i+1. The same control concept is hence used first as a description language (by the system) at some level i and then as a command language (by the learner) at level i+1. This language shift mechanism elevates the learner's level of control and her level of abstraction. It is described by analogy with an elevator that would move inside a pyramid: a floor of the pyramid corresponds to some control level. We use a formal notation to look inside the language shift mechanism and relate it to various psychological theories and current ECSs.

  • Pierre Dillenbourg, Patrick Mendelsohn & Daniel Schneider (1994), The distribution of pedagogical roles in a multi-agent learning environment, in R. Lewis & P. Mendelsohn (Eds.), Lessons from Learning (pp. 199-216). Amsterdam: North-Holland.

    Abstract We describe a learning environment (MEMOLAB) that illustrates the distribution of roles among several agents. The learner solves problems in interaction with an expert, i.e. an agent who is able to solve the same problems through the same interface. The degree of assistance provided by the expert is tuned by another agent, the tutor, which monitors the interaction. MEMOLAB includes several tutors corresponding to various teaching styles. These tutors are selected by their superior, called 'the coach'. This distribution of roles between the agents has been conceived in such a way that some agents (the tutors and the coach) are not directly concerned with the specific teaching domain and hence can be reused to build other learning environments. The set of domain-independent components constitutes ETOILE, an Experimental TOolbox for Interactive Learning Environments. Its originality is that authors do not build a software application by writing questions and feedback, but by designing domain-specific agents that will interact with the agents provided by the toolbox.

  • Pierre Dillenbourg, Michael Baker, Agnes Blaye and Claire O'Malley (to appear), The Evolution of Research on Collaborative Learning (postscript file). In Spada and Reimann (Eds.), Learning in Humans and Machines.

    Abstract For many years, theories of collaborative learning tended to focus on how individuals function in a group. More recently, the focus has shifted so that the group itself has become the unit of analysis. In terms of empirical research, the initial goal was to establish whether and under what circumstances collaborative learning was more effective than learning alone. Researchers controlled several independent variables (size of the group, composition of the group, nature of the task, communication media, and so on). However, these variables interacted with one another in a way that made it almost impossible to establish causal links between the conditions and the effects of collaboration. Hence, empirical studies have more recently started to focus less on establishing parameters for effective collaboration and more on trying to understand the role which such variables play in mediating interaction. In this chapter, we argue that this shift to a more process-oriented account requires new tools for analysing and modelling interactions.

  • Pierre Dillenbourg (1995) The role of artificial intelligence techniques in training software CBT Forum 1/95, pp. 6-10.

    Abstract This paper does not attempt to review the large number of ideas, techniques or systems developed during the last 15 years of research in 'artificial intelligence and education' (AI&Ed). The reader interested in this topic can read Wenger's synthesis (1987), which is not recent but gives an excellent overview of the ideas and principles developed in AI&Ed. We focus here on a body of work which is now rather stable and constitutes the core part of AI&Ed. It can be summarized in three points:

    1. The major contribution of AI to educational and training software is the possibility to model expertise. This expertise is the main feature of AI-based courseware: the system is able to solve the problems that the learner has to solve. The system is knowledgeable in the domain to be taught. Of course, other computing techniques can produce a correct solution. The interest of AI techniques lies less in their ability to produce a correct solution than in the way that this solution is constructed. For instance, some complex AI systems have been designed to model the resolution of simple subtractions such as '234-98', while any computer language can produce the correct solution (Burton & Brown, 1982).
    2. This modelled expertise enables the system to conduct interactions that could not be conducted if the system worked with pre-stored solutions. Since artificial intelligence was originally intended to reproduce human intelligence, the techniques available for modelling expertise are to some extent human-like. Actually, the educational use of AI techniques does not require that these techniques be a perfect image of human reasoning. More modestly, it requires that AI techniques support expert-learner interactions during problem solving. Some degree of similarity may be necessary if we want the expert to talk about its expertise in a way which can be understood by the learner. For instance, neural network techniques are considered a more detailed account of human reasoning than the symbolic techniques used in expert systems. Nevertheless, the use of neural networks in courseware raises the interaction issue: how does the system communicate with the learner about the knowledge encompassed in each of its nodes? From the courseware perspective, the quality of AI techniques is not their degree of psychological fidelity but the extent to which they support interactions which are interesting from a pedagogical viewpoint.
    3. The types of interactions supported by AI techniques are important for some learning objectives. These interactions are especially relevant when the goal is to acquire complex problem solving skills. Other learning objectives can be pursued with simpler interaction techniques, like multiple-choice questions. Since the development of AI-based software is more costly than standard courseware (especially courseware designed with advanced authoring tools), these techniques should be used only when they are really required.
    This paper explains these three points, especially the link between the model of expertise and the types of interactions. This link is bi-directional: the model supports some interactions, but, conversely, the targeted interactions impact the way expertise is represented in the system.

    (Almost) the same paper in French:

  • Pierre Dillenbourg and Silvere Martin-Michiellot (1995) Le role des techniques d'intelligence artificielle dans la formation. CBT Forum 1/95, pp. 6-10.

  • Pierre Dillenbourg (1994), Evolution épistémologique en EIAO, 1 (1), pp. 39-52.

    Abstract. The concept of knowledge has evolved considerably in recent years: knowledge is no longer perceived as a substance, but as a capacity to interact. I illustrate this epistemological evolution with two examples: explanation and learner modelling. Initially considered as the transmission of a trace of the expert's reasoning, explanation is now seen as the result of a joint construction. Similarly, diagnosis is no longer considered a neutral snapshot of the learner's knowledge, but the result of an interactive process of mutual understanding.

  • Pierre Dillenbourg & François Lombard (in press), Critique du langage-auteur "Authorware", Revue EPI.

    Abstract. This article presents some strengths and weaknesses of the authoring language Authorware Professional, produced by Macromedia. It is a high-level language specialised for the design of educational software, in the lineage of languages such as Tutor, Pilot and TenCORE. Authorware is more specialised than systems such as Hypercard, Visual Basic or Toolbook, which were not designed specifically for developing educational applications. It also differs from products such as Director, which are intended for creating presentations: the latter perform better in terms of visual and sound effects (especially 3D animations), but are less rich in terms of interaction. Authorware runs on Mac and under Windows, with an authoring interface that is practically identical on both platforms. An authoring language must reconcile three partially contradictory characteristics: ease of use, productivity and power. We illustrate these three characteristics through different aspects of Authorware.

  • Pierre Dillenbourg & Patrick Jermann (in press), Le paradoxe de la machine 'sociale', INTERFACE. Also available in HTML.

    Abstract. According to gloomy forecasts, the computer was going to dehumanise classrooms and impoverish social relations among pupils. These dark predictions are contradicted by current practice. On the one hand, telematics, which is intrinsically dedicated to inter-user relations, is booming. Some experiments using telematics in education have shown that relations mediated by the narrow channel of the cable were no "colder" than face-to-face interactions [1]. On the other hand, educational software, once conceived as a tool for individualisation, is now designed to stimulate interactions between users. It is this paradox that we analyse in this article. But let us take the story back to its beginning...

    1996 papers

  • Pierre Dillenbourg (1996) From mutual diagnosis to collaboration engines: Some technical aspects of distributed cognition (HTML version) Talk to be presented at the 7th Conference on Artificial Intelligence and Education. Washington, August 1995 (postscript version here). A transcript of this will appear in the Journal of AI in Education.

    Abstract This contribution is based on the development and the evaluation of two learning environments in which the learner had to collaborate with the machine. This experience revealed that existing knowledge-based techniques are not appropriate to support 'real' collaboration, i.e. to cover a range of flexible, opportunistic and robust interactions which enable two agents to build a shared understanding of a problem. I argue for the design of collaboration engines which integrate dialogue models at the rule instantiation level. The role of dialogue models is not simply to improve the interface. The challenge is to develop models which account for the role of dialogue in problem solving and learning. Such models would reflect current theories on 'distributed cognition', one of the approaches placed under the 'situated cognition' umbrella. Most of the implications of these theories for the design of interactive learning environments (ILEs) that have been discussed so far concern the choice of methods (e.g. apprenticeship - Newman, 1989 - or project-oriented group work - Goldman et al, 1994) or software engineering (e.g. participatory design - Clancey, 1993). I address here the implications of these theories at a more technical level. I structured the argument as a discussion with myself, which is quite natural within the distributed cognition approach, where interacting agents are viewed as forming a single cognitive system.

  • Pierre Dillenbourg and Michael Baker (1996), Negotiation Spaces in Human-Computer Collaborative Learning. To appear in the Proceedings of COOP'96 (Juan-Les-Pins, France, June). (Available in postscript.)

    Abstract. This paper compares the negotiation processes in different learning environments: systems where an artificial agent collaborates with the human learner, and systems where the computer supports collaboration between two human users. We argue that, in a learning context, collaboration implies symmetry between agents at the design level and variable asymmetry at the interaction level. Negotiation is described as a collection of different spaces defined by seven dimensions: mode, object, symmetry, complexity, flexibility, systematicity and directness. We observed that human-human negotiation jumps between spaces, switching easily between modes of negotiation and connecting the various objects of negotiation, while the 'disease' of human-computer collaborative systems was to remain fixed within one negotiation space.

  • Pierre Dillenbourg, David Traum & Daniel Schneider (1995), Grounding in Multi-modal Task-Oriented Collaboration. Paper accepted at the EuroAI&Education Conference (Lisbon, Sept. 96).

    Abstract This paper describes the first results of a series of experiments on multi-modal computer-supported collaborative problem solving. Pairs of subjects have to solve a murder story in a MOO environment, also using a shared whiteboard for communication. While collaboration is often described as the process of building a shared conception of the problem, our protocols show that the subjects actually create different shared spaces. These spaces are connected to each other by a functional relationship: some information in space X has to be grounded in order to ground information in space Y. The reason to dissociate these spaces is that the grounding mechanisms are different, because the nature of the information to be grounded is itself different. The second observation concerns the modality of grounding. We expected that subjects would use drawings to ground verbal utterances. Actually, they use three modes of interaction (dialogue, drawing, but also action in the MOO environment) in a more symmetrical way. Grounding is often performed across different modes (e.g. information presented in dialogue is grounded by an action in the MOO).

    My private life...

    Getting in touch ...

            postalAddress         TECFA, FPSE
                                  Université de Genève
                                  9, ROUTE DE DRIZE, BAT D
                                  CH-1227 CAROUGE
            telephoneNumber       (+41) 22 705 96 93
            electronic mail       pdillen@divsun.unige.ch
            roomNumber            D 309