Creating courseware with a traditional authoring tool means preparing a scenario. This scenario is composed of activities, often based on the canonical schema: question - answer - feedback. The author prepares each question, each response-analysis process and each feedback item, and specifies in a detailed script how these activities link to each other. In ETOILE, the curriculum grains are much larger. The curriculum is a sequence of microworlds plus a set of domain-specific agents. Those agents interact both with the learner and with the pedagogical agents (tutors) included in the toolbox. The originality of ETOILE lies here: designing a system is not preparing questions; it is adding new agents to those included in the toolbox and creating the context (the microworld) where those agents and the learner interact.
The backbone of ETOILE is the curriculum. All knowledge has to be plugged into this structure. The curriculum is a network of goals, connected by prerequisite links. Each goal is an object with four slots:
- The problems. A goal defines a skill by specifying a set of problems that the learner must be able to solve.
- The interface. Learners solve problems through a goal-specific interface. This interface is basically one or more windows with a set of commands (menus, buttons, moves, ...).
- The domain expert(s). An expert for goal-X is a rulebase able to solve any problem stored in goal-X.
- The hypertext. A goal includes a hypertext chapter that the learner may explore freely and where he can find any information relevant to the skills he must acquire.

This representation of goals results from our design principles. Each goal has its own interface and expert, a feature that supports the particular architecture of MEMOLAB, especially the language shift principle. However, the author may also choose to use the same expert and/or the same interface for several goals. The designer's task is not automated: an ILE is implemented with ETOILE through the steps described below.
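To make the goal structure above concrete, here is a minimal sketch. ETOILE itself is built in Common Lisp (its interfaces use CLIM), so this Python version is only a language-neutral illustration; all class, slot and goal names (`Goal`, `ready_goals`, "encoding", "rehearsal") are hypothetical stand-ins, not the actual ETOILE classes.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """Hypothetical stand-in for ETOILE's goal objects: four slots plus links."""
    name: str
    problems: list = field(default_factory=list)       # problems defining the skill
    interface: str = "default-frame"                   # goal-specific application frame
    experts: list = field(default_factory=list)        # rulebase(s) solving the problems
    hypertext_entry: str = ""                          # entry node of the hypertext chapter
    prerequisites: list = field(default_factory=list)  # prerequisite links to other goals

def ready_goals(curriculum, mastered):
    """Goals not yet mastered whose prerequisites have all been mastered."""
    return [g for g in curriculum
            if g not in mastered and all(p in mastered for p in g.prerequisites)]

# A two-goal curriculum fragment: `encoding' is a prerequisite of `rehearsal'.
g1 = Goal("encoding")
g2 = Goal("rehearsal", prerequisites=[g1])
print([g.name for g in ready_goals([g2, g1], mastered=[])])    # ['encoding']
print([g.name for g in ready_goals([g2, g1], mastered=[g1])])  # ['rehearsal']
```

The prerequisite slot is what makes the curriculum a network rather than a fixed script: which goal is proposed next depends only on what the learner has already mastered.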
In summary, the author's job includes three kinds of tasks: (i) creating subclasses and instances of classes defined in ETOILE, (ii) building the interface, its objects and commands, and (iii) writing rulebases. The first task is easy, the second is completely in the developer's hands, but the third is more complex and requires a good understanding of how our inference engine works. We therefore developed some tools that support the design of expert rulebases. The steps are as follows:
- The developer defines the curriculum by creating a set of instances of the class `goals' and by specifying prerequisite relationships among those goals (pointers from one goal to another).
- For each goal, the developer creates a set of instances of the class `problems'. He may create a subclass of problems (e.g. "equations") into which he adds the domain-specific information needed by the expert or the simulation. For each problem, the developer specifies its relative difficulty (with respect to the other defined problems) by an integer between 1 and 5.
- The developer creates the hierarchy of domain-specific objects (classes and instances) that will be manipulated by the learner and the expert during problem solving.
- For each goal, the developer defines one or more application frames, i.e. windows wherein the learner will solve problems. The application frame is a concept from the Common Lisp Interface Manager (CLIM). It gathers all the information that defines a situation, i.e. the window, the command tables, etc. Learner commands are stored in command tables. Two goals may share the same frame. Some components of a frame may be large: for instance, the simulation used in MEMOLAB, though it is a large piece of code, is only a subcomponent shared by the microworlds.
- For each goal, the developer creates an `expert', i.e. a rulebase that is able to solve the problems stored in that goal. Within the framework of ETOILE, the conditions of the expert rules must refer to the problem state displayed on the screen, and the conclusions must include commands that change something on the problem display (commands that belong to the learner command table). If, in the particular domain, the same problem can be solved by different approaches that cannot easily be covered by a single rulebase (e.g. multiple viewpoints), the developer should define several experts for the same goal. If several experts are available, the active expert will be selected by the coach with the same criteria he uses for selecting tutors (efficiency and preferences). However, if the expertise of one expert is qualitatively or quantitatively superior to that of another, then these experts should be associated with different goals. We aim to extend this architecture to include other kinds of domain-specific agents, such as computerized collaborative learners.
- The developer writes one or more hypertexts. For each goal, he specifies a hypertext node to be considered as the entry point for displaying theory related to that goal. The hypertexts must be structured in multiple granularity layers, in such a way that the learner can deepen some points and pass quickly over others. The hypertext should be integrated with the rest of the application wherever the learner may need information in a hypertext format. This is done by including `external buttons' (that directly open a hypertext to a specific node) at various places in the application frames (step 4).
- The developer anticipates the learner's most plausible mistakes and prepares `repair rules' that will enable the expert to correct them. He associates those repair rules with hypertext nodes in order to provide the learner with information relevant to his mistake.
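The expert's structure described in the steps above can be pictured with a toy sketch: a rulebase whose condition parts inspect the problem state shown on screen and whose conclusion parts issue commands from the learner's own command table. This is a Python illustration under our own assumptions (ETOILE's rulebases run on its Common Lisp inference engine); the problem, the `swap` command and the rule are all hypothetical. Here the "problem" is an unordered list and the only learner command is an adjacent swap.

```python
def swap(state, i):
    """A learner command: exchange two adjacent items on the display."""
    state[i], state[i + 1] = state[i + 1], state[i]

def find_applicable(state):
    """One rule: IF two adjacent items are out of order THEN issue swap."""
    for i in range(len(state) - 1):
        if state[i] > state[i + 1]:          # condition reads the displayed state
            return lambda: swap(state, i)    # conclusion is a learner command
    return None

def run_expert(state):
    """Fire rules until no rule applies, i.e. the problem is solved."""
    while (action := find_applicable(state)) is not None:
        action()
    return state

print(run_expert([3, 1, 2]))  # [1, 2, 3]
```

Because the expert acts only through the learner command table, its solution trace is directly comparable to the learner's own actions, and a second expert for the same goal would simply be another rule set firing over the same display.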
As we said at the beginning of this report, we did not intend to produce a full authoring interface. We simply added a few functions that enable the author to activate or deactivate an agent (tutor, coach or expert), to reset its rulebase, and to test an expert non-interactively (i.e. to check whether an expert is able to solve a problem without any interaction with the learner). We also created an interface that allows the developer to follow the execution of a rulebase step by step. This tool is illustrated by Figure 8.
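The kind of step-by-step monitoring just mentioned can be sketched as follows. This is a hypothetical Python illustration, not the actual tool (which operates on the Common Lisp rulebases): the engine fires one rule at a time and hands control back after each firing, so the developer can see which rule fired and what it changed.

```python
def trace_rulebase(rules, state):
    """Generator: fire one applicable rule at a time, yielding a trace entry."""
    fired = True
    while fired:
        fired = False
        for name, cond, action in rules:
            if cond(state):
                action(state)
                yield name, dict(state)   # snapshot of the state after each firing
                fired = True
                break                     # restart the scan, as in forward chaining

# Toy rulebase with a single rule that decrements a counter to zero.
rules = [("dec", lambda s: s["n"] > 0, lambda s: s.update(n=s["n"] - 1))]
for step, (rule, snapshot) in enumerate(trace_rulebase(rules, {"n": 3}), 1):
    print(step, rule, snapshot)           # e.g. "1 dec {'n': 2}"
```

Pausing between firings is what lets the author check whether a rule's conditions match the intended situations before handing the rulebase to real learners.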
Figure 8: The `Authorview' window enables the developer to monitor the execution of a rulebase and the activation of the rulebases of various agents.