

FICHE-LECTURE
ARTIFICIAL INTELLIGENCE and TUTORING SYSTEMS
Computational and Cognitive Approaches to the Communication of Knowledge

by Etienne Wenger (1987)
Chapter 6: Existing CAI traditions: other early contributions (pp. 101-122)

Contents

Introduction
EXCHECK tutor
INTEGRATION tutor
BIP tutor
University of Leeds' group
QUADRATIC tutor
Self's course
Bibliography


Introduction

This chapter presents two separate projects that existed before SCHOLAR (a classic of Intelligent Tutoring Systems) and evolved out of established CAI traditions:

  1. One line of research, conducted at the Institute for Mathematical Studies at Stanford University (IMSSS), was geared toward the production of complete curricula for use in real settings.
  2. The other, at the University of Leeds in England, dealt with the automation of intelligent teaching decisions.
Both made their contributions through the natural evolution of CAI toward greater sophistication, and both focus more on algorithms for pedagogical decisions than on models of the target expertise.

Early attempts to tailor problem-solving experiences
IMSSS has a long tradition of research in educational computer use. Systems have been developed for teaching in domains as varied as logic, axiomatic mathematics, and foreign languages, not to mention computer speech generation with the MISS system, which contributed to the development of CAI.
We are going to explore three of their tutors: EXCHECK, INTEGRATION, and BIP.

Even though these systems are not very typical of the ITS paradigm (they were motivated more by an interest in educational issues), they have had some direct influence on the field. A distinguishing feature is the emphasis given to large experiments with systems in teaching contexts, and to gathering and analyzing data about their performance.



EXCHECK tutor

Classic and proven useful in class (it was the core of an undergraduate course at Stanford for years), EXCHECK couples an intelligent interface with a powerful model of domain expertise to provide a learning environment where the student can get feedback during problem solving. It emulates human proof techniques with macro-operators that make use of a theorem prover while bringing to bear knowledge specific to the domain of proofs in set theory. It communicates with the student via a formal language of abbreviations.
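The chapter gives no implementation details here, but the flavor of a macro-operator backed by a mechanical prover can be conveyed with a toy sketch. Everything below is hypothetical: a brute-force propositional prover stands in for EXCHECK's set-theory machinery, and `hence` mimics a proof-step macro that accepts or rejects the student's inference.

```python
from itertools import product

def evaluate(formula, env):
    """Evaluate a formula like ('implies', 'p', ('or', 'p', 'q'))."""
    if isinstance(formula, str):
        return env[formula]
    op, *args = formula
    vals = [evaluate(a, env) for a in args]
    if op == 'not':
        return not vals[0]
    if op == 'and':
        return all(vals)
    if op == 'or':
        return any(vals)
    if op == 'implies':
        return (not vals[0]) or vals[1]
    raise ValueError(f"unknown operator {op}")

def variables(formula, acc=None):
    """Collect the propositional variables occurring in a formula."""
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for a in formula[1:]:
            variables(a, acc)
    return acc

def entails(premises, claim):
    """True iff the claim holds in every model satisfying all premises."""
    whole = ('and', claim, *premises) if premises else claim
    vs = sorted(variables(whole))
    for bits in product([False, True], repeat=len(vs)):
        env = dict(zip(vs, bits))
        if all(evaluate(p, env) for p in premises) and not evaluate(claim, env):
            return False
    return True

def hence(premises, claim):
    """Macro-operator: accept the student's step only if it is valid."""
    return "step accepted" if entails(premises, claim) else "does not follow"

print(hence([('or', 'p', 'q'), ('not', 'p')], 'q'))  # step accepted
print(hence([('implies', 'p', 'q'), 'q'], 'p'))      # does not follow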

Conclusion: It lacks most of the features of AI-based tutoring: it does not form a global model of the student, and it does not use pedagogical strategies to make its interventions contextually relevant and effective.
Still, offering a friendly environment where the student gets intelligent feedback and has their work verified in terms they understand is, for the domain of mathematical proofs, a nontrivial achievement.



INTEGRATION tutor

(Doctoral dissertation of Ralph Kimball, 1973, 1982.) It uses matrices of probabilistic values to represent judgemental knowledge. The domain expertise is represented as a matrix that relates all problem classes to all solution methods.
Each matrix element is a value indicating the probability that applying a given problem-solving approach to a given problem class generates a subproblem in a new class. The student's knowledge is represented by a matrix of the same form, which is compared to the expert's.
The simple language interface basically consists of multiple-choice questions, with the tutor maintaining full control over the interaction.
For diagnosis, the system updates the student's matrix. In this way, Kimball claims, precise measurements of student learning can be obtained, revealing its discontinuities.
The system adopts the student's approach as its standard when it leads to a better solution than the expert's, a simple form of self-improvement.
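A minimal sketch of this matrix scheme, with invented problem classes, methods, and probabilities (Kimball's matrices for symbolic integration are far larger, and his updating rule more principled):

```python
# expert[c][m] = probability that method m is the right move on a problem
# of class c. The student model has the same shape and is nudged toward
# the observed behaviour after each step, so comparing the two matrices
# row by row localizes what the student has or has not picked up.

CLASSES = ["polynomial", "by_parts", "substitution"]
METHODS = ["expand", "integrate_by_parts", "substitute"]

expert = {
    "polynomial":   {"expand": 0.9, "integrate_by_parts": 0.05, "substitute": 0.05},
    "by_parts":     {"expand": 0.1, "integrate_by_parts": 0.8,  "substitute": 0.1},
    "substitution": {"expand": 0.1, "integrate_by_parts": 0.1,  "substitute": 0.8},
}

# Start the student model undecided: uniform over methods.
student = {c: {m: 1 / len(METHODS) for m in METHODS} for c in CLASSES}

def recommend(problem_class):
    """Tutor's advice: the expert's most probable method for this class."""
    row = expert[problem_class]
    return max(row, key=row.get)

def observe(problem_class, chosen_method, rate=0.3):
    """Diagnosis: shift the student's row toward the observed choice."""
    row = student[problem_class]
    for m in row:
        target = 1.0 if m == chosen_method else 0.0
        row[m] += rate * (target - row[m])

# The student keeps choosing substitution on substitution problems:
for _ in range(5):
    observe("substitution", "substitute")
print(recommend("substitution"))  # substitute
print(student["substitution"])    # row drifting toward the expert's
```

The self-improving step described above would update the `expert` rows in the same way whenever a student's method yields a better solution.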
Conclusion: Although this simple idea has real advantages, the system unfortunately did not receive the recognition it deserves. In practice, however, a reasonable tutorial interaction can be achieved with probabilities alone, as long as explanations are not required.



BIP (Basic Instructional Program) tutor

It is presented as a "problem-solving laboratory" for introductory programming classes. It attempts to individualize instruction by selecting tasks from a pool of 100 sample problems.
Its representation is more traditional AI; however, it is not really an active programming tutor. Its Curriculum Information Network (CIN) is more important, because it provides a complex representation of the curriculum that highlights pedagogically relevant relations between topics.

BIP-I
The curriculum is divided into 3 conceptual layers from top to bottom:

  1. Techniques, central issues of expertise
  2. Skills, low-level knowledge units (not internally ordered, not mutually disjoint) and
  3. Tasks, to exercise skills.
For problem selection, the goal is to find the task that exercises the greatest number of skills in the required set without including any skill beyond the student's reach. If no such task can be found for the given skills, a new problem must be added to the curriculum. A sketch of this selection rule follows.
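Here is that selection rule under stated assumptions: tasks and skills are plain sets, `required` is the skill set the tutor wants exercised next, and `mastered` approximates the student's reach. Task names and skills are invented, not BIP's actual curriculum.

```python
TASKS = {
    "print_table":  {"loops", "print"},
    "sum_list":     {"loops", "accumulator"},
    "grade_report": {"loops", "print", "conditionals"},
    "hello":        {"print"},
}

def select_task(required, mastered):
    """Pick the task exercising the most required skills while staying
    within reach (every skill either required now or already mastered)."""
    within_reach = [
        (name, skills) for name, skills in TASKS.items()
        if skills <= required | mastered
    ]
    if not within_reach:
        return None  # BIP's answer: add a new task to the curriculum
    name, _ = max(within_reach, key=lambda t: len(t[1] & required))
    return name

print(select_task(required={"loops", "print", "conditionals"}, mastered=set()))
# -> 'grade_report' (exercises all three required skills, none out of reach)
```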

Conclusion: In a test with two groups of students using the same tutor, one with the above task-selection strategy and the other with the predetermined branching typical of CAI, the BIP-I group performed significantly better.

BIP-II
It refines and augments the information contained in the CIN by ordering the skills and organizing them into networks of their own. Skills are now also connected by pedagogical links, including analogical relations, functional dependencies, and relative difficulty, all in a second network whose nodes are the primitive elements of the domain.
Everything else is similar to BIP-I, especially the task-selection procedure, which is now more refined and precise in determining the set of skills to exercise. The proposed task sequences are therefore different, especially if the student performs well initially.
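To make that structure concrete, a minimal sketch of such a typed-link skill network; the link kinds come from the text, but the skill names and the `prerequisites` helper are hypothetical.

```python
from collections import defaultdict

# Skills as nodes, pedagogical relations as typed links.
links = defaultdict(list)

def add_link(kind, src, dst):
    links[src].append((kind, dst))

add_link("depends_on", "nested_loops", "loops")        # functional dependency
add_link("analogous_to", "while_loop", "for_loop")     # analogical relation
add_link("harder_than", "nested_loops", "while_loop")  # relative difficulty

def prerequisites(skill):
    """Follow dependency links to find what must be learned first."""
    return [dst for kind, dst in links[skill] if kind == "depends_on"]

print(prerequisites("nested_loops"))  # ['loops']
```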

Conclusion: Even though these new links are meant to support inferences about the student's knowledge, their real potential for use in diagnosis and remediation has not been explored.

Problem-solving guidance: limited diagnostic information

As an intelligent programming tutor, BIP is admittedly incomplete. For advice, it can give only hints tied to the tasks, since it has no knowledge of design, coding, or debugging.
As far as feedback is concerned, it is unable to diagnose logical errors, since it tests only input/output results without analyzing the algorithm. It can only check the syntax of the program by going through the keywords it contains.
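A toy illustration of this input/output-only checking (hypothetical code, not BIP's): the grader can report failing cases, but nothing in it can explain why the algorithm is wrong.

```python
def grade(student_fn, test_cases):
    """Run the student's function and compare results case by case."""
    failures = []
    for args, expected in test_cases:
        got = student_fn(*args)
        if got != expected:
            failures.append((args, expected, got))
    return "correct" if not failures else f"wrong on {failures}"

# A buggy 'sum of 1..n' that is off by one:
def student_sum(n):
    return sum(range(1, n))  # logical error: should be range(1, n + 1)

print(grade(student_sum, [((3,), 6), ((5,), 15)]))
# The grader reports the failing cases, but diagnosing that the loop
# bound is off by one would require analyzing the algorithm itself.
```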


University of Leeds' group

In England, while SCHOLAR was being developed at BBN, a group at the Computer-based Learning project at the University of Leeds came to similar conclusions after working on advanced CAI systems for teaching medical diagnosis and arithmetic operations. Hartley and Sleeman (1973) tried to define some characteristics of "Intelligent Teaching Systems".
Their classification into 4 classes concentrates on the teaching process. Between CAI and ITS they see an intermediate type, generative systems, which generate tasks by assembling problems. They also divide ITS into 2 nondisjoint categories.
One of these is adaptive systems, or more specifically self-improving systems, which refine their knowledge by evaluating their own performance.
The two tutors presented next are the pioneering work of two of their students:

  1. the QUADRATIC tutor: Tim O'Shea is interested in the design of self-improving systems that monitor their own performance, while
  2. Self's course: John Self attempts to define teaching decisions formally in terms of a student model.



QUADRATIC tutor

The domain is the solution of simple quadratic equations of the form x² + c = bx, based on Vieta's root theorem: the two roots r1 and r2 satisfy r1 + r2 = b and r1·r2 = c.
It is a self-improving tutor: it can set up experiments using variations of its strategies and adopt those that seem to produce the best results. It uses a database (also self-updated) of possible modifications and their expected results, which O'Shea calls a "theory of instruction" for the domain (a schematic sketch of this experiment loop follows the lists below).
For improvements to be possible, not only must the teaching strategies have an explicit and modular representation, but the tutorial objectives must also be clearly defined.
So, there are four distinct tutorial goals:

  1. increase the number of students completing the session,
  2. improve their score on the post-test,
  3. decrease the time taken by the students to learn the rules and their combinations and
  4. decrease computer time used in the process.
The system has 3 sources of information:
  1. a task difficulty matrix, for the selection of new problems with well-defined teaching goals (fixed for the domain),
  2. a student model, a regularly updated set of hypotheses about what the student's current knowledge of the rules and their combinations may be, and
  3. tutorial strategies, the core of the tutor, a set of production rules.
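The experiment loop mentioned above can be sketched schematically. Everything here is an assumption for illustration: `run_class` stands in for actually teaching a group of students and measuring the four goals (collapsed into one score), and `mutate` stands in for the database of possible modifications.

```python
import random

def run_class(strategy):
    """Stand-in for teaching a group of students and measuring outcomes
    (completion rate, post-test score, learning time, computer time)."""
    return sum(strategy.values()) + random.gauss(0, 0.1)

def mutate(strategy):
    """Apply one modification drawn from the 'theory of instruction':
    here, simply perturb one production rule's weight."""
    variant = dict(strategy)
    rule = random.choice(list(variant))
    variant[rule] += random.choice([-0.1, 0.1])
    return variant

strategy = {"give_example_first": 0.5, "state_rule_first": 0.5}
best_score = run_class(strategy)

for _ in range(20):            # each iteration = one teaching experiment
    candidate = mutate(strategy)
    score = run_class(candidate)
    if score > best_score:     # adopt modifications that seem to help
        strategy, best_score = candidate, score

print(strategy, round(best_score, 2))
```

Note how a single noisy measurement per variant makes the adoption decision unreliable; this is exactly the lack of statistical evaluation pointed out in the conclusion below.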

Conclusion: It is a first attempt at automating educational research. Although it did not bring any dramatic improvement in the domain, it was well accepted by the students, who persevered in learning and performed well.
We must also point out the lack of sufficient statistical evaluation of the modifications to the teaching strategies.
The most fundamental limitation of the system is that its learning is empirical rather than analytical, because it cannot reason about rules without knowing the principles they embody. We can say that it implements an "empirical theory of instruction".



Self's course

The domain is the acquisition of simple conjunctive concepts in a relational language close to first-order logic.
Taking an analytical approach, in contrast with O'Shea's empirical experiments, Self is interested in formalizing teaching actions in terms of a student model that is both predictive and inspectable, an approach that started an important trend in the field.
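A minimal sketch of what a predictive, inspectable student model looks like for conjunctive concepts; Self's domain was a richer relational language, and all names here are hypothetical. The model generalizes over positive examples, so the tutor can run it to predict the effect of any candidate teaching sequence.

```python
TARGET = {"shape": "square", "color": "red"}   # the concept to be taught

def matches(concept, example):
    """An example satisfies a conjunctive concept if it agrees on
    every attribute-value pair the concept mentions."""
    return all(example.get(k) == v for k, v in concept.items())

def generalize(hypothesis, example):
    """Keep only the attribute-value pairs consistent with the example."""
    if hypothesis is None:                      # first positive example
        return dict(example)
    return {k: v for k, v in hypothesis.items() if example.get(k) == v}

# The tutor simulates the student model to evaluate a teaching sequence:
examples = [
    {"shape": "square", "color": "red", "size": "big"},
    {"shape": "square", "color": "red", "size": "small"},
]
hypothesis = None
for ex in examples:
    if matches(TARGET, ex):                     # only positives generalize
        hypothesis = generalize(hypothesis, ex)

print(hypothesis)  # {'shape': 'square', 'color': 'red'}: target reached
```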

Conclusion: Its learning model constructs optimal instructional sequences in an artificial domain. It is an elegant piece of research, but it remains a laboratory experiment because it does not address many difficult issues, notably diagnosis.



Bibliography

Suppes, P.

(1981) University-level Computer-assisted Instruction at Stanford: 1968-1980.
Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, California.

McDonald, J.

(1981) The EXCHECK CAI system. In Suppes, P. (Ed.) University-level Computer-assisted Instruction at Stanford: 1968-1980.
Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, California.

Blaine, L.H.

(1981) Programs for structured proofs. In Suppes, P. (Ed.) University-level Computer-assisted Instruction at Stanford: 1968-1980.
Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, California.

Smith et al.

(1975) Computer-assisted axiomatic mathematics: informal rigor.

Blaine, L.H.; and Smith, R.L.

(1977) Intelligent CAI: the role of the curriculum in suggesting computational models of reasoning.
Proceedings of the National ACM Conference, Seattle, Washington, pp. 241-246. Association for Computing Machinery, New York.

O'Shea, T.

(1979b) A self-improving quadratic tutor.
Int Jrnl Man-Machine Studies, vol. 11, pp. 97-124. (Reprinted in Sleeman, D.H.; and Brown, J.S. (Eds) Intelligent Tutoring Systems. Academic Press, London.)

O'Shea et al.

(1984) Tools for creating intelligent computer tutors.
In Elithorn, A.; and Banerji, R. (Eds) Artificial and Human Intelligence. North-Holland, London.

Heines, J.M.; and O'Shea, T.

(1985) The design of a rule-based CAI tutorial.
Int Jrnl Man-Machine Studies, vol. 23, pp. 1-25.

Self, J.A.

(1974) Student models in CAI.
Int Jrnl Man-Machine Studies, vol. 6, pp. 261-276.

Self, J.A.

(1977) Concept teaching. Artificial Intelligence,
vol. 9, no. 2, pp. 197-221


© Vivian Synteta (11/04/99) updated 11/04/99
synteta8@etu.unige.ch