

HAL Id: hal-00197310

https://telearn.archives-ouvertes.fr/hal-00197310

Submitted on 14 Dec 2007

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

An Intelligent Tutoring System for Entity Relationship Modelling

Pramuditha Suraweera, Antonija Mitrovic

To cite this version:

Pramuditha Suraweera, Antonija Mitrovic. An Intelligent Tutoring System for Entity Relationship Modelling. International Journal of Artificial Intelligence in Education (IJAIED), 2004, 14, pp. 375-417. ⟨hal-00197310⟩


An Intelligent Tutoring System for Entity Relationship Modelling

Pramuditha Suraweera & Antonija Mitrovic, Intelligent Computer Tutoring Group, Computer Science Department, University of Canterbury, Private Bag 4800, Christchurch, New Zealand

{psu16,tanja}@cosc.canterbury.ac.nz

Abstract. The paper presents KERMIT, a Knowledge-based Entity Relationship Modelling Intelligent Tutor. KERMIT is a problem-solving environment for university-level students, in which they can practise conceptual database design using the Entity-Relationship data model. KERMIT uses Constraint-Based Modelling (CBM) to model the domain knowledge and generate student models. We have used CBM previously in tutors that teach SQL and English punctuation rules. The research presented in this paper is significant because we show that CBM can be used to support students learning design tasks, which are very different from the domains we dealt with in earlier tutors. The paper describes the system's architecture and functionality. The system observes students' actions and adapts to their knowledge and learning abilities. KERMIT has been evaluated in the context of genuine teaching activities. We present the results of two evaluation studies with students taking database courses, which show that KERMIT is an effective system. The students enjoyed the system's adaptability and found it a valuable asset to their learning.

INTRODUCTION

Intelligent Tutoring Systems (ITS) have been proven to be very effective in domains that require extensive practice (Corbett et al., 1998; Koedinger et al., 1997; Mitrovic & Ohlsson, 1999). In this paper, we present KERMIT, a Knowledge-based Entity Relationship Modelling Intelligent Tutor. KERMIT (Suraweera & Mitrovic, 2001) is an ITS designed and implemented for teaching database modelling. It is developed as a problem-solving environment where the system presents a description of a scenario for which the student has to design a database.

This research is significant because it extends our study of Constraint-Based Modelling (CBM). In previous work we have shown that CBM is capable of supporting students' learning in two different domains: SQL, a declarative language (Mitrovic & Ohlsson, 1999; Mitrovic et al., 2001), and punctuation and capitalization rules in English (Mayo & Mitrovic, 2001). Database design is a very different domain. Like any other design domain, it involves open-ended tasks for which there are no well-formed problem-solving algorithms. Instead, the learner is given an abstract definition of a good solution. In database modelling, a good solution is defined as an ER schema that matches the requirements and satisfies all the integrity rules of the chosen data model. As we show in section 2, these conditions are very vague. In this paper we show that CBM can be applied successfully to support design tasks too.

We start by briefly describing database design, and the Entity Relationship model used in KERMIT. Section 3 reviews related work. In section 4, we describe the overall architecture of the system, and present details of the user interface, knowledge base, student modeller and the pedagogical module.

The effectiveness and the students' perception of KERMIT were evaluated in two empirical studies. These studies, presented in section 5, demonstrate the effectiveness of the system for students' learning. Finally, we present the conclusions and directions for future work in the last section.

DIFFICULTIES OF LEARNING DATABASE MODELLING

Databases have become ubiquitous in today's information systems. Learning how to develop good-quality databases is a core topic in the Computer Science curriculum. Database design is the process of generating a model of a database using a specific data model. A model of a database (also known as a database schema) evolves through a series of phases. The initial phase of requirements analysis enables database designers to understand the application domain, and provides input for the conceptual design phase. In this phase, the designers reason about the application domain as described in the requirements, and use their world knowledge to discover important relationships between the various data items of importance for the database. The conceptual schema of a database is a high-level description of the database and the integrities that data must satisfy. This high-level schema is later transformed into a logical schema (such as a relational schema), which can be implemented on a Database Management System (DBMS), and finally into a physical schema, which contains details of data storage.

The quality of conceptual schemas is of critical importance for database systems. Most database courses teach conceptual database design using the Entity-Relationship (ER) model, a high-level data model originally proposed by Chen (1976). The ER model views the world as consisting of entities, and relationships between them. The entities may be physical or abstract objects, roles played by people, events, or anything else data should be stored about. Entities are described in terms of their important features, called attributes in the terminology of the ER model. Relationships represent various associations between entities, and also may have attributes.

Let us illustrate the process of designing a database with a simple example. A student is given the following description of a target database:

You are to design a database to record details of artists and museums in which their paintings are displayed. For each painting, the database should store the size of the canvas, year painted, title and style. The nationality, date of birth and death of each artist must be recorded. For each museum, record details of its location and speciality, if it has one.

From the description, it is obvious that artists, museums and paintings are of importance. Therefore, the student may start by drawing the entities first. Each entity is described in terms of some attributes. For example, each painting would be described by its title, style and the size of canvas. All three attributes are explicitly mentioned in the requirements. For each artist, we need to know his/her name, nationality, date of birth and death. The artist's name, however, is not explicitly listed in the text. Finally, for each museum, we need to know its location and speciality.

The student also needs to identify the relationships between these three types of entities. Each painting is displayed in a museum, and this is mentioned in the first sentence of the problem text. However, the other necessary relationship is not mentioned explicitly: the relationship between the painting and the artist. The student needs world knowledge in order to identify this relationship.

Once all concepts are identified, the student needs to determine the integrities of the model. Each entity type must have at least one key attribute, which uniquely identifies it. For the museum, that may be the museum name (which is not explicitly given in the text). Assuming there will be no two artists with the same name, the key attribute of ARTIST is his/her name. The key of PAINTING is its title, assuming each title is unique.

An ER schema is usually presented in graphical form; the ER diagram for the museum database is illustrated in Figure 1. The ER model also contains two integrities defined on relationships. Each entity type that participates in a relationship can participate totally (shown by the double line on the diagram) or partially, and the number of instances participating in an instance of the relationship type is known as cardinality (shown by 1, N and M in Figure 1). Additional complexity is introduced by relationships possibly involving more than two entity types (higher-degree relationships), and by simple, composite or multivalued attributes.

Fig. 1. The ER diagram for the Museum database

As can be seen from this simple case, there are many things that the student has to know and think about when developing an ER diagram. The student must understand the data model used, both the basic building blocks available and the integrity constraints specified on them. In real situations, the text of the problem would be much longer, often ambiguous and incomplete. To identify the integrities, the student must be able to reason about the requirements and use his/her own world knowledge to make valid assumptions. ER modelling is not a well-defined process, and the task is open ended. There is no algorithm to use to derive the ER schema for a given set of requirements. There is no single, best solution for a problem, and often there are several correct solutions for the same requirements.
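The museum schema described above can be captured in a small data structure. The following Python sketch is our own illustrative representation, not KERMIT's internals: it records the entities with their attributes and keys, and the relationships with cardinality and participation (the cardinalities shown are one plausible reading of the requirements).

```python
# Illustrative sketch of the museum ER schema from Figure 1 as plain
# Python data; the representation is ours, not KERMIT's internal one.

entities = {
    "ARTIST":   {"attributes": ["Name", "Nationality", "BDate", "Died"], "key": "Name"},
    "PAINTING": {"attributes": ["Title", "Style", "Canvas", "Year"], "key": "Title"},
    "MUSEUM":   {"attributes": ["Name", "Location", "Speciality"], "key": "Name"},
}

# Each relationship lists (entity, cardinality, participation) per participant.
relationships = {
    "PAINTED":      [("ARTIST", "1", "partial"), ("PAINTING", "N", "total")],
    "EXHIBITED_IN": [("PAINTING", "N", "total"), ("MUSEUM", "1", "partial")],
}

def entities_without_key(schema):
    """Each entity type must have at least one key attribute."""
    return [name for name, e in schema.items() if e["key"] not in e["attributes"]]

print(entities_without_key(entities))  # [] -- every entity has a valid key
```

Even this toy form makes the integrities explicit: a key check is a one-line query, whereas the assumptions behind it (unique artist names, unique titles) still come from world knowledge, not the data.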

The authors have been involved in teaching database courses for a number of years, and in our experience students typically have many problems learning to design databases. Experiments conducted by Batra and colleagues (1990, 1994) have shown that although novices face little difficulty in modelling entities, they are challenged when modelling relationships. We have also observed that students find the concepts of weak entities and higher-degree relationships difficult to grasp.

Although the traditional method of learning ER modelling in a classroom environment may be sufficient as an introduction to the concepts of database design, students cannot gain expertise in the domain by attending lectures only. Even if some effort is made to offer students individual help through tutorials, a single tutor must cater for the needs of the entire group of students, and it is inevitable that they obtain only limited personal assistance. The difficulties of targeting information at the appropriate level for each student's abilities also become apparent in group tutorials, with weaker students struggling and more able students not challenged sufficiently. Therefore, a computerized tutor that supports students in acquiring database design skills would be of great value.

RELATED WORK: INTELLIGENT TUTORS FOR DB MODELLING

Educational systems with problem-solving environments for DB modelling can be used as educational tools for enhancing students' learning. Ideally, these teaching systems would offer the student a vast array of practice problems with individualised assistance for solving them, allowing students to learn at their own pace. There have been very few research attempts at developing such teaching systems for DB modelling. This section outlines two such systems: ERM-VLE (Hall & Gordon, 1998) and COLER (Constantino-Gonzalez & Suthers, 2000; Constantino-Gonzalez et al., 2001).

ERM-VLE: a virtual reality environment

ERM-VLE (Hall & Gordon, 1998) is a text-based virtual learning environment for ER modelling. In text-based virtual reality environments, users communicate with one another and with the virtual world exclusively through the medium of text. The objective of the learner in ERM-VLE is to model a database for a given problem by navigating the virtual world and manipulating objects. The virtual world consists of different types of rooms, such as entity creation rooms and relationship creation rooms. The authors claim that the organisation of the environment reflects the task structure, and encourages a default order of navigation around the virtual world. The student issues commands such as pick up, drop, name, evaluate, create and destroy to manipulate objects. The effect of a command is determined by the location in which it was issued. For example, a student creates an entity whilst in the entity creation room. The evaluation command provides hints for modelling the ER schema.

The interface of ERM-VLE consists of several panes. The Scenario pane contains the requirements for the database being modelled. The 'Current ERM' pane provides a graphical representation of the ER model that the user is building. The graphical representation is dynamically updated to reflect the activities of the student, but the student does not directly interact with it. The student interacts with the virtual world solely by issuing textual commands. The 'ERM world' pane contains a record of past interactions between the student and the world.

The solution of each problem is embedded in the virtual world. Correspondences between the phases of the scenario and constructs of the ER model are stored in the solution. The learner is only allowed to establish the system's ideal correspondences. If the student attempts to establish an association that does not comply with the system's solution, the system intervenes and informs the student that the association is not allowed.

When the system was evaluated with a group of experienced DB designers and novices, the experienced designers felt that the structure of the virtual world had restricted them (Hall & Gordon, 1998). On the other hand, novices felt that they had increased their understanding of ER modelling. However, these comments cannot be treated as substantial evidence of the system's effectiveness, since the system has not been evaluated properly.

ERM-VLE restricts the learner, since he or she is forced to follow the exact solution path stored in the system. This method has a high tendency to encourage shallow learning, as users are prevented from making errors and are not given explanations of their mistakes. Moreover, a text-based virtual reality environment is a highly unnatural environment in which to construct ER models. Students who learn to construct ER models using ERM-VLE would struggle to become accustomed to modelling databases outside the virtual environment.

COLER: collaboratively building ER diagrams

COLER (Constantino-Gonzalez & Suthers, 1999; Constantino-Gonzalez & Suthers, 2000; Constantino-Gonzalez et al., 2001) is a web-based collaborative learning environment for ER modelling. The main objectives of the system are to improve students' performance in ER modelling and to help them develop collaborative and critical thinking skills. The system contains an intelligent coach that is aimed at enhancing the students' abilities in ER modelling.

COLER is designed to enable interaction between students from different places via a networked environment to encourage collaboration.

COLER's interface contains a private workspace as well as a shared workspace. The student's individual solution is constructed in the private workspace, whereas the collaborative solution of the group of students is created in the shared workspace. The system contains a help feature that can be used to obtain information about ER modelling. The students are provided with a chat window through which they can communicate with other members of the group. Only a single member can edit the shared workspace at any time. Once any modifications are completed, another member of the group is given the opportunity to modify the shared workspace. The interface also contains an opinion panel, which shows the opinion of the group on the current issue. Each member has to vote on each opinion with either agree, disagree or not sure. The personal coach resident in the interface gives advice in the chat area based on the group dynamics: student participation and the group's ER model construction.

COLER is designed for students to initially solve the problem individually and then join a group to develop a group solution. The designers argue that this process helps to ensure that the students participate in discussions and that they have the necessary raw material for negotiating differences with other members of the group (Constantino-Gonzalez & Suthers, 2000; Constantino-Gonzalez et al., 2001). The private workspace also allows the student to experiment with different solutions to a problem individually. Once a group of students agree to be involved in collaboratively solving a problem, the shared workspace is activated. After each change in the shared workspace, the students are required to express their opinions by voting.

COLER encourages and supervises collaboration, and we believe it has the potential to help students acquire collaboration skills. However, it does not evaluate the ER schemas produced, and cannot provide feedback regarding their correctness. Consequently, even though the system is effective as a collaboration tool, it would not be an effective teaching system for a group of novices with the same level of expertise. From the authors' experience, it is very common for a group of students to agree on the same flawed argument. Accordingly, it is highly likely that groups of students unsupervised by an expert may learn flawed concepts of the domain. In order for COLER to be an effective teaching system, an expert should be present during the collaboration stage.

Discussion

Intelligent tutoring systems are developed with the goal of automating one-to-one human tutoring, the most effective mode of teaching. ITSs offer greater flexibility than non-intelligent software tutors, since they can adapt to each individual student. Empirical studies conducted to evaluate ITSs in other domains have shown vast improvements in student learning. Although ITSs have been proven to be effective in a number of domains, an effective ITS for DB modelling is yet to evolve.

ERM-VLE, the text-based virtual reality environment for ER modelling, is a highly unnatural environment in which to construct ER models. Students would struggle to transfer the knowledge acquired using ERM-VLE to modelling databases for real-life requirements. Furthermore, since the solutions are embedded in the virtual environment itself, students who have used the system complained that it was too restrictive, as they were forced to follow the ideal solution path. Forcing the user to follow the path of the system's solution increases the risk of encouraging shallow learning.

The collaborative learning environment, COLER, is yet to undergo a comprehensive evaluation of its effectiveness. COLER encourages and supervises collaboration with peers in constructing an ER model. However, the system is not capable of evaluating the group solution and commenting on its correctness. It assumes that the combined solution agreed upon by all members of the group is correct. This assumption may not be valid for a group of novices with similar misconceptions about ER modelling. Consequently, for COLER to be an effective teaching system, a human expert must participate in the modelling exercise.

KERMIT: A KNOWLEDGE-BASED ER MODELLING TUTOR

KERMIT is an intelligent teaching system that assists students learning ER modelling. The system is designed as a complement to classroom teaching, and therefore we assume that the students are already familiar with the fundamentals of database theory. KERMIT is a problem-solving environment, in which students construct ER schemas that satisfy a given set of requirements. The system assists students during problem solving and guides them towards the correct solution by providing feedback tailored to each student, depending on his or her knowledge.

Fig. 2. A screenshot from the introduction to KERMIT

The system is designed for individual work. A student initially logs onto the system with an identifier. The system introduces its user interface, including its functionality, to first-time users (Figure 2). During the problem-solving stage, the student is given a textual description of the requirements of the database that should be modelled. The task is to use ER modelling notation to construct an ER schema according to the given requirements. The ER model is constructed using the workspace integrated into KERMIT's interface. Once the student completes the problem or requires guidance from the system, their solution is evaluated by the system. Depending on the results of the evaluation, the system may either congratulate the student or offer hints on the student's errors. The student can request more detailed feedback messages depending on their needs. On completion of a problem, KERMIT selects a new problem that best suits the student's abilities. At the end of a session with KERMIT, the student logs out.


KERMIT was developed using Microsoft Visual Basic to run on the Microsoft Windows platform. The teaching system supports the Entity Relationship data model as defined by Elmasri and Navathe (2003). The following sections discuss KERMIT's design and implementation details.

Architecture

The main components of KERMIT are its user interface, pedagogical module and student modeller (Figure 3). Users interact with KERMIT's interface to construct ER schemas for the problems presented to them by the system. The pedagogical module drives the whole system by selecting the instructional messages to be presented to the student and selecting problems that best suit the particular student. The student modeller, which is implemented using constraint based modelling (Ohlsson, 1994), evaluates the student's solution against the system's knowledge base and records the student's knowledge of the domain in the form of a student model.

Fig. 3. Architecture of KERMIT

KERMIT does not have a domain module that is capable of solving the problems given to students. Developing a problem solver for ER modelling is an extremely difficult task. There are assumptions that need to be made during the composition of an ER schema; these assumptions are outside the problem description and depend on the semantics of the problem itself. Although this obstacle could be avoided by explicitly specifying these assumptions within the problem, ascertaining them is an essential part of the process of constructing a solution. Explicitly specifying the assumptions would oversimplify the problems and result in students struggling to adjust to solving real-world problems. Another complexity arises from the fuzziness of the knowledge required in modelling a database. Consequently, developing a problem solver for database modelling would be difficult, if not entirely impossible.

Although there is no problem solver, KERMIT is able to diagnose students' solutions by using its domain knowledge represented as a set of constraints. The system contains an ideal solution for each of its problems, which is compared against the student's solution according to the system's knowledge base (see a later section for details on the knowledge base). The knowledge base, represented in a descriptive form, consists of constraints used for testing the student's solution for syntax errors and comparing it against the system's ideal solution.


KERMIT's knowledge base enables the system to identify student solutions that are identical to the system's ideal solution. More importantly, this knowledge also enables the system to identify alternative correct solutions, i.e. solutions that are correct but not identical to the system's solution.
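The constraint-based diagnosis described above can be sketched in a few lines. In Ohlsson's CBM, each constraint pairs a relevance condition with a satisfaction condition; a constraint is violated only when it is relevant to the solution but not satisfied. The constraint and the solution format below are invented for illustration and are not KERMIT's actual representation:

```python
# Minimal sketch of constraint-based diagnosis in the style of Ohlsson's CBM.
# The constraint and the solution format are invented for illustration.

constraints = [
    {
        "message": "Each entity type must have at least one key attribute.",
        # Relevance condition: the constraint applies when entities exist.
        "relevance": lambda sol: len(sol["entities"]) > 0,
        # Satisfaction condition: every entity has a key attribute.
        "satisfaction": lambda sol: all(e.get("key") for e in sol["entities"]),
    },
]

def diagnose(solution, constraints):
    """Report every constraint that is relevant to the solution but violated."""
    return [c["message"] for c in constraints
            if c["relevance"](solution) and not c["satisfaction"](solution)]

# A student solution with a keyless entity triggers the constraint.
student_solution = {"entities": [{"name": "MUSEUM", "key": None}]}
print(diagnose(student_solution, constraints))
```

The key property of this scheme is that no problem solver is needed: diagnosis is a pattern match of the student's solution (and the stored ideal solution) against declarative conditions, so any solution that violates no relevant constraint counts as correct.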

Fig. 4. User interface of KERMIT

User interface

Students interact with KERMIT via its user interface to view problems, construct ER diagrams, and view feedback. KERMIT's interface, illustrated in Figure 4, consists of four main components. The top window displays the textual description of the current problem. The middle window is the ER modelling workspace where students create ER diagrams. The lower window displays feedback from the system in textual form. The animated pedagogical agent (the Genie) that inhabits the learning environment presents feedback verbally, accompanied by animations and speech bubbles.

The top right corner of the interface contains the button for submitting solutions and obtaining feedback. The feedback content depends on the level of feedback: the student can either choose a specific level from a pull-down list or submit the solution again to request more feedback. The next problem button can be used to request a new problem, and the system presents the student with a new problem that best suits his or her abilities.

ER modelling workspace

The workspace functions as an ER diagram-composing tool that assists students in constructing ER models. It is essential that the workspace is intuitive and flexible to use. It is also important that students do not feel restricted by the workspace and that its interface does not degrade their performance.

Table 1

Symbols available in KERMIT

- Regular entity
- Weak entity
- Regular relationship
- Weak relationship
- Simple attribute
- Multivalued attribute
- Key attribute
- Partial key
- Derived attribute
- Simple connector, partial participation
- Total participation connector
- Total participation with cardinality 1
- Total participation with cardinality N
- Partial participation with cardinality 1
- Partial participation with cardinality N

Although the ER modelling workspace is not the main focus of our research, it is an important component of the system. During the design phase we explored the possibility of incorporating an effective commercial CASE tool developed for database design. We selected Microsoft Visio (Visio), a diagram-composing tool that allows users to create 2D diagrams by dragging and dropping objects from a template. We developed a set of objects for ER modelling, and integrated Visio with KERMIT by using its comprehensive set of APIs, which allow external programs to control Visio and to access its internal representation of the diagram.

The objects that the students can use to draw diagrams are shown on the left of the diagram, and are also given in Table 1. When creating an entity type, the student has to select either a regular or a weak one. Similarly, when creating a relationship type, the student needs to use the appropriate object: a diamond for a regular relationship, or a double diamond for an identifying relationship. There are five types of attributes to choose from: simple, multivalued, derived, key or partial key. To create a composite attribute, the student needs to create each component separately and then attach it to the attribute itself. Whenever a new object is created, the system asks for it to be named, by highlighting a phrase from the problem text. Figure 5 illustrates the state of the interface immediately after the student created a regular entity type. The student selects the name, and the result of that action is shown in Figure 6.

The simple connector (single line) is used to connect attributes to other attributes, relationships or entities. To specify the constraints of relationship types, the student needs to select the appropriate connector from the four available ones.

Fig. 5. The student creates an entity type


Problem description

Typical problems given to students in database modelling, such as those provided by KERMIT, involve modelling a database that satisfies a given set of requirements. The sample problem (Elmasri & Navathe, 2003) displayed in Figure 4 outlines the requirements of a database for a company. Students are required to construct ER models by closely following the given requirements. It is common practice for students either to make notes about the problem or to underline phrases of the problem text that have been accounted for in their models. Some students highlight the words or phrases that correspond to entities, relationships and attributes using three different colours. This practice is very useful because it makes students follow the problem text closely, which is essential to producing a complete solution.

Fig. 6. The entity type with a name

KERMIT is designed to support this behaviour. Whenever a student creates a new object, the system requires that it be named, by selecting a word or a phrase from the problem text. The highlighted words are coloured depending on the type of object. When the student highlights a phrase as an entity name, the highlighted text turns bold and blue. Similarly, the highlighted text turns green for relationships and pink for attributes.

This feature is extremely useful from the point of view of the student modeller for evaluating solutions. There is no enforced standard for naming entities, relationships or attributes, and the student has the freedom to use any synonym or similar word/phrase as the name of a particular object. Since the names of the objects in the student solution (SS) may not match the names of constructs in the ideal solution (IS), the task of finding a correspondence between the constructs of the SS and IS is difficult. For example, one student may use 'HAS' as a relationship name, while another may use 'CONSISTS_OF'. Even remedies such as consulting a thesaurus would not be accurate, as the student is free to name constructs using any preferred word or phrase.

This problem is avoided in KERMIT by forcing the student to highlight the word or phrase that is modelled by each object in the ER diagram. The system uses a one-to-one mapping of words of the problem text to the objects of its ideal solution to identify the corresponding SS objects.

The feature of forcing the student to highlight parts of the problem text is also advantageous from a pedagogical point of view, as the student must follow the problem text closely. Many of the errors in students' solutions occur because they have not comprehensively read and understood the problem. These mistakes would be minimised in KERMIT, as students are required to focus their attention on the problem text every time they add a new object to the diagram. Moreover, the student can make use of the colour coding to ascertain the number of requirements that they have already modelled.
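This phrase-based matching can be sketched as follows. The data, names and function below are our own illustration of the idea, not KERMIT's implementation: because each construct in both the ideal solution and the student solution is anchored to a phrase of the problem text, matching reduces to a lookup on phrases rather than on freely chosen names.

```python
# Sketch of mapping student-solution (SS) objects to ideal-solution (IS)
# objects via the problem-text phrase each object was named from.
# Data and function names are illustrative, not KERMIT's.

# IS: each construct is anchored to a specific phrase in the problem text.
ideal = {"artists": "ARTIST", "museums": "MUSEUM", "painting": "PAINTING"}

def match_constructs(student, ideal):
    """Pair SS constructs with IS constructs highlighted from the same phrase."""
    matched, extraneous = {}, []
    for phrase, ss_name in student.items():
        if phrase in ideal:
            matched[ss_name] = ideal[phrase]
        else:
            extraneous.append(ss_name)  # phrase not used by the ideal solution
    return matched, extraneous

# The student is free to choose any names -- the phrases do the matching.
student = {"artists": "PAINTER", "museums": "GALLERY"}
matched, extraneous = match_constructs(student, ideal)
print(matched)      # {'PAINTER': 'ARTIST', 'GALLERY': 'MUSEUM'}
print(extraneous)   # []
```

Here the student's PAINTER entity is matched to the ideal ARTIST because both were named from the phrase "artists", sidestepping the synonym problem entirely.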

Feedback presentation

The goal of the feedback is to improve the student's knowledge of the domain, and therefore it is essential that these messages are presented in an effective manner. KERMIT presents feedback in two ways: using an animated agent and in a conventional text box.

Animated pedagogical agents are animated characters that support students' learning. Studies have shown that animated pedagogical agents have a strong positive effect on students' motivation and learning (Lester et al., 1999; Johnson et al., 2000; Mitrovic & Suraweera, 2000).

KERMIT's interface is equipped with such an agent (the Genie), which presents instructional messages verbally and maintains a strong visual presence through its animations, which are expected to enhance understanding and motivation. The Genie provides advice to the students, encourages them, and enhances the credibility of the system through emotive facial expressions and body movements designed to appeal to students. In general, the Genie adds a new dimension to the system, making it a more interesting teaching tool.

The Genie was implemented using Microsoft Agent (Microsoft) technology. The MS agent can be easily incorporated, as it is equipped with a comprehensive set of APIs. KERMIT uses these to control the Genie, which performs animated behaviours such as congratulating, greeting and explaining. The Genie is very effective in introducing KERMIT's interface to a new student. It moves around the interface, pointing to interface elements and explaining their functionality. The goal of this initial introduction is to familiarise new students with the interface.

Although the animated agent presents the feedback, that text disappears once the agent completes its speech. However, the student may find it useful to refer to the feedback later on while constructing the ER model. For this reason, the interface also contains a static text box (the bottom part of the interface) that displays the feedback message until the student re-submits the solution. This is especially useful in situations where more than one error has been identified, and feedback addresses all errors, when students need to refer back to the feedback. From the pedagogical point of view, this also reduces the cognitive load, since students do not have to remember the feedback that was presented by the agent.

Problems and Ideal Solutions

KERMIT contains predefined database problems, each with a natural language description of requirements, and the ideal solution, specified by a human database expert. In this section we describe the way problems and solutions are represented in the system.

Internal representation of solutions

In order to evaluate the student's solution efficiently, it is essential for KERMIT to represent it internally in a convenient form. Since KERMIT incorporates MS Visio as its ER modelling workspace, it either must use MS Visio's internal representation of the diagram or maintain its own representation. MS Visio represents each object in the diagram as a generic object, and therefore does not distinguish between the different types of ER objects. All the objects in a diagram are arranged in a list, in the order in which they were added to the workspace.

Accessing Visio's internal representation for evaluating a student's solution against the constraint base is inefficient since constraints naturally deal with a particular group of objects such as entities or relationships. Each time a constraint in the knowledge base is evaluated, the list of objects in Visio that represents the internal structure of the diagram would have to be sequentially scanned to extract the particular group of constructs. This method is inefficient and would increase the response time. Moreover, accessing Visio's internal data structures through its APIs using a VB program is slower than accessing a data structure maintained by the VB program itself. Due to these drawbacks, KERMIT dynamically maintains its own internal representation of the ER diagram.

KERMIT maintains two lists of objects: one for entities and one for relationships. The attributes are contained as a list within the entity or relationship object to which they belong. Attributes that are components of a composite attribute are stored as a list within their parent attribute. Each relationship has a list that keeps track of its participating entities.
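This layered structure can be sketched as follows. This is an illustrative Python sketch under our own naming assumptions; KERMIT itself is implemented in VB, and these class and field names do not appear in the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the runtime representation of a student solution
# (SS): two top-level lists, with attributes nested inside their owners
# and composite components nested inside their parent attribute.

@dataclass
class Attribute:
    name: str
    kind: str = "simple"          # e.g. "simple", "key", "multivalued"
    components: List["Attribute"] = field(default_factory=list)  # composite parts

@dataclass
class Entity:
    name: str
    entity_type: str = "regular"  # "regular" or "weak"
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class Relationship:
    name: str
    attributes: List[Attribute] = field(default_factory=list)
    participants: List[Entity] = field(default_factory=list)

@dataclass
class Solution:
    entities: List[Entity] = field(default_factory=list)
    relationships: List[Relationship] = field(default_factory=list)

# Build the "Students live in halls" schema of Figure 7.
student = Entity("STUDENT", attributes=[Attribute("Number", "key")])
hall = Entity("HALL", attributes=[Attribute("Name", "key")])
lives_in = Relationship("LIVES_IN", participants=[student, hall])
ss = Solution([student, hall], [lives_in])
```

Grouping objects by type in this way is what makes constraint evaluation cheap: a constraint about entities scans only the entities list, never the whole diagram.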

Fig. 7. The ER schema for the scenario "Students live in halls"

The procedure of building the internal representation of the student's solution is based on the student's interactions. When the student adds a new object to the workspace, the system adds information about it into the corresponding list. All syntactically illegal constructs are recorded in a separate list. For example, when an attribute is added to the workspace, it will be included in the illegal constructs list until it is connected to an entity or relationship. Once the attribute becomes part of an entity or relationship, it is added to the list of attributes of the entity or relationship to which it belongs.
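The bookkeeping for unattached constructs might look like the following sketch; the class and method names are our own illustration, not KERMIT's actual code.

```python
# Sketch: constructs live in an "illegal" list until they are connected,
# at which point they move into the owning object's attribute list.

class DiagramModel:
    def __init__(self):
        self.entities = {}   # entity name -> list of attribute names
        self.illegal = []    # syntactically incomplete constructs

    def add_attribute(self, attr_name):
        # A freshly drawn attribute belongs to nothing yet.
        self.illegal.append(attr_name)

    def connect_attribute(self, attr_name, entity_name):
        # Once connected, the attribute leaves the illegal list
        # and joins the entity it now belongs to.
        self.illegal.remove(attr_name)
        self.entities.setdefault(entity_name, []).append(attr_name)

model = DiagramModel()
model.add_attribute("Address")      # floating attribute: illegal for now
model.connect_attribute("Address", "STUDENT")
```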

KERMIT contains ideal solutions for all of the problems in its database. These ideal solutions (IS) are stored in a compact textual form. When a new problem is given to the student, the system reads its stored solution, parses it and builds a runtime internal representation in the memory. An ideal solution is also represented internally with objects grouped using lists during runtime, similar to the SS. KERMIT uses this representation of the IS to compare it to the SS according to its constraint base.

The stored textual version of the IS consists of two lists: an entities list and a relationships list. The entities list consists of the names of all the entities including their attributes. The entity name is followed by the attributes, which are listed within parentheses. In the case of composite attributes, the components of the attribute are listed enclosed by '|' symbols. As an example consider the entities list of the ideal solution to the scenario "Students live in halls". The correct ER schema is depicted in Figure 7, and its textual representation is outlined in Figure 8.

Entities = "STUDENT<E1>(Number<K1>),HALL<E2>(Name<K1>)"

Relationships = "LIVES_IN<R1>()-<E1>np,<E2>1t-"

Fig. 8. Internal representation of the ideal solution for the scenario "Students live in halls"

Each object in a solution is assigned a unique identifier, composed of a single character code that specifies the type of the object and an integer. For example, 'E' is used for regular entities and 'W' for weak entities. The integer in the ID makes it unique. The solution in Figure 7 contains two entities, namely STUDENT and HALL. The former entity has the identifier of E1 (see Figure 8), as it is the first entity that has been created. Its key attribute is Number, and its identifier is K1. There is only one relationship, LIVES_IN, whose ID is R1. This relationship has no attributes, and the ids of participating entities are also represented. E1 (i.e. the STUDENT entity type) participates partially in this relationship, and there may be many instances of STUDENT participating in the relationship (there can be many students living in a hall); these constraints are represented by np in Figure 8. Similarly, the HALL entity type participates totally, and there is one hall per student; this is represented by 1t.
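The compact entities-list format of Figure 8 can be parsed with a few regular expressions. The sketch below is our own reconstruction of such a parser (KERMIT's actual parser is not shown in the paper); it handles simple tagged attributes only, not the '|'-delimited composite components.

```python
import re

# Parse entries of the form NAME<ID>(attr<ID>,attr<ID>,...) from the
# stored textual ideal solution (see Figure 8).

ENTITY_RE = re.compile(r"(\w+)<(\w+)>\(([^)]*)\)")   # entity with attr list
TAGGED_RE = re.compile(r"(\w+)<(\w+)>")              # one tagged name

def parse_entities(text):
    entities = []
    for name, eid, attrs in ENTITY_RE.findall(text):
        attributes = [{"name": a, "id": t} for a, t in TAGGED_RE.findall(attrs)]
        entities.append({"name": name, "id": eid, "attributes": attributes})
    return entities

parsed = parse_entities("STUDENT<E1>(Number<K1>),HALL<E2>(Name<K1>)")
```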

Internal representation of problems

The text of each problem describes the requirements of a database that the student is to design. Since KERMIT does not possess any NLP abilities, we developed an effective internal representation that enables the system to identify the semantic meaning of each object. As discussed previously, KERMIT forces students to highlight the word or phrase modelled by each object in their solutions. The system uses these highlighted words to form correspondences between the student's and the ideal solution.

The problem text is represented internally with embedded tags that specify the mapping of its important words to the IS objects. These tags have a many-to-one mapping to the objects in the ideal solution. In other words, more than one word may map to a particular object in the ideal solution, to account for duplicate words or words with similar meaning. The set of tags embedded in the problem text is identical to the tags assigned to the objects of the ideal solution. The tags are specified by a human expert when the problem is added to the system. They are not visible to the student, since they are extracted before the problem is displayed. The position of each tag is recorded in a lookup table, which is used for the mapping exercise. Whenever the student creates an object, the corresponding tag from the problem text matching the name of the object is used to relate the object in the student's solution to the appropriate object in the ideal solution.
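The tag-stripping step described above can be sketched as follows. The paper does not show the exact tag syntax, so the `word<ID>` form below is purely an assumption for illustration; the lookup table maps each tagged word's character span in the displayed text to its IS object ID.

```python
import re

# Hypothetical tag syntax: word<ID>. Strip the tags for display and
# record each tagged word's position for the highlighting lookup table.

TAG_RE = re.compile(r"(\w+)<(\w+)>")

def strip_tags(tagged_text):
    """Return (display_text, lookup), where lookup maps a (start, end)
    span in display_text to an ideal-solution object ID."""
    lookup = {}
    out, pos, last = [], 0, 0
    for m in TAG_RE.finditer(tagged_text):
        out.append(tagged_text[last:m.start()])
        pos += m.start() - last
        word = m.group(1)
        lookup[(pos, pos + len(word))] = m.group(2)  # span -> IS object id
        out.append(word)
        pos += len(word)
        last = m.end()
    out.append(tagged_text[last:])
    return "".join(out), lookup

text, lookup = strip_tags("Students<E1> live in halls<E2>.")
```

When the student highlights a phrase, the system only needs to check whether the highlighted span is in the lookup table to find the matching IS object.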

Knowledge base

The knowledge base is an integral part of any ITS. The quality of the pedagogical instructions provided depends critically on the knowledge base. As stated previously, KERMIT uses CBM to model the domain knowledge and generate student models. In previous work (Mayo & Mitrovic, 2001; Mitrovic & Ohlsson, 1999; Mitrovic et al., 2001, 2002) we have discussed CBM and its implementations in SQL-Tutor and CAPIT. Each constraint specifies a fundamental property of the domain that must be satisfied by any correct solution. Constraints are modular and problem-independent, and therefore easy to evaluate. Each constraint is a collection of patterns, and therefore the evaluation process is computationally very efficient.

One of the advantages of CBM over other student modelling approaches is in its independence from the problem-solving strategy employed by the student. CBM models students' evaluative, rather than generative knowledge and therefore does not attempt to induce the student's problem-solving strategy. CBM does not require an executable domain model, and therefore is applicable in situations in which such a model would be difficult to construct (such as database design or SQL query generation).

Furthermore, CBM eliminates the need for bug libraries, i.e. collections of typical errors made by students. Cognitive tutors contain buggy rules and use them to provide error-specific feedback to students (Anderson et al., 1996; Corbett et al., 1998). When a student's action matches one of the misconceptions from a bug library, a teaching system based on a bug library responds by offering an explanation of why the action is incorrect. Bug libraries are extremely difficult to acquire, as it is necessary to collect and analyse numerous instructional sessions in order to identify the misconceptions, a time-consuming and error-prone process. Secondly, the effectiveness of the system would depend on the completeness and correctness of the bug library. If a specific misconception is missing from the bug library, the system will be able to inform the student that his/her action is incorrect, but will not be able to provide error-specific feedback. Thirdly, bug libraries do not transfer well between different populations of students (Payne & Squibb, 1990). If a bug library is developed for a specific group of students, other populations may have different misconceptions, and therefore the effort of developing a bug library is not justified. (Please see Mitrovic, Koedinger & Martin (2003) for a comparison of CBM and cognitive tutoring.)

On the contrary, CBM focuses on correct knowledge only. If a student performs an incorrect action, that action will violate some constraints. Therefore, a CBM-based tutor can react to misconceptions although it does not represent them explicitly. A violated constraint means that the student's knowledge is incomplete/incorrect, and the system can respond by generating an appropriate feedback message. Feedback messages are attached to the constraints directly, and they explain the general principle violated by the student's actions. Feedback can be made very detailed, by instantiating parts of it according to the student's action.

The domain knowledge of KERMIT is represented as a set of constraints used for testing the student's solution for syntax errors and comparing it to the ideal solution. Currently KERMIT's knowledge base consists of 92 constraints. Each constraint consists of a relevance condition, a satisfaction condition and feedback messages. The feedback messages are used to compose hints that are presented to the students when the constraint is violated.

The constraints in the knowledge base have to be specified in a formal language that can be parsed and interpreted by the system. It is imperative that the formal representation is expressive enough to test the subtle features of student solutions and compare them to ideal solutions. We have chosen a simple Lisp-like functional language. It contains a variety of functions such as 'unique', 'join' and 'exists'. The lists of entities and relationships are addressed using aliases (e.g. 'SSE' is used for the list of entities of the SS). More examples of the internal representations of the constraints can be found in the following sections.

In this section, we describe the process of acquiring constraints and then present examples of syntactic and semantic constraints. Finally, we discuss how the constraints are used to diagnose students' solutions.

Knowledge acquisition

It is well known that knowledge acquisition is a very slow, labour-intensive and time-consuming process. Anderson's group have estimated that ten hours or more is required to produce a production rule (Anderson et al., 1996). Although there is no clear-cut procedure that can be followed to identify constraints, this section discusses the paths that were explored in discovering the constraints of KERMIT's knowledge base.

Most syntactic constraints of KERMIT were formulated by analysing the target domain of ER modelling through the literature (Elmasri & Navathe, 2003). Due to the nature of the domain, the acquisition of syntactic constraints was not straightforward. Since ER modelling is an ill-defined domain, descriptions of its syntax in textbooks are informal. This process was conducted as an iterative exercise in which the syntax outline was refined by adding new constraints.

Semantic constraints are harder to formulate. We analysed sample ER diagrams and compared them against their problem specifications to formulate basic semantic constraints.

id = 10

relCond = "t"

satCond = "unique (join (SSE, SSR))"

Feedback1 = "Check the names of your entities and relationships. They must be unique."

Feedback2 = "The name of <viol> is not unique. All entity and relationship names must be unique."

Feedback3 = "The names of <viol> are not unique. All entity and relationship names must be unique."

construct = "entRel"

conceptID = 1

Fig. 9. Constraint 10


Syntactic constraints

The syntactic constraints describe the syntactically valid ER schemas and are used to identify syntax errors in students' solutions. These constraints only deal with the student's solution. They vary from simple constraints such as "an entity name should be in upper case", to more complex constraints such as "the participation of a weak entity in the identifying relationship should be total".

Constraint 10, presented in Figure 9, is a syntactic constraint. It specifies that all names of entities and relationships should be unique. The relevance condition (relCond) of this constraint is set to true (denoted by t) and is always satisfied, meaning that this constraint is relevant to all student solutions. Its satisfaction condition checks that all names the student has assigned to entities and relationships are unique. In addition to the relevance and satisfaction conditions, each constraint contains three messages (feedback1, feedback2, feedback3) that are used to generate feedback when the student violates the constraint. The first message is general, and is used to give hints to students. The other two messages are used as templates for generating detailed feedback messages. During the generation of feedback, the <viol> tag embedded in the message is replaced with the names of the constructs that have violated the constraint. Feedback2 is singular and is used for situations where a single construct has violated the constraint, whereas feedback3 is plural and is used for cases where many constructs of the solution have violated the constraint.

The types of constructs that violate the constraint are given as the construct attribute. In this example, the type of violated constructs can be either an entity or a relationship (denoted by entRel). The construct attribute is used in generating a very general feedback message that specifies the type of constructs that contain errors.
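Constraint 10's logic can be re-expressed in Python as follows. The constraint language itself is Lisp-like (see Figure 9); this translation, including function names, is ours.

```python
# Constraint 10: all entity and relationship names must be unique.
# relCond is "t" (always relevant), so only the satisfaction test matters.

def check_constraint_10(entity_names, relationship_names):
    """Return the list of offending names (empty list means satisfied).
    Joining the two lists mirrors 'unique (join (SSE, SSR))'."""
    all_names = entity_names + relationship_names
    return sorted({n for n in all_names if all_names.count(n) > 1})

def feedback_10(violators):
    """Instantiate the feedback templates: singular for one violating
    construct, plural for several, substituting the <viol> tag."""
    if not violators:
        return None
    if len(violators) == 1:
        return (f"The name of {violators[0]} is not unique. "
                "All entity and relationship names must be unique.")
    return (f"The names of {', '.join(violators)} are not unique. "
            "All entity and relationship names must be unique.")

msg = feedback_10(check_constraint_10(["STUDENT", "HALL"], ["STUDENT"]))
```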

Table 2
Concepts covered by constraints

ID  Concept
0   Syntax and notation
1   Regular entity types
2   Weak entity types
3   Isolated regular entity types
4   Regular relationship types
5   Identifying relationship types
6   n-ary regular relationship types
7   Simple attributes
8   Key attributes
9   Partial key attributes
10  Derived attributes
11  Multivalued attributes
12  Composite attributes
13  Multivalued composite attributes

The constraints are organized into fourteen concepts, given in Table 2. The concepts cover different types of objects, their different arrangements and notation and syntax rules. The concepts are used for problem selection. When selecting a problem for the student, the system decides on the concept first, and then selects a problem for the chosen concept. Each constraint is assigned to only one concept. The specific concept that the constraint deals with is specified as the conceptID of the constraint. Constraint 10 belongs to the concept with the identifier of 1, which is 'regular entities'.

Semantic constraints

Semantic constraints compare the student's solution to the ideal one. These constraints are usually more complex than syntactic constraints. Constraint 67 (Figure 10) deals with composite multivalued attributes, which can equivalently be modelled as weak entities. A weak entity type is used to model a set of entities that do not have an identifying attribute. Instead, such entities can be identified through their relationship with a regular entity type. For example, in the company database, the employee entity type is a regular one, as each employee can be identified by his/her (unique) employee number. The same database stores information about dependents of employees, but there are no unique attributes defined for them. Each dependent can be uniquely identified by his/her name, together with the ID of the employee who supports the dependent.

id = 67

relCond = "each obj ISE (and (notNull (matchSS (obj))) (and (= type (obj), type (matchSS (obj))) (> (countOfType (attributes (obj), mComp), 0)))))"

satCond = "each obj RELVNT (each att (ofType (attributes (obj)), mComp) (or (and (notNull (matchAtt (att, (matchSS (obj))))) (= (type (matchAtt (att, (matchSS (obj)))) mComp)) (and (and (notNull (matchSS (att))) (= (type att) w)) (belongs (matchSS (att), obj)))))"

Feedback1 = "Check whether your entities have all the required multivalued composite attributes. Your entities are missing some multivalued composite attributes. You can represent composite multivalued attributes as weak entities as well."

Feedback2 = "The <viol> entity is missing some composite multivalued attributes. You can represent composite multivalued attributes as weak entities as well."

Feedback3 = "<viol> entities are missing some composite multivalued attributes. You can represent composite multivalued attributes as weak entities as well."

construct = "ent"

conceptID = 13

Fig. 10. Constraint 67

As stated above, composite multivalued attributes can be modelled alternatively as weak entities. If the ideal solution contains a composite multivalued attribute, constraint 67 compares it to a composite multivalued attribute in the student's solution, or to a weak entity. The constraint is relevant for IS entities that possess a multivalued composite attribute and have a corresponding entity in the SS. The constraint is satisfied if all the corresponding entities in the SS have a matching multivalued composite attribute (of type mComp) or a matching weak entity.
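A much-simplified Python sketch of constraint 67's satisfaction logic follows. The data shapes are our own simplification (real matching goes through the highlighted-word correspondences, not bare names), but the either/or structure is the point: a multivalued composite attribute in the IS is matched by either an mComp attribute or a weak entity in the SS.

```python
# Sketch: for each multivalued composite (mComp) attribute required by the
# ideal-solution entity, the student's matching entity must either carry
# the same mComp attribute or the student must have modelled it as a
# weak entity instead.

def constraint_67_ok(is_entity, ss_entity, ss_weak_entities):
    """is_entity / ss_entity: dicts with an 'attributes' list of
    (name, kind) pairs; ss_weak_entities: set of weak-entity names."""
    for name, kind in is_entity["attributes"]:
        if kind != "mComp":
            continue                      # constraint only cares about mComp
        has_attr = (name, "mComp") in ss_entity["attributes"]
        has_weak = name in ss_weak_entities
        if not (has_attr or has_weak):
            return False                  # missing in both representations
    return True

is_emp = {"attributes": [("Number", "key"), ("Dependent", "mComp")]}
ss_as_attr = {"attributes": [("Number", "key"), ("Dependent", "mComp")]}
ss_as_weak = {"attributes": [("Number", "key")]}
```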

This constraint illustrates the ability of the system to deal with alternative correct student solutions that are different from the IS specified by a human expert. KERMIT knows about equivalent ways of solving problems, and it is this feature of the knowledge base that gives KERMIT considerable flexibility.

If constraint 67 is violated, the student will get the general feedback message (feedback1) first. If the student needs more help to correct the error, a more detailed message (feedback2 or feedback3, depending on the number of constructs that violate the constraint) will be given. Please note that the <viol> variable will be replaced by the name of the entity that violates constraint 67, thus providing a reference to the student.

The equivalent solutions identified by Constraint 67 are illustrated in Figure 11. The entity E1, in the schema labelled (a), is the owner of the weak entity W1. According to the cardinality of the entity E1 in the identifying relationship I1, E1 may own a number of W1 weak entities. Schema (a) is equivalent to schema (b), where W1 is represented as a multivalued attribute of E1. All attributes of the weak entity W1 (e.g. A1) are represented as components of the multivalued attribute W1. In this scenario, even though the same database can be modelled in two different ways, KERMIT is only given one schema as its ideal solution. KERMIT's constraints are able to identify that the other schema is also an equivalent representation of the solution.


Fig. 11. A weak entity can also be represented as a multivalued attribute

The ability to identify alternative correct solutions is very important, especially in domains where problem-solving strategies do not exist and therefore a problem solver is not available. In the ER domain, the differences between alternative solutions are small, as illustrated by the example of constraint 67. However, in other domains, these differences may be greater. In our previous work (Mitrovic & Ohlsson, 1999), SQL-Tutor was also able to recognise alternatives to the pre-specified correct solution, as constraints checked whether the student included all the necessary elements of the solution in any allowable form. In the SQL domain, the differences between alternative queries may be substantial. There may be many constraints that are necessary to ensure that two solutions are equivalent. In domains where there are many alternative approaches to solving problems, constraints will be more complex, and the process of generating the constraint set will be more demanding. However, the ability to recognize alternative solutions only depends on the correctness and the completeness of the constraint base. If the constraint set is incomplete/incorrect, the system may not be able to identify equivalent ways of solving the same problem, and in such cases the student's solution would be rejected as incorrect. As a good-quality knowledge base is a normal requirement for any knowledge-based system, this is not a limitation of the approach used.


Student modeller

KERMIT maintains two kinds of student models: short-term and long-term ones. Short-term models are generated by matching student solutions to the knowledge base and the ideal solutions. The student modeller iterates through each constraint in the constraint base, evaluating each of them individually. For each constraint, the modeller initially checks whether the current problem state satisfies its relevance condition. If that is the case, the satisfaction component of the constraint is also verified against the current problem state. Violating the satisfaction condition of a relevant constraint signals an error in the student's solution.

The short-term student model consists of the relevance and violation details of each constraint, discovered during the evaluation of the problem state. The short-term model is only dependent on the current problem state and does not account for the history of the constraints such as whether a particular constraint was satisfied during the student's last attempt. The pedagogical module uses the short-term student model to generate feedback to the student.
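The two-stage evaluation loop (relevance, then satisfaction) might look like this in outline. This is a Python sketch with invented names; KERMIT itself is written in VB and interprets a Lisp-like constraint language.

```python
# Sketch of short-term diagnosis: every constraint is tested for
# relevance against the current problem state; relevant constraints are
# then tested for satisfaction, and violations signal errors.
# A constraint is modelled here as (id, relevance_fn, satisfaction_fn).

def diagnose(constraints, ss, ideal):
    relevant, violated = [], []
    for cid, is_relevant, is_satisfied in constraints:
        if not is_relevant(ss, ideal):
            continue                     # constraint does not apply here
        relevant.append(cid)
        if not is_satisfied(ss, ideal):
            violated.append(cid)         # error in the student's solution
    return relevant, violated

constraints = [
    # Constraint 10, re-expressed: always relevant; names must be unique.
    (10, lambda ss, ideal: True,
         lambda ss, ideal: len(set(ss["names"])) == len(ss["names"])),
]
relevant, violated = diagnose(constraints, {"names": ["STUDENT", "STUDENT"]}, {})
```

The pair of lists returned here corresponds to the short-term model: which constraints were relevant to the submission, and which of those were violated.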

The long-term student model of KERMIT is implemented as an overlay model. In contrast to the short-term model, the long-term model keeps a record of each constraint's history. It records information on how often the constraint was relevant for the student's solution and how often it was satisfied or violated. The pedagogical module uses this data to select new problems for the student. The long-term student model is saved in a file when the student logs out.

Pedagogical module

The pedagogical module (PM) is the driving engine of the whole system. Its main tasks are to generate appropriate feedback messages for the student and to select new practice problems. KERMIT individualises both these actions for each student, based on their student model. Unlike ITSs that use model tracing, KERMIT does not follow each student's solution step-by-step. It only evaluates the student's solution once it is submitted. During evaluation, the student modeller identifies the constraints that the student has violated.

Feedback generation

The feedback from the system is grouped into six levels according to the amount of detail: correct, error flag, hint, detailed hint, all errors and solution. The first level of feedback, correct, simply indicates whether the submitted solution is correct or incorrect. The error flag indicates the type of construct (e.g. entity, relationship, etc.) that contains the error. Hint and detailed hint offer a feedback message generated from the first violated constraint. Hint is a general message such as "There are attributes that do not belong to any entity or relationship". On the other hand, detailed hint provides a more specific message, such as "The 'Address' attribute does not belong to any entity or relationship", where the details of the erroneous object are given. Not all detailed hint messages give the details of the construct in question, since giving details on missing constructs would give away solutions. A list of feedback messages on all violated constraints is displayed at the all errors level. The ER schema of the complete solution is displayed at the final level (solution).

Initially, when the student begins to work on a problem, the feedback level is set to correct. As a result, the first time a solution is submitted, a simple message indicating whether or not the solution is correct is given. This initial level of feedback is deliberately low, so as to encourage students to solve the problem by themselves. The level of feedback is incremented with each submission until it reaches the detailed hint level. In other words, if the student submits the solution four times, the feedback level reaches detailed hint, thus incrementally providing more detailed messages. The system was designed to behave in this manner to reduce any frustration caused by not knowing how to construct the correct ER model. The automatic incrementing of feedback levels stops at the detailed hint level, to encourage the student to concentrate on one error at a time rather than all the errors in the solution. Moreover, if the system automatically displayed the solution on the sixth attempt, it would discourage students from attempting to solve the problem at all, and might even lead to frustration. The system also gives the student the freedom to manually select any level of feedback according to their needs. This provides a better feeling of control over the system, which may have a positive effect on their perception of it.

In the case when there are several violated constraints, and the level of feedback is different from 'all errors', the system will generate the feedback on the first violated constraint. The constraints are ordered in the knowledge base by the human teacher, and that order determines the order in which feedback would be given.
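The level-progression policy described above can be sketched as follows; the function names and the manual-override mechanism are our own framing of the behaviour, not KERMIT's code.

```python
# Sketch of the feedback-level policy: levels rise with each submission
# but the automatic increment stops at 'detailed hint'; the student may
# still select any level (including 'solution') manually.

LEVELS = ["correct", "error flag", "hint", "detailed hint",
          "all errors", "solution"]
AUTO_CAP = LEVELS.index("detailed hint")

def next_level(current, manual_choice=None):
    if manual_choice is not None:        # student's explicit choice wins
        return manual_choice
    return min(current + 1, AUTO_CAP)    # auto-increment, capped

level = 0                                # first submission: 'correct'
for _ in range(5):                       # five further submissions
    level = next_level(level)
```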

Problem selection

KERMIT examines the long-term student model to select a new practice problem for the student. In selecting a new problem from the available problems, the system first decides what concept is appropriate for the student on the basis of the student model. The concept that contains the greatest number of violated constraints is targeted. We have chosen this simple problem selection strategy in order to ensure that students get the most practice on the concepts with which they experience difficulties. In situations where there is no obvious "best" concept (i.e. a prominent group of constraints to be targeted), the next problem in the list of available problems, ordered according to increasing complexity, is given.
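This selection strategy can be sketched in a few lines; the shape of the long-term model data is our assumption, chosen to mirror the per-constraint violation counts the paper describes.

```python
from collections import Counter

# Sketch of concept selection: target the concept whose constraints the
# student has violated most; signal a fallback (next problem in
# complexity order) when no concept stands out.

def choose_concept(long_term_model):
    """long_term_model: list of (concept_id, violated_count), one entry
    per constraint. Returns a concept id, or None for the fallback."""
    totals = Counter()
    for concept_id, violated in long_term_model:
        totals[concept_id] += violated
    if not totals or max(totals.values()) == 0:
        return None                      # no obvious "best" concept
    return totals.most_common(1)[0][0]

# Constraint histories: concept 8 (key attributes) violated most often.
model = [(1, 0), (8, 3), (8, 2), (4, 1)]
```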

Authoring tool for adding new problems

The ideal solutions are represented internally in an encoded textual representation, as discussed previously. Similarly, the problem text is also stored internally with embedded tags. Therefore, adding new problems and their solutions to the teaching system requires extensive knowledge of their internal representations. In order to ease the task of adding new problems, we developed an authoring tool that manages the process of inserting new problems and automatically converts the problems and solutions to their internal representations.

The authoring tool offers a number of benefits. It eliminates the burden of having to learn the complicated grammar used to represent the ideal solutions internally. As a consequence, teachers and other database professionals can add new problems and their ideal solutions to KERMIT easily. This feature makes it possible for the system to evolve without the need for programming resources. Furthermore, it makes it possible for teachers to customise the system by modifying the problem database to consist of problems that they select. As a result, database teachers would have better control over the subject material presented to the students.


The process of adding new problems using the authoring tool consists of three phases. The author needs to enter the problem text, then draw the ER diagram and finally specify the correspondences between the words of the problem text and the objects in the ideal solution.

Initially, the user is given a text box in which to insert the problem text. At the completion of this phase, the user is presented with an interface, similar to the problem-solving interface presented to students, in which they can construct the ideal solution to the problem. Once the user completes the ideal solution to the problem using the ER modelling workspace, the authoring tool generates an image (in GIF format) of the ideal solution and saves it in the solutions directory to be used for the complete solution feedback level. The final phase involves the human teacher specifying the positions of the tags that need to be embedded in the problem text. The authoring tool automatically generates a unique ID for each construct in the solution and iteratively goes through each construct prompting the user to select the words in the problem text that correspond to the construct. It is up to the human teacher to make sure that he or she has specified all the relevant words of the problem text that correspond to a particular construct. Lastly, the tool adds the new problem with its embedded tags and its ideal solution converted to its internal representation.

EVALUATION

As the credibility of an ITS can only be gained by proving its effectiveness in a classroom environment or with typical students, we have conducted two evaluation studies on KERMIT, described in this section.

Pilot Study

The pilot study was conducted to evaluate the effectiveness of KERMIT and its contribution to learning. This study involved two versions of the system. One group of students used the full version, which generates the student model and offers various levels of feedback. The other group worked with ER-Tutor, a cut-down version of the system. We wanted to have a version of the system that would be similar to a classroom situation, where students do not get individual feedback. However, we also wanted all students to work on computers. Therefore, ER-Tutor contained the same problems and the drawing tool as KERMIT, but neither evaluated students' solutions nor offered individualized feedback. The only level of feedback available to students in ER-Tutor was the complete solution. The interfaces of both systems were similar, but with the option of selecting feedback and the feedback textbox missing from ER-Tutor. Therefore, in the pilot study we were interested to see the effects of the individualization on students' learning. The pilot study also assessed the students' perception of the two systems.

There were other differences between the two systems besides the student model and feedback options. When a new construct is created in KERMIT, the system requires the student to name it by highlighting a phrase from the problem text. In ER-Tutor the student is not required to do that, and can type any name. Furthermore, KERMIT shows a tutorial at the beginning of the session, explaining the various features of the system, including highlighting of the problem text. Students using ER-Tutor were not shown this tutorial, and therefore had more time for interaction. The students who were using KERMIT were required to complete the current problem before moving on to the next one, whereas ER-Tutor allowed students to skip problems as they pleased.

The pilot study took place at Victoria University of Wellington (VUW). Twenty-eight volunteers participated, drawn from students enrolled in the Database Systems course (COMP302) offered by the School of Mathematical and Computer Sciences at VUW. The course is a third-year computer science paper that teaches ER modelling as defined by Elmasri and Navathe (2003). The students who participated in the pilot study had previously learnt database modelling in the course's lectures and labs.

Procedure

The study involved two streams of one-hour sessions. The participants were randomly allocated to the groups. Although the study was originally planned as a two-hour session, each group was given only one hour due to resource shortages at VUW. Initially each student was given a document containing a brief description of the study and a consent form. The students sat a pre-test and then interacted with the system. Finally, the participants were given a post-test and a questionnaire. Students were asked to stop interacting with the system approximately 45 minutes into the study, as the study had to be concluded within an hour.

Pre- and post-tests (given in Appendix A) were used to evaluate the students' knowledge before and after interacting with the system. To minimise any prior-learning effects, we designed two tests (A and B) of approximately equal complexity. Each contained two questions: a multiple-choice question asking the student to choose the ER schema that correctly depicted a given scenario, and a question asking the student to design a database that satisfied a given set of requirements.

The tests were short because of the short duration of the study; if the tests had contained more questions, the students would have had no time to interact with the system. In order to reduce any bias towards either test A or B, the first half of each group was given test A as the pre-test and the remainder were given test B as the pre-test. The students who had test A as their pre-test were given test B as their post-test, and vice versa.
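The counterbalanced (crossover) assignment described above can be sketched as follows; the helper function and participant identifiers are hypothetical illustrations, not part of the study materials.

```python
def assign_tests(participants):
    """Counterbalanced assignment: the first half of the group sits
    test A as the pre-test, the second half sits test B; each
    student's post-test is the other test."""
    half = len(participants) // 2
    assignment = {}
    for i, p in enumerate(participants):
        pre = "A" if i < half else "B"
        post = "B" if pre == "A" else "A"
        assignment[p] = (pre, post)
    return assignment
```

This design means that any difference in difficulty between tests A and B affects pre- and post-test scores symmetrically across the group.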

All the participants interacted with either KERMIT or ER-Tutor, composing ER diagrams that satisfied the given sets of requirements. They worked individually, solving problems at their own pace. The set of problems and the order in which they were presented were identical for both groups. The six problems were ordered by increasing complexity.

The system assessment questionnaire (given in Appendix C) recorded the students' perceptions of the system. The questionnaire contained fourteen questions. Students were first asked about their previous experience with ER modelling and with CASE tools. Most questions asked the participants to rate various aspects on a five-point Likert scale ranging from very good (5) to very poor (1), including how much they learnt about ER modelling by interacting with the system and how much they enjoyed using it. The students were also able to give free-form responses. Finally, suggestions for enhancing the system were requested.

Learning

All important events that occurred during an interaction session with KERMIT, such as logging in, submitting a solution and requesting help, are recorded in a log specific to each
