
Evaluation Scenario


The evaluation scenario consists of a knowledge mart where three agents produce knowledge on behalf of a teaching coordinator (C1) and two instructors (I1 and I2). The goal of the agents in the mart is the development of an IMS/SCORM (IMS, 2001) learning object, a course named 'XML Programming', that fulfills a set of educational objectives. Although the IMS standards provide means for describing each part of the learning object (e.g., organizations, resources, metadata, etc.), we will restrict the discussion to the ToC structure and devote the interaction process to it.

When authors submit proposals, they include the differences between both ToCs and refer to the same interaction. The interaction protocol is executed by every receiving author until the proposal is eventually accepted or replaced with a further elaborated one. This process continues until some proposal wins all evaluations, an agreement is reached, or some degree of consensus is achieved (depending on the kind of interaction, i.e., competitive, negotiating, or cooperative). Although the authors behave asynchronously, the agents' interaction protocol helps to synchronize their operations.

Objectives

These are the educational objectives that define the preference relation used to evaluate proposals about the ToC of the course:

1. Ability to program XHTML (i.e., XML-generated HTML) web applications

2. Ability to program server-side web applications

3. Ability to program XML data exchange applications

The degree of fulfillment of educational objectives is modeled as a three-component vector x = (x1,x2,x3), with xk ∈ I=[0,1] for k=1,2,3. Let f: I3→ I be a numerical measure of how well a proposal meets the objectives.
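The chapter does not fix a concrete relevance measure f, so the sketch below uses a weighted average of the objective-fulfillment components as one possible instance; the function name and the uniform weights are assumptions made purely for illustration.

```python
# Sketch of a relevance measure f: I^3 -> I over objective-fulfillment
# vectors x = (x1, x2, x3). A weighted average is only one admissible
# choice of f; the weights here are hypothetical.

def relevance(x: tuple[float, float, float],
              weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Aggregate per-objective fulfillment degrees into a single score in [0, 1]."""
    assert all(0.0 <= xi <= 1.0 for xi in x), "each component must lie in I = [0, 1]"
    return sum(w * xi for w, xi in zip(weights, x))

print(round(relevance((0.8, 0.5, 0.2)), 3))  # 0.5
```

Any monotone aggregation over I^3 would serve equally well here; the only property the evaluation criteria rely on is that f maps fulfillment vectors into the unit interval.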

Evaluation Criteria

The relevance of a proposal is graded by the fulfillment of the educational objectives described above. If all objectives are equally satisfied, the rank of the submitting agent decides (a coordinator ranks higher than an instructor). If ranks are also equal, the time at which the proposal was issued decides. To determine the instant of generation, every proposal includes a time-stamp.

Each proposal p is described by a three-component vector (p1, p2, p3), where:

p1=f(x) measures the degree of fulfillment of educational objectives.

p2 is the numerical rank held by the submitter agent.

p3 is a time-stamp.

Notation for Proposals

To simplify the notation, proposals are represented as xij, where xi identifies the author and j is a sequence number ordered by the proposal's instant of generation. Using this notation, the following proposals will be elaborated by the agents in the mart:

i11: Create a unique chapter for “XML script programming.”

i12: Divide “XML script programming” into two new chapters: “Client-side XML script programming” and “Server-side XML script programming.”

i21: Add a chapter about “Document Type Definitions (DTD).”

i22: Add a chapter about “DTD and XML schemas.”

c11: Add a chapter about “Using XML as data.”

Preference Relationship

A preference relationship > is defined between any two proposals p = (p1, p2, p3) and q = (q1, q2, q3):

p > q ⇔ (p1 > q1) ∨ [(p1 = q1) ∧ (p2 > q2)] ∨ [(p1 = q1) ∧ (p2 = q2) ∧ (p3 > q3)]   (1)

The preference relation given in (1) defines a partial order in which i11 < i12 < i21 < i22 < c11, in accordance with the evaluation criteria.
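Relation (1) is a lexicographic comparison on the triples (p1, p2, p3): relevance first, then submitter rank, then time-stamp. The sketch below implements it directly; the numeric values assigned to the proposals are hypothetical, chosen only so that the ordering i11 < i12 < i21 < i22 < c11 stated above holds.

```python
# Direct transcription of preference relation (1).
def prefers(p: tuple[float, int, float], q: tuple[float, int, float]) -> bool:
    """True iff p > q: compare relevance, then rank, then time-stamp."""
    return (
        p[0] > q[0]
        or (p[0] == q[0] and p[1] > q[1])
        or (p[0] == q[0] and p[1] == q[1] and p[2] > q[2])
    )

# (relevance, rank, time-stamp) triples -- illustrative values only.
i11, i12 = (0.40, 1, 1.0), (0.55, 1, 2.0)
i21, i22 = (0.60, 1, 1.5), (0.70, 1, 2.5)
c11 = (0.70, 2, 3.0)  # same relevance as i22; the coordinator's rank decides

for worse, better in [(i11, i12), (i12, i21), (i21, i22), (i22, c11)]:
    assert prefers(better, worse) and not prefers(worse, better)
```

Note that for triples represented this way, Python's built-in tuple comparison `p > q` coincides with relation (1), since tuples are also compared lexicographically.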

Sequence of Events

The sequence of events generated by agents in the mart is made up of three acts, which are depicted in Figure 2 and traced as follows:

(1) I1 starts by sending a proposal i11. When i11 is about to be consolidated, agent I2 issues a better evaluated proposal, i21. C1 does nothing and silently accepts every proposal that reaches it.

(2) I1 and I2 elaborate two respective proposals, i12 and i22, at approximately the same time during the distribution phase. The proposal from I2 has a better evaluation than I1's, and both are better than those in the first act.

(3) C1 builds and sends the best-evaluated proposal of this scene, which will eventually win the evaluation.

The series of messages exchanged during act (2) is depicted in more detail in Figure 3 and is described as follows:

(a) Both I1 and I2 receive each other’s proposal and begin the distribution phase, therefore starting timeout t0. Proposals i12 and i22 also arrive at C1, which is not participating in the process and silently receives them.

(b) I1 compares i22 to i12 and finds that its own proposal has the worse evaluation. It is reasonable for an evaluation of proposal i22 to obtain a higher value than i12 with respect to the third objective described above; concerning the first and second objectives, any relevance function should yield similar values for both proposals, so they would not be decisive. I1 therefore starts timeout t1, giving i22 a chance to be consolidated. On its side, I2 also compares both proposals and reminds I1 of the result by sending i22 again, extending timeout t0 to give other agents' proposals a chance to arrive.

(c) When timeout t0 expires, I2 sends a consolidation message for i22 that arrives at every agent in the mart. Upon reception, I1 finishes the protocol, since it was expecting the consolidation of i22. C1 simply accepts the notification.

(d) Finally, at the expiration of t1, I2 is notified about the end of the consolidation phase for i22, and its execution of the protocol finishes successfully. Therefore, every agent in the mart will eventually know about the consolidation of the proposal.
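Steps (a) through (d) above can be summarized as a timeline driven by the two timeouts: t0 closes the distribution phase and t1 closes the consolidation phase. The sketch below is a minimal illustrative trace, not the chapter's implementation; the helper name, the event instants, and the default timeout values are all assumptions.

```python
# Hypothetical timeline of act (2) of the interaction protocol.
def act2_trace(t0: float = 2.0, t1: float = 1.0) -> list[tuple[float, str]]:
    """Return (time, event) pairs for steps (a)-(d) of act (2)."""
    t_eval = 0.5  # assumed instant at which I1 sees that i22 beats i12
    return [
        (0.0, "(a) I1/I2 exchange i12 and i22; C1 silently receives both; t0 starts"),
        (t_eval, "(b) I1 starts t1 for i22; I2 re-sends i22 and extends t0"),
        (t_eval + t0, "(c) t0 expires: I2 broadcasts consolidate(i22); I1 and C1 accept"),
        (t_eval + t0 + t1, "(d) t1 expires: consolidation of i22 finishes for every agent"),
    ]

for t, event in act2_trace():
    print(f"t={t:.1f}  {event}")
```

The essential design point the trace captures is that consolidation is delayed rather than immediate: both timeouts exist to leave a window in which a better-evaluated proposal can still displace the current winner.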

These tests have been carried out to examine how far the multi-agent architecture facilitates the coordination of a group of actors that are producing learning objects. In this educational scenario, quantitative measurements of performance and throughput were taken concerning observable actions and behaviors about a number of aspects, such as conflict-solving facility, effectiveness in coordinating the production process, fair participation of members from other groups, overall quality of the final results, and speed and quality of the generated knowledge.

In this context, the agent-mediated solution has been found to facilitate the following aspects of the distributed creation of learning objects during the instructional design process:

• Bring together instructional designers’ different paces of creation.

• Take advantage of designers’ different skills in the domain and in the tools being used.

• Reduce the number of conflicts provoked by interdependencies between different parts of the learning objects.

• In a general sense, avoid duplication of effort.

Figure 2. Sequence of Events for the Learning Object Evaluation Scenario

[Figure 2 is a message sequence chart among agents I1, I2, and C1, divided into acts I-III. Act I: proposal(i11), proposal(i21), and consolidate(i21), with their distribution and consolidation phases. Act II: proposal(i12), proposal(i22), and consolidate(i22). Act III: proposal(c11) and consolidate(c11).]

CONCLUSION

This work presents a participative multi-agent architecture for developing knowledge production systems. Multi-agent interaction approaches and protocols can be designed in a top-down or a bottom-up fashion. The architecture presented in this chapter takes a bottom-up approach to the design of collaborative multi-agent systems, where every mart holds responsibility for some domain-level knowledge, while coordination-level interfaces to other domains are well defined. This structuring of knowledge marts can help to reduce inconsistencies between agent territories.

The participative approach presented in this work has been successfully applied to the development of learning objects, but is also applicable to other knowledge production tasks (Dodero et al., 2002). Results obtained from single-mart and two-mart evaluation scenarios have been contrasted, with the result that the coordination protocol improves conflict-solving and coordination during the shared development process. Moreover, the absence of participation of some agent does not delay the overall process. Nevertheless, in order to test the multilevel architecture, these results need to be confirmed in more complex scenarios, consisting of two or more groups of participative agents working in different knowledge marts. We are also conducting tests of the impact of the number of agents on the overall effectiveness of the model.

Figure 3. Execution Example of the Interaction Protocol

[Figure 3 is a message sequence chart between I1 and I2 showing: (a) the initial exchange of proposals; (b) the actions taken after receiving and evaluating proposals; (c) the consolidation of proposal i22 after t0 expires; and (d) the end of the protocol after t1 expires.]

Further validation is also needed to assess the usefulness of the approach in different application scenarios, such as software development, especially in the analysis and design stages.

The structuring of heterogeneous knowledge domains into marts raises a number of issues: What happens if an agent changes the kind of knowledge it produces, so that it would be better classified in another mart? As time progresses, will the knowledge being produced in a mart drift towards a different category? As future work, it seems reasonable to establish the membership of agents in marts dynamically, such that an agent can move to another mart if the knowledge it produces affects the interaction processes carried out there. Division and/or fusion of marts may then be needed to better reflect the proposed knowledge-directed structure; clustering techniques can readily be applied to solve those issues. It would also be helpful for mart generation and the affiliation of agents to marts to depend on the agents' ontology-based interests.

REFERENCES

Argyris, C. (1993). Knowledge for Action. San Francisco, CA: Jossey-Bass.

Clases, C. & Wehner, T. (2002). Steps across the border: Cooperation, knowledge production and systems design. Computer Supported Cooperative Work, 11, 39-54.

Davenport, T. H. & Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know. Boston, MA: Harvard Business School Press.

Delgado, M., Herrera, F., Herrera-Viedma, E., & Martínez, L. (1998). Combining numerical and linguistic information in group decision making. Information Sciences, 7, 177-194.

Dodero, J. M., Aedo, I., & Díaz, P. (2002). Participative knowledge production of learning objects for e-books. The Electronic Library, 20, 296-305.

Etzioni, O. & Weld, D. S. (1995). Intelligent agents on the Internet: Fact, fiction, and forecast. IEEE Expert, 10, 44-49.

Foundation for Intelligent Physical Agents (FIPA). (1997). Specification Part 2: Agent Communication Language (Technical Report). Geneva, Switzerland: FIPA.

Genesereth, M. & Tenenbaum, J. (1991). An agent-based approach to software (Technical Report). Stanford, CA: Stanford University Logic Group.

Goldman, A. H. (1991). Empirical Knowledge. Berkeley, CA: University of California.

Gruber, T. R. (1991). The role of common ontology in achieving sharable, reusable knowledge bases. In J. Allen, R. Fikes & E. Sandewall (Eds.), Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning, San Mateo, California (pp. 601-602). San Francisco, CA: Morgan Kaufmann.

IMS Global Learning Consortium. (2001). IMS content packaging information model, version 1.1.2 (Technical Report). Burlington, MA: IMS Global Learning Consortium.

Jennings, N. R. & Campos, J. R. (1997). Towards a social level characterisation of socially responsible agents. IEEE Proceedings on Software Engineering, 144, 11-25.

Koper, E. J. R. (1997). A method and tool for the design of educational material. Journal of Computer Assisted Learning, 14, 19-30.

Maes, P. (1994). Agents that reduce work and information overload. Communications of the ACM, 37, 31-40.

Mayfield, J., Labrou, Y. & Finin, T. (1995). Desiderata for agent communication languages. In AAAI Spring Symposium on Information Gathering (pp. 123-130).

Merrill, M. D. (1994). Instructional Design Theory. Englewood Cliffs, NJ: Educational Technology Publications.

Smith, R. G. & Davis, R. (1981). Frameworks for cooperation in distributed problem solving. IEEE Transactions on Systems, Man, and Cybernetics, 11, 61-70.

Stewart, T. A. (1997). Intellectual Capital: The New Wealth of Organizations. New York: Doubleday.

Sycara, K. (1990). Persuasive argumentation in negotiation. Theory and Decision, 28, 203-242.

Chapter IV

Customized Recommendation
