
HAL Id: hal-02984952

https://hal.archives-ouvertes.fr/hal-02984952

Submitted on 1 Nov 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Unawareness in multi-agent systems with partial valuations

Line van den Berg, Manuel Atencia, Jérôme Euzenat

To cite this version:

Line van den Berg, Manuel Atencia, Jérôme Euzenat. Unawareness in multi-agent systems with partial valuations. LAMAS 2020 - 10th AAMAS workshop on Logical Aspects of Multi-Agent Systems, May 2020, Auckland, New Zealand. ⟨hal-02984952⟩


Unawareness in Multi-Agent Systems with Partial Valuations

Line van den Berg, Manuel Atencia, Jérôme Euzenat

{line.van-den-berg,manuel.atencia,jerome.euzenat}@inria.fr

Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LIG, F-38000 Grenoble France

ABSTRACT

Public signature awareness is satisfied if agents are aware of the vocabulary, the propositions, used by other agents to think and talk about the world. However, assuming that agents are fully aware of each other’s signatures prevents them from adapting their vocabularies to newly gained information, from the environment or learned through agent communication. Therefore this is not realistic for open multi-agent systems. We propose a novel way to model awareness with partial valuations that drops public signature awareness and can model agent signature unawareness, and we give a first view on defining the dynamics of raising and forgetting awareness in this framework.

KEYWORDS

Awareness; dynamic epistemic logic; partial valuations; multi-agent systems

1 INTRODUCTION

Agents use propositions to represent the information they have about the world. They may use different propositions and may not be aware of the propositions used by other agents, i.e. their signature, yet they may still need to communicate. In multi-agent modal logics, and in particular Dynamic Epistemic Logic (DEL), all agents share the same signature. However, this is neither desirable nor practical for open multi-agent systems, because it prevents agents from acquiring new vocabulary or adapting their current signatures when learning new information from the environment or through agent communication.

This problem lies at the core of DEL: dynamic upgrades shrink or re-arrange the models so that the carried information becomes knowledge or belief in the resulting model. But this requires agents to already be aware of the possible future evolutions of their knowledge and beliefs, and leaves them unable to adapt their signatures.

We propose a novel way to model agent awareness with partial valuations that (i) allows agents to be unaware of other agents’ signature and that (ii) enables knowledge representations to dynamically evolve. This enables us to drop public signature awareness and raise awareness of agents when they acquire new vocabulary.

2 RELATED WORK

Partial valuations have already been introduced for (Dynamic) Epistemic Logic [3–5, 7], but not connected to (dynamic) agent awareness. However, we are not the first to capture unawareness and awareness of agents. In [1], epistemic logic is extended with an operator 𝐴𝜙 to denote “awareness of 𝜙”, and a complete dynamic logic with upgrades for increasing and decreasing agent awareness was developed in [6, 8–10]. In this approach, each proposition is evaluated at each world and only awareness is defined as a partial function. That is, all the propositions that agents may become aware of in the future are already specified in the initial setting. As a consequence, increasing agent awareness also uncovers the underlying truth values. Awareness is then used to distinguish between ‘implicit’ and ‘explicit’ knowledge [8].

In this paper, we propose a different viewpoint and consider becoming aware of a proposition and becoming aware of its truth value as two different acts. This enables models to evolve openly in their entirety.

3 UNAWARENESS

With partial semantics, lack of truth and falsity are not the same. This enables agents to be uncertain about a statement 𝑝, i.e. not knowing whether it is true or false (in the figure below on the left), like in the case with standard semantics, but also to be unaware of it, i.e. not considering it (in the figure below on the right). An agent is unaware of 𝑝 if 𝑝 is not evaluated at the worlds the agent considers plausible, where plausibility from a world 𝑤 to a world 𝑣 for agent 𝑎 is defined as follows: 1) there is an arrow from 𝑤 to 𝑣 for 𝑎 (𝑤𝑅𝑎𝑣), and 2) there is a (reflexive) arrow from 𝑣 to 𝑣 for 𝑎 (𝑣𝑅𝑎𝑣).

[Figure: left, an uncertain agent considering both a world 𝑤 where 𝑝 holds and a world 𝑣 where it does not; right, an unaware agent whose plausible world 𝑢 does not evaluate 𝑝.]

To allow agents to have different knowledge representations about the world, and to be unaware of each other’s signatures, there is only a ‘weak reflexivity’ requirement: 𝑤𝑅𝑎𝑤 and 𝑤𝑅𝑎𝑣 implies 𝑣𝑅𝑎𝑣. Reflexivity and the lack of reflexivity allow us to control what agents are aware of and therefore can have knowledge (or beliefs) about. For example, consider two agents 𝑎 and 𝑏 that represent the world with the propositions 𝑝 and 𝑞, respectively, that they each know but that the other agent is unaware of - and therefore cannot know or believe anything about. The states of the agents are described as follows, where from 𝑤𝑎 (𝑤𝑏) agent 𝑏 (𝑎) does not have a (reflexive) arrow to 𝑤𝑎 (𝑤𝑏) but instead only to another world 𝑣𝑎 (𝑣𝑏) where 𝑝 (𝑞) is undefined:

[Figure: the models of agents 𝑎 and 𝑏, with reflections 𝑤𝑎 (where 𝑝 is true) and 𝑤𝑏 (where 𝑞 is true), and worlds 𝑣𝑎, 𝑣𝑏 where 𝑝 and 𝑞, respectively, are undefined.]
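The two-agent example can be sketched in code. This is a minimal illustration under an encoding of ours, not the authors' implementation: partial valuations are dictionaries (a missing key means the proposition is undefined at that world), relations are sets of pairs, and world names (`wa`, `va`, ...) match the figure.

```python
# Partial valuations: an absent key means the proposition is undefined there.
V = {
    "wa": {"p": True},   # agent a's reflection: p holds
    "va": {},            # p undefined: b's view of a's world
    "wb": {"q": True},   # agent b's reflection: q holds
    "vb": {},            # q undefined: a's view of b's world
}

# Accessibility relations, one per agent, as sets of (source, target) pairs.
# From wb, agent a only reaches vb (no reflexive arrow at wb for a), and
# symmetrically for b from wa.
R = {
    "a": {("wa", "wa"), ("wb", "vb"), ("vb", "vb")},
    "b": {("wb", "wb"), ("wa", "va"), ("va", "va")},
}

def plausible(agent, w):
    """Worlds v plausible from w: w R_a v and v R_a v (the reflexive arrow)."""
    rel = R[agent]
    return {v for (u, v) in rel if u == w and (v, v) in rel}

def aware(agent, w, prop):
    """An agent is aware of prop at w iff prop is evaluated at the worlds
    she considers plausible from w."""
    worlds = plausible(agent, w)
    return bool(worlds) and all(prop in V[v] for v in worlds)

print(aware("a", "wa", "p"))  # True: a evaluates p at her reflection
print(aware("b", "wa", "p"))  # False: b only reaches va, where p is undefined
```

This mirrors the figure: each agent knows her own proposition while the other agent, lacking the reflexive arrow, is unaware of it.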

We model the knowledge and beliefs of agents from an agent perspective, where each agent can use a different signature, or vocabulary. Thus, instead of one actual world as with standard semantics for DEL, agents have different ways to represent the actual world: these are reflections of the actual world, representing the actual world as the agent sees it.

We require that the reflections are consistent. More specifically, that for each agent, there is a reflection that is consistent with a reflection of each other agent. In the example above, the reflections are 𝑤𝑎 and 𝑤𝑏 for agent 𝑎 and 𝑏, respectively, and they are indeed consistent: 𝑝 and 𝑞 do not contradict each other.

This enables models to be truly open: even the reflections of the actual world are not constrained to interpret the same propositions.
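The consistency requirement on reflections can be made concrete. A sketch under our dict-based encoding (the function name `consistent` is ours): two partial valuations are consistent when they agree on every proposition they both evaluate, so disjoint signatures can never clash.

```python
def consistent(val1, val2):
    """Partial valuations are consistent iff they agree wherever both
    are defined; propositions evaluated in only one of them cannot
    cause a contradiction."""
    return all(val1[p] == val2[p] for p in val1.keys() & val2.keys())

wa = {"p": True}   # agent a's reflection
wb = {"q": True}   # agent b's reflection

print(consistent(wa, wb))                      # True: disjoint signatures
print(consistent({"p": True}, {"p": False}))   # False: they contradict on p
```

Note that consistency here is vacuously satisfied when the shared domain is empty, which is exactly what makes the models "truly open".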

3.1 Properties of awareness

We require that awareness cannot be lost over the relations 𝑅𝑎, but is preserved. Similar properties for awareness were already motivated in [1, 2]. In [1], awareness is assumed to only increase over time and in [2] awareness is considered constant for all the worlds the agent has access to.

In our semantics, preserving agent awareness over the relations 𝑅𝑎 comes two-fold:

- whenever an agent 𝑎 has a (reflexive) relation from 𝑤 to 𝑤, she also has a (reflexive) relation from 𝑣 to 𝑣 for any 𝑣 such that 𝑤𝑅𝑎𝑣 (weak reflexivity);
- and the propositions that are evaluated (defined) at 𝑤 remain evaluated at any 𝑣 such that 𝑤𝑅𝑎𝑣.

The latter property is specified as follows:

- the evaluated propositions cannot increase over 𝑅𝑎 (specification);
- and any two worlds that can be reached from a world 𝑤 by the same agent via 𝑅𝑎 share the same evaluated propositions (consideration consistency).

Together, the requirements of awareness enforce that agents are consistent in their considerations: if an agent 𝑎 considers a proposition 𝑝 or its negation plausible at a world 𝑤, she considers 𝑝 or its negation plausible at every world she can reach via 𝑅𝑎 from 𝑤.

Definition 3.1 (Properties of awareness). Let 𝑊 be a set of states, 𝑎 be an agent with a relation 𝑅𝑎 ⊆ 𝑊 × 𝑊, and 𝑉 a valuation function that assigns to each state a partial function 𝑉𝑤 : P → {0, 1}. Then the properties of awareness are formalized as:

Weak reflexivity: ∀𝑤, 𝑣 ∈ 𝑊 : 𝑤𝑅𝑎𝑤 ∧ 𝑤𝑅𝑎𝑣 ⇒ 𝑣𝑅𝑎𝑣
Specification: ∀𝑤, 𝑣 ∈ 𝑊 : 𝑤𝑅𝑎𝑣 ⇒ 𝐷𝑜𝑚(𝑉𝑣) ⊆ 𝐷𝑜𝑚(𝑉𝑤)
Consideration consistency: ∀𝑤, 𝑣, 𝑢 ∈ 𝑊 : 𝑤𝑅𝑎𝑣 ∧ 𝑤𝑅𝑎𝑢 ⇒ 𝐷𝑜𝑚(𝑉𝑣) = 𝐷𝑜𝑚(𝑉𝑢)

where the set of evaluated propositions at world 𝑤, the domain 𝐷𝑜𝑚(𝑉𝑤), is defined as 𝐷𝑜𝑚(𝑉𝑤) = {𝑝 ∈ P | 𝑉𝑤(𝑝) ∈ {0, 1}}.
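On a finite model, the three properties of Definition 3.1 are directly checkable. A sketch under our encoding, on toy data (the model below is ours, not from the paper): note that 𝑤 has no reflexive arrow, which is what allows its domain to be larger than that of its successors.

```python
W = {"w", "v", "u"}
# No (w, w): w itself is not plausible, so its extra proposition q is allowed.
Ra = {("w", "v"), ("v", "v"), ("w", "u"), ("u", "u")}
V = {
    "w": {"p": True, "q": False},
    "v": {"p": False},
    "u": {"p": True},
}

def dom(w):
    """Dom(V_w): the propositions evaluated (defined) at world w."""
    return set(V[w])

def weak_reflexivity(W, R):
    # w R w and w R v imply v R v
    return all((v, v) in R for w in W for v in W
               if (w, w) in R and (w, v) in R)

def specification(R):
    # evaluated propositions cannot increase along R: Dom(V_v) ⊆ Dom(V_w)
    return all(dom(v) <= dom(w) for (w, v) in R)

def consideration_consistency(R):
    # all successors of the same world share the same domain
    return all(dom(v) == dom(u)
               for (w, v) in R for (w2, u) in R if w == w2)

print(weak_reflexivity(W, Ra),
      specification(Ra),
      consideration_consistency(Ra))  # True True True
```

The checks are quadratic and cubic in |W| respectively, which is fine for the small models used here.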

3.2 Semantics

The semantics that we use are different from the semantics of Partial (Dynamic) Epistemic Logic in [4] in two ways:

- knowledge and belief are defined as truth in all accessible and all most plausible worlds, respectively, in which reflexivity is satisfied;
- and formulas 𝜙 are only true (or false) whenever all propositions occurring in 𝜙 are defined.

The first condition shapes our epistemic (∼𝑎) and doxastic (→𝑎) relations via 𝑅𝑎: 𝑤 ∼𝑎 𝑣 iff 𝑣𝑅𝑎𝑣 and either 𝑤𝑅𝑎𝑣 or 𝑣𝑅𝑎𝑤, and 𝑤 →𝑎 𝑣 iff 𝑣 ∈ 𝑀𝑎𝑥𝑅𝑎 {𝑢 | 𝑤𝑅𝑎𝑢 ∧ 𝑢𝑅𝑎𝑢}. Requiring reflexivity enables us to control that agents can only know or believe a proposition if they are aware of it.

The second condition strengthens this: it ensures that agents can only know (or believe) a formula if they have full awareness of the propositions that occur in it. For example, unlike the work in [4], this means that an agent 𝑎 can only know (or believe) a disjunction, i.e. 𝐾𝑎(𝑝 ∨ 𝑞), if she is aware of both disjuncts 𝑝 and 𝑞.
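The knowledge clause and the definedness condition can be sketched together. This is an illustrative encoding of ours (formulas are atoms or `("or", left, right)` tuples; other connectives are omitted for brevity), not the paper's machinery:

```python
V = {"w": {"p": True}, "v": {"p": True}}
Ra = {("w", "w"), ("w", "v"), ("v", "v")}

def worlds_of(R):
    return {x for pair in R for x in pair}

def epistemic(R, w):
    """w ~_a v iff v R_a v and (w R_a v or v R_a w)."""
    return {v for v in worlds_of(R)
            if (v, v) in R and ((w, v) in R or (v, w) in R)}

def formula_props(formula):
    if isinstance(formula, str):   # atomic proposition
        return {formula}
    _op, left, right = formula     # only ("or", phi, psi) in this sketch
    return formula_props(left) | formula_props(right)

def true_at(formula, world):
    # Partiality: a formula has no truth value unless ALL of its
    # propositions are defined at the world.
    if any(p not in V[world] for p in formula_props(formula)):
        return None
    if isinstance(formula, str):
        return V[world][formula]
    _op, left, right = formula
    return true_at(left, world) or true_at(right, world)

def knows(R, w, formula):
    """K_a phi at w: phi is true (not merely not-false) at every
    epistemically accessible world."""
    worlds = epistemic(R, w)
    return bool(worlds) and all(true_at(formula, v) is True for v in worlds)

print(knows(Ra, "w", "p"))                # True: p defined and true everywhere
print(knows(Ra, "w", ("or", "p", "q")))   # False: the agent is unaware of q
```

The second call shows the disjunction example from the text: even though 𝑝 alone would make 𝑝 ∨ 𝑞 true classically, the formula is undefined wherever 𝑞 is, so it cannot be known.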

3.3 Raising awareness

Traditionally, dynamic upgrades for DEL reduce or re-organize the possible worlds and, with this, increase the knowledge and beliefs of agents. With a formal notion of awareness, we can additionally extend (or decrease) the valuation function to raise (or forget) agent awareness, either locally or globally. This allows agents to naturally extend their vocabularies, and hence knowledge and beliefs, with newly gained information.

Formally, to raise awareness of 𝑝, all the worlds (globally), or all accessible worlds for an agent (locally), in which 𝑝 was initially not defined are duplicated, accessibility to and from duplicated worlds being preserved, and 𝑝 is made true in one copy and false in the other. This means that unaware agents (𝑝 is not defined in their accessible worlds) are transformed into uncertain agents (considering 𝑝 true or 𝑝 false) after raising awareness.
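The global version of the raising-awareness upgrade can be sketched as follows, again under our dict-based encoding (the world-naming scheme `w+`/`w-` for the two copies is an assumption of ours):

```python
def raise_awareness(V, R, p):
    """Global upgrade: split every world where p is undefined into a
    p-true copy and a p-false copy, preserving accessibility to and
    from the copies."""
    newV, copies = {}, {}
    for w, val in V.items():
        if p in val:                      # p already defined: keep as-is
            newV[w] = dict(val)
            copies[w] = [w]
        else:                             # duplicate the world
            newV[w + "+"] = {**val, p: True}
            newV[w + "-"] = {**val, p: False}
            copies[w] = [w + "+", w + "-"]
    # Every original arrow u -> v is replicated between all copies.
    newR = {agent: {(u2, v2) for (u, v) in rel
                    for u2 in copies[u] for v2 in copies[v]}
            for agent, rel in R.items()}
    return newV, newR

# An agent unaware of p: p is undefined at her only (reflexive) world.
V = {"w": {}}
R = {"a": {("w", "w")}}
V2, R2 = raise_awareness(V, R, "p")
print(V2["w+"], V2["w-"])        # {'p': True} {'p': False}
print(("w+", "w-") in R2["a"])   # True: she now considers both, i.e. uncertainty
```

After the upgrade the agent reaches both copies from either one, which is exactly the unaware-to-uncertain transformation described above.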

3.4 Forgetting

A dual, inverse operator for forgetting awareness can similarly be defined. Naturally, to forget awareness of a proposition 𝑝, all valuations of 𝑝 are deleted from the model (globally), or from all accessible worlds of an agent (locally), while preserving accessibility relations. After awareness of 𝑝 is raised and subsequently forgotten, this way of forgetting brings us back to the original model M, up to bisimilarity. However, after a more complex upgrade sequence in which awareness of 𝑝 is raised, 𝑝 is used as evidence for another proposition 𝑞 (e.g. through the announcements !𝑝 and !(𝑝 → 𝑞)), and 𝑝 is then forgotten, we have a choice: to arrive back at the original state (and therefore forget the truth value learned of 𝑞), or to keep the conclusions and view forgetting as a generalization operator (abstracting from the evidence 𝑝).
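The global forgetting upgrade is simpler than raising, since nothing needs to be duplicated. A sketch in our encoding (names are ours):

```python
def forget(V, R, p):
    """Global forgetting: delete every valuation of p, keep relations."""
    newV = {w: {q: b for q, b in val.items() if q != p}
            for w, val in V.items()}
    return newV, R

# The model produced by raising awareness of p from a single empty world.
V = {"w+": {"p": True}, "w-": {"p": False}}
R = {"a": {("w+", "w+"), ("w+", "w-"), ("w-", "w+"), ("w-", "w-")}}
V2, R2 = forget(V, R, "p")
print(V2)   # {'w+': {}, 'w-': {}}
```

After forgetting, the two copies carry identical (empty) valuations and symmetric arrows, so they are bisimilar to the single original world where 𝑝 was undefined: this is the "back to the original model, up to bisimilarity" observation in the text.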

4 DISCUSSION AND CONCLUSION

We have provided a first view on a new semantics for modeling agent unawareness using partial valuations. This semantics allows communicating agents to be unaware of the signatures of other agents and to raise their awareness when new information is acquired.

Besides its theoretical interest, this can be used to show that public signature awareness is reached in the limit of the raising awareness upgrade. The intuition behind this is that as long as agents share all the propositions in their signature, the other agents will raise their awareness accordingly.

Future research is required to formally explore the necessary conditions for successful communication without public signature awareness and to explore the practical implications of this semantics.

ACKNOWLEDGMENTS

The authors thank the anonymous reviewers for their valuable comments and helpful suggestions. This work has been partially supported by MIAI @ Grenoble Alpes (ANR-19-P3IA-0003).


REFERENCES

[1] Ronald Fagin and Joseph Y. Halpern. 1987. Belief, awareness, and limited reasoning. Artificial Intelligence 34, 1 (1987), 39–76.
[2] Joseph Y. Halpern. 2001. Alternative semantics for unawareness. Games and Economic Behavior 37, 2 (2001), 321–339.
[3] Jens Ulrik Hansen. 2014. Modeling truly dynamic epistemic scenarios in a partial version of DEL. The Logica Yearbook 2013 (2014), 63–75.
[4] Jan Jaspars and Elias Thijsse. 1996. Fundamentals of partial modal logic. Studies in Logic, Language and Information (1996).
[5] Elias Thijsse. 1994. Partial logic and knowledge representation. (1994).
[6] Johan van Benthem and Fernando R. Velázquez-Quesada. 2010. The dynamics of awareness. Synthese 177, 1 (2010), 5–27.
[7] Wiebe van der Hoek, Jan Jaspars, and Elias Thijsse. 1996. Honesty in partial logic. Studia Logica 56, 3 (1996), 323–360.
[8] Hans van Ditmarsch and Tim French. 2009. Awareness and forgetting of facts and agents. In 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Vol. 3. IEEE, 478–483.
[9] Hans van Ditmarsch and Tim French. 2011. Becoming aware of propositional variables. In Indian Conference on Logic and Its Applications. Springer, 204–218.
[10] Hans van Ditmarsch, Andreas Herzig, Jérôme Lang, and Pierre Marquis. 2009.
