Whatever their domain of interest (be it certainty, risk, or uncertainty), decision theorists are primarily concerned with analyzing decision models. A decision model can be thought of as an algorithm for evaluating options. In the case of risk, examples include computing the expected value of some utility function on the set of possible results, calculating this expectation with respect to some transformation of the known probabilities, or proposing some way of combining both the expectation and the variance of the utility values. More often than not, decision theorists introduce or even discover models directly from this numerical perspective. But their specific task is to characterize each numerical form of evaluation by a few basic properties, namely, those displayed by the preferences of a decision-maker to whom the examined model would apply. This requires, where possible, proving a representation theorem showing how the numerical evaluation reflects structural aspects of the underlying preferences. To this extent, decision theory is essentially a development of representational measurement theory. More generally, it is an application of the axiomatic method.
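A minimal sketch (Python) of the three numerical evaluation rules just listed; the function names and the toy fifty-fifty gamble are illustrative assumptions, not part of the source.

```python
# Three ways of evaluating a risky option, given the utilities of its
# possible results and their known probabilities.

def expected_utility(utils, probs):
    """Expected value of the utility function."""
    return sum(u * p for u, p in zip(utils, probs))

def transformed_expectation(utils, probs, w):
    """Expectation taken with respect to a transformation w of the
    known probabilities."""
    return sum(u * w(p) for u, p in zip(utils, probs))

def mean_variance(utils, probs, risk_aversion=0.5):
    """Combination of the expectation and the variance of the
    utility values."""
    mean = expected_utility(utils, probs)
    var = sum(p * (u - mean) ** 2 for u, p in zip(utils, probs))
    return mean - risk_aversion * var
```

For a fifty-fifty gamble between utilities 0 and 100, the first rule gives 50, the second depends on the chosen transformation w, and the third subtracts a penalty proportional to the variance; a representation theorem would then characterize, for each rule, the preference relations it can represent.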

Abstract—A control logic plays a central role in many echo cancellation systems, optimizing the performance of the adaptive filters that estimate the echo path. For reliable control, accurate double-talk and channel-change detectors are usually incorporated into the echo canceler. This work expands the usual detection strategy into a classification problem characterizing four possible states of echo canceler operation. The new formulation allows the use of decision theory to continuously control the transitions among the different modes of operation. The classification rule reduces to a low-cost statistic, for which it is possible to determine the probability of error under all hypotheses, allowing the classification performance to be assessed analytically. Monte Carlo simulations using synthetic and real data illustrate the reliability of the proposed method.
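The abstract does not give the classification rule itself; as a hedged illustration of the general idea, the sketch below (Python) selects one of four hypothetical canceler states by maximum a posteriori comparison on a scalar decision statistic. The state labels, the per-state Gaussian model, and all parameters are invented assumptions, not the paper's method.

```python
import math

# Hypothetical operating states of the echo canceler (assumed labels).
STATES = ["silence", "far-end only", "double-talk", "channel change"]

def classify(statistic, means, variances, priors):
    """Return the index of the state maximizing the log posterior of the
    observed decision statistic, assuming one Gaussian per state."""
    def log_posterior(i):
        m, v, p = means[i], variances[i], priors[i]
        return (math.log(p)
                - 0.5 * math.log(2 * math.pi * v)
                - (statistic - m) ** 2 / (2 * v))
    return max(range(len(STATES)), key=log_posterior)
```

With a Gaussian model the error probabilities under each hypothesis reduce to tail integrals of the statistic's distribution, which is what makes an analytical performance assessment possible.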

3.1 Background on possibility theory and defaults
The use of possibility theory as a basis for qualitative decision theory was introduced by Dubois and Prade (see [7]). The idea is to define the expected pay-off u(x) of a situation x. In this theory, it is supposed that there exists a linear ordering over the situations, which gives a preference relation over situations such that x ≽ y iff u(x) ≥ u(y). When situations are not precisely known, the belief state about the actual situation is represented by a possibility distribution π. The theory of possibility was introduced by Zadeh [16] and further developed by Dubois and Prade in [6]. It is well adapted to representing partial ignorance, and it is qualitative in the sense that a possibility distribution π defined on a set of situations X takes its values on a valuation scale V on which max, min, and order-reversing operations are defined. However, it is usual to use numbers to represent this scale without losing the qualitative aspect (since the exact values of the numbers are not meaningful; only their order in the scale is taken into account). The usual convention is to set sup V = 1 and inf V = 0. Writing π(x) ≤ π(x′) means that it is at least
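Concretely, the standard pessimistic and optimistic qualitative utilities built from π and u on the scale V = [0, 1] can be sketched as follows (Python). The two example situations and their numbers are illustrative assumptions; order reversal on the scale is taken as v → 1 − v.

```python
def pessimistic_utility(pi, u):
    """min over situations x of max(1 - pi(x), u(x)): a situation
    lowers the value only if it is both plausible and bad."""
    return min(max(1 - pi[x], u[x]) for x in pi)

def optimistic_utility(pi, u):
    """max over situations x of min(pi(x), u(x)): a situation
    raises the value only if it is both plausible and good."""
    return max(min(pi[x], u[x]) for x in pi)
```

With pi = {"x1": 1.0, "x2": 0.4} and u = {"x1": 0.8, "x2": 0.2}, the pessimistic value 0.6 is pulled down by the somewhat plausible bad situation x2, while the optimistic value 0.8 is driven by x1; as the text stresses, only the ordering of these numbers is meaningful.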

towards the concept of decision support. I will try to show that these directions do not diverge, but rather that they have several common points and potential areas of convergence.
As in other empirical sciences, operational research and decision theory entered their first official “crisis” for a practical reason. Towards the end of the 1960s, the British OR society wanted to create a kind of “chartered directory of OR professionals”. The reason was simple: to give practitioners of the domain a quality label allowing the discipline and its practice to be better promoted. Not surprisingly, ORSA (in the USA) published, almost at the same time, its suggestions about “the guidelines of OR practice” (see [229]). The initiative was followed by several questions: what are the boundaries of the “discipline”, and how should they be fixed? Using the existing methods? Who decides whether a decision support method belongs to the discipline? Given a new method, how will it be legitimated to enter these boundaries? The difficulty of finding convincing answers to these questions revealed the differences between diverse decision theories and their critics. For the record, this debate reached a conclusion(?) only very recently (the British society finally modified its statutes in order to create the above-mentioned directory in 2001!).

On some ordinal models for decision making under uncertainty
1 Introduction
The specific needs of Artificial Intelligence techniques have led many computer scientists to propose models for decision under uncertainty that are at variance with the classical models used in Decision Theory, i.e., the Subjective Expected Utility (SEU) model and its many variants (see Fishburn, 1988; Wakker, 1989, for overviews). This gives rise to what is often called “qualitative decision theory” (see Boutilier, 1994; Brafman and Tennenholtz, 1997, 2000; Doyle and Thomason, 1999; Dubois et al., 1997, 2001; Lehmann, 1996; Tan and Pearl, 1994, for overviews). These models aim at obtaining simple decision rules that can be implemented by efficient algorithms while relying on inputs that are less rich than what is required in traditional models. This can be achieved, e.g., by comparing acts only on the basis of their consequences in the most plausible states (Boutilier, 1994; Tan and Pearl, 1994) or by refining the classical criteria (Luce and Raiffa, 1957; Milnor, 1954) for decision making under complete ignorance (see Brafman and Tennenholtz, 2000; Dubois et al., 2001).
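The first simplification mentioned above, comparing acts only on their consequences in the most plausible states, can be sketched as follows (Python); the encoding of states and acts as dictionaries, and the numbers in the example, are illustrative assumptions.

```python
def at_least_as_good(act_a, act_b, plausibility):
    """True if act_a does at least as well as act_b in every maximally
    plausible state; consequences in all other states are ignored."""
    top = max(plausibility.values())
    most_plausible = [s for s, p in plausibility.items() if p == top]
    return all(act_a[s] >= act_b[s] for s in most_plausible)
```

For instance, with plausibility {"s1": 1.0, "s2": 0.3}, acts are compared on s1 alone, however large the stakes in s2; this deliberate loss of information is what makes such rules cheap to apply and less demanding on inputs than SEU.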

The classic approach in decision theory is straightforward. To each diagnosis (the states of nature) is associated a probability, and to each treatment (the potential actions) the respective outcomes. Using any of the standard protocols for constructing the client's value function on the set of outcomes, we are able to define a utility function (including uncertainty) which, when maximised, identifies the solution that should be adopted (since by definition it is the one which maximises the client's expected utility). The existence of such a function is guaranteed thanks to a certain number of axioms which represent what, following the theory, should be the principles of a rational decision maker's behaviour. Preferences are supposed to be transitive (and complete), since the presence of cycles would imply that the decision maker is ready to infinitely increase what he is ready to pay for any of the solutions, and this is of course against any idea of rationality. By the same axioms, probabilities are independent. It should be noted that there has been no observation of the client's behaviour, nor has the question been posed of what other decision makers do in similar situations. It is the decision maker who has to adapt himself and his behaviour to the axioms. Otherwise he is not rational, and the information and his preferences ought to be modified. I will call such an approach normative.
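A minimal sketch (Python) of this classic recipe; the diagnoses, treatments, probabilities, and utilities below are invented purely for illustration.

```python
def best_treatment(prob, utility):
    """prob: probability of each diagnosis (state of nature);
    utility[t][d]: utility of treatment t's outcome under diagnosis d.
    Returns the treatment maximizing expected utility."""
    def expected(t):
        return sum(prob[d] * utility[t][d] for d in prob)
    return max(utility, key=expected)
```

With prob = {"flu": 0.7, "cold": 0.3} and suitable utilities, the treatment with the larger expectation is selected; everything normative is hidden in how the probabilities and the utility function were elicited, which is exactly the point of the passage above.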

Finally, we have proposed possibilistic influence diagrams to deal with huge decision problems where decision trees cannot be generated. More precisely, we have identified several types of possibilistic influence diagrams depending on the quantification of chance and value nodes. To evaluate possibilistic influence diagrams, we have proposed two indirect methods based on their transformation into a secondary structure: the first transforms possibilistic influence diagrams into possibilistic decision trees, and the second transforms them into possibilistic networks. It is important to note that in the case of possibilistic Choquet integrals, possibilistic influence diagrams cannot be transformed into possibilistic networks, since propagation algorithms are a form of dynamic programming. This means that for this particular case it is more appropriate to transform the influence diagram into a decision tree and to evaluate it via the Branch and Bound algorithm.

To address this problem, several countries have developed or initiated the development of bridge management systems (BMS) to optimize the inspection and maintenance of deteriorated structures. Different approaches to maintenance optimization have been implemented in these systems, ranging from simplified economic models to sophisticated Markovian decision processes. The development of a practical and effective bridge maintenance management system depends primarily on the existence of reliable performance prediction models and effective optimization algorithms. Given the time-dependence and uncertainty of bridge performance, stochastic modeling is required. Furthermore, bridge maintenance management aims at improving the overall performance of a bridge or a network through the satisfaction of several, possibly conflicting, objectives, which may include the minimization of maintenance costs, maximization of network condition, minimization of the risk of failure, minimization of bridge closures, etc. Multi-criteria optimization techniques provide a practical tool for the optimal prioritization of bridges for maintenance.
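As a hedged illustration of the Markovian ingredient, the sketch below (Python) propagates a condition-state distribution one inspection period forward; the three condition states and the transition matrix in the example are invented, not taken from any BMS.

```python
def propagate(distribution, transition):
    """One-step Markov deterioration model: distribution[i] is the
    probability of condition state i; transition[i][j] is the probability
    of moving from state i to state j over one inspection period."""
    n = len(distribution)
    return [sum(distribution[i] * transition[i][j] for i in range(n))
            for j in range(n)]
```

Starting from a bridge surely in the best state, repeated propagation shifts probability mass toward the worst (absorbing) state; a maintenance optimizer then weighs actions that reset part of this distribution against their cost and the other objectives listed above.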

(ii) rank the alternatives into various classes (exclusive or not) using an antisymmetric fuzzy preference graph.
Keywords: Multiple criteria decision making, Mean aggregation procedure, Valued preference relations, Choice functions.

So, my query to determine the utility of the attorney concerning the decision under investigation was usually this: "Consider all the possible courses of action [r]

among subjects.
Other studies have looked at non-student populations. Guiso and Jappelli (2008) conducted a survey of an Italian bank's clients. They find a positive correlation between answers to questions about risk and imprecision attitudes, and relate this to modes of decision making (intuitive vs. reasoned). Cabantous (2007) surveyed insurance professionals and found that imprecision aversion was pervasive in this population. She also finds that sources of ambiguity (conflict of expert opinion or imprecision) matter. Burks, Carpenter, Gotte, and Rustichini (2008) use data collected among truck drivers and show that there is a positive and strong correlation between risk and ambiguity aversion. They show that a common factor, cognitive ability, explains many features of these subjects. Potamites and Zhang (2007) present a field experiment on ambiguity aversion among investors in China. Their data show substantial heterogeneity in ambiguity attitudes in this population, ranging from high ambiguity aversion to ambiguity seeking. Akay, Martinsson, Medhin, and Trautmann (2009) ran an experiment on two populations: the usual Western student population and Ethiopian peasants. They find similar ambiguity aversion in the two populations (while the Ethiopian peasants are much more risk averse). Poor health increases both ambiguity and risk aversion. Keller, Sarin, and Sounderpandian (2007) examine willingness to pay for gambles involving risk and ambiguity by individuals and dyads (marriage partners, business partners), who exhibit more complex attitudes toward risk and ambiguity. They find that dyads display both risk aversion and ambiguity aversion.

In this case, a public buyer would select a less efficient firm using the adapted procedure because he would spend less in the selection process. Put differently, the loss in productivi[r]

Operations research is a discipline with a wide range of concepts and logics, from mathematical modelling and programming to efficiency and productivity measurement, intended to aid us in complex and parametric decision problems. The multiple criteria decision making family (Thery and Zarate, 2009; Yazdani et al. 2017), a major category of operations research, has been discussed in order to facilitate evaluation and selection problems. Adopted algorithms and integrated formulas, along with mathematical and logical approaches, lead to the development of decision making methods. Multiple criteria decision making (MCDM) forms a perspective in decision theory which facilitates business processes in practice (Ghorabaee et al. 2017; Mardani et al. 2016). Zavadskas and Turskis (2011) believe that developing economies, changing environments, and the sustainability of decisions are the reasons to develop new operations research techniques and, specifically, decision making approaches. A decision support system is defined as a database, algorithm, and user interface within a computer or operating system which can handle the whole decision making process in a visualized form. It can enhance the quality and reliability of the decision system.
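As a minimal, hedged illustration of MCDM-style aggregation (a generic weighted sum, not a method from any of the cited papers), the sketch below (Python) ranks alternatives by weighted criterion scores; the alternatives, criteria, and weights are invented.

```python
def weighted_sum_choice(scores, weights):
    """scores: {alternative: {criterion: value in [0, 1]}};
    weights: {criterion: importance}. Returns the alternative with the
    highest aggregated score."""
    def aggregate(alt):
        return sum(weights[c] * scores[alt][c] for c in weights)
    return max(scores, key=aggregate)
```

Real MCDM methods differ mainly in how they normalize the scores, elicit the weights, and aggregate them; the weighted sum is the simplest member of that family and a common baseline in the comparisons the cited surveys discuss.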

Autonomous decision-making. In recent years, argumentation has been promoted as a primary technique to support autonomous decision making for agents acting in a multiagent environment. Kakas et al. [66] present an argumentation-based framework to support the decision making of an agent within a modular architecture for agents. The proposed framework is dynamic, as it allows the agent to adapt its decisions in a changing environment. In addition, abduction was integrated within this framework in order to enable the agent to operate within an environment where the available information may be incomplete. Parsons and Jennings [93] summarise their work on mixed-initiative decision making, which extends both classical decision theory and a symbolic theory of decision making based on argumentation to a multi-agent domain. One focus of this work is the development of multi-agent systems which deal with real-world problems, an example being the diagnosis of faults in electricity distribution networks. Sillince [126] has investigated conflict resolution within a computational framework for argumentation. The author analysed how agents attempt to make claims using tactical rules (such as fairness and commitment). The system does not require truth propagation or consistency maintenance. Indeed, agents may support inconsistent beliefs until another agent is able to attack their beliefs with a strong argument. Parsons et al. [94] try to link agents and argumentation using multi-context systems [60]. In this approach agents are able to deal with conflicting information, making it possible for two or more agents to engage in dialogue to resolve conflicts between them. Sycara [129, 130] developed PERSUADER, a framework for intelligent computer-supported conflict resolution through negotiation and mediation. She advocates persuasive argumentation as a mechanism for group problem solving among agents who are not fully cooperative.
Construction of arguments is performed by integrating case-based reasoning, graph search, and approximate estimation of agents' utilities. The paper of Sierra et al. [125] describes a general framework for negotiation in which agents exchange proposals backed by arguments which summarise the reasons why the proposals should be accepted. The framework is inspired by the authors' work in the domain of business process management and is explained using examples from that domain.

The additional information will be the input of an inference model permitting the retrieval of the preferential parameters of the sorting model (here ELECTRE TRI). This reduces the cognitive effort of the decision maker. In this paper, we have adapted the inference model developed by Mousseau and Slowinski (1998) (see also Mousseau (2005)). Their approach starts with some simple assignment examples provided by the decision maker, and then an optimisation algorithm is applied to infer the required parameters from these assignment examples. This approach permits the construction of the preference model that best restitutes the aspirations of the decision maker and appreciably reduces his/her cognitive effort. Finally, it is important to note that the additional information should not modify the consistency of the preference parameters. Often, several iterations are required to obtain pertinent additional information.

Participants can choose to view past concept maps; they can import text labels of concepts from these maps into the chat conversations and link the text messages in the chat[r]

In the case of two unknown payoffs, it is assumed that once one of them is received, both players play in an optimal fashion; therefore, the remaining mean loss[r]