Preference modelling, aggregation and exploitation constitute the three main steps in elaborating an evaluation model within a decision aiding process [Bouyssou et al., 2006]. In the preference modelling step we are interested in finding a suitable way to translate “preference statements” (of the type “I prefer x to y”), expressed by a decision maker, into formal statements that make it possible to establish an evaluation model for decision support purposes. We may then need to aggregate such preference models when they represent several criteria or opinions. The result is then used in the exploitation step, where we try to establish a final recommendation for a choice or a ranking problem.
4 Ceteris Paribus as structural preference
In order to provide a final recommendation to a decision maker, we have to solve a preference aggregation problem. By this term we refer to the problem of establishing an overall preference relation (an order on the set of outcomes) taking into account all the criteria the decision maker considers relevant to his problem. Unfortunately there is no universal way to solve this problem (see  and ). Basically, what we know is that, under weak conditions on the type of preferences to aggregate and on the properties to be satisfied by the final result, the resulting preference relation need not be an order (neither completeness nor acyclicity can be guaranteed: see ). If the aim is to obtain a reasonable recommendation quickly, we have to simplify both the possible types of preference statements that can be modelled and aggregated, and the aggregation procedure itself. For this purpose, in this paper we have chosen the CP-nets formalism, which guarantees efficient computation of a final result, although it is less expressive than other frameworks.
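The failure of acyclicity mentioned above can be seen in a three-line example: pairwise majority over three rankings already yields a cycle (the Condorcet paradox). A minimal sketch, with hypothetical outcomes a, b, c:

```python
# Three voters' rankings over outcomes a, b, c (hypothetical example).
rankings = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]

def majority_prefers(x, y):
    """True if a strict majority of rankings place x before y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

# Pairwise majority produces a > b, b > c, and c > a: a cycle,
# so the aggregate relation is not an order.
cycle = (majority_prefers("a", "b"),
         majority_prefers("b", "c"),
         majority_prefers("c", "a"))
```

Each individual ranking is a linear order, yet the majority relation is cyclic; this is the kind of outcome the simplifications discussed above are meant to avoid.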
domains using GAI-nets
Christophe Gonzales ∗ , Patrice Perny ∗ , Sergio Queiroz ∗
This paper deals with preference representation and aggregation in combinatorial domains. We assume that the set of alternatives is defined as the cartesian product of finite domains and that agents’ preferences are represented by generalized additive decomposable (GAI) utility functions. GAI functions allow an efficient representation of interactions between attributes while preserving some decomposability of the model. We address the preference aggregation problem and consider several criteria to define the notion of compromise solution (maxmin, minmax regret, weighted Tchebycheff distance). For each of them, we propose a fast procedure for the exact determination of the optimal compromise solution in the product set. This procedure relies on a ranking algorithm enumerating solutions according to the sum of the agents’ individual utilities until a boundary condition is reached. We provide results of numerical experiments to highlight the practical efficiency of our procedure.
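The three compromise criteria named in the abstract can be illustrated on a toy product set. The sketch below brute-forces a tiny instance (two agents, two binary attributes, made-up additive utilities) rather than using the paper's ranking procedure:

```python
from itertools import product

# Toy product set: two binary attributes.
domains = [("a0", "a1"), ("b0", "b1")]
outcomes = list(product(*domains))

# Hypothetical decomposed utilities: each agent sums two local factors.
u = {
    0: lambda o: {"a0": 3, "a1": 1}[o[0]] + {"b0": 0, "b1": 4}[o[1]],
    1: lambda o: {"a0": 1, "a1": 4}[o[0]] + {"b0": 3, "b1": 0}[o[1]],
}
agents = list(u)
ideal = {i: max(u[i](o) for o in outcomes) for i in agents}

def maxmin(o):            # egalitarian value: the worst-off agent's utility
    return min(u[i](o) for i in agents)

def max_regret(o):        # worst regret relative to each agent's ideal
    return max(ideal[i] - u[i](o) for i in agents)

def tchebycheff(o, w=(0.5, 0.5)):  # weighted Chebyshev distance to the ideal
    return max(w[i] * (ideal[i] - u[i](o)) for i in agents)

best_maxmin = max(outcomes, key=maxmin)
best_regret = min(outcomes, key=max_regret)
best_tcheb = min(outcomes, key=tchebycheff)
```

On this instance all three criteria select the balanced outcome ("a1", "b1") over the outcomes that strongly favour one agent; on larger instances the criteria can of course disagree.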
600 mL of a 2 M sodium chloride solution was first poured into the tank and put under stirring. When the desired operating temperature was reached, 900 mL of a suspension of colloidal silica at a chosen concentration was added as instantaneously as possible to obtain the desired solid concentration. Under these conditions, the final salt concentration was 0.8 M. The suspension was continuously recycled from the stirred batch reactor through the flow cell of the acoustic spectrometer using a peristaltic pump (Masterflex), allowing on-line size analysis. However, the effect of the pump on the aggregates needed to be considered. The main criterion was to select the pump and to adjust the flow rate so that shear on the aggregates was minimal. As a result, a peristaltic pump was chosen in preference to other types of pumps. Moreover, the pump was placed downstream of the Ultrasizer, allowing the aggregates to be sampled before passing through the pump.
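The reported final salt concentration follows from simple dilution, c_final = c0 · V_salt / (V_salt + V_silica); a quick check:

```python
# Dilution check: 600 mL of 2 M NaCl into a 600 + 900 = 1500 mL total volume.
v_salt_mL, c_salt_M = 600.0, 2.0
v_silica_mL = 900.0
final_M = v_salt_mL * c_salt_M / (v_salt_mL + v_silica_mL)  # 0.8 M
```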
The present analysis of agenda sensitivity fills a gap in the literature on judgment aggregation, in which agenda sensitivity/manipulation is often mentioned informally and was treated in a semi-formal way by Dietrich (2006). 3 Other types of manipulation have however been much studied. One type is the manipulation of the aggregation rule, more precisely of the order of priority in which a sequential aggregation rule considers the propositions in the agenda (List 2004, Dietrich and List 2007c, Nehring, Pivato and Puppe 2014). Another type of manipulation is strategic voting, in which voters do not truthfully report their judgments. Strategic voting has been studied using two different approaches. One approach focuses on opportunities to manipulate, setting aside the behavioural question of whether voters take these opportunities or vote truthfully (e.g., Dietrich and List 2007b, Dokow and Falik 2012). The other approach focuses on incentives to manipulate, i.e., on actual voting behaviour (e.g., Dietrich and List 2007b, Dokow and Falik 2012, Ahn and Oliveros 2014, Bozbay, Dietrich and Peters 2014, de Clippel and Eliaz 2015; see also Nehring and Puppe 2002). The first approach requires only a basic, preference-free judgment-aggregation setup, whereas the second approach requires modelling voters’ preferences (and their private information, if any). The present paper studies whether an agenda setter has opportunities to manipulate via the choice of agenda. I leave open whether he is himself a voter or an external person, and whether he takes such opportunities or refrains from manipulation. The latter question depends on his preferences, which are not modelled here. Although manipulation behaviour is not addressed explicitly, it is quite clear that manipulation opportunities will lead to manipulation behaviour under many plausible preferential assumptions. 4
replaced by its estimate computed from the sample, and direct methods which are based on empirical excess-mass maximization (see Hartigan, 1987; Müller and Sawitzki, 1987).
While local versions of direct methods have been deeply analyzed and proved to be optimal in a minimax sense over a certain family of well-behaved distributions (see Tsybakov, 1997), and although reasonable implementations have recently been proposed (see for instance Steinwart et al., 2005), they are still not very easy to use for practical purposes compared to plug-in methods. Indeed, it is common to estimate density level sets for different level values, typically when the goal is to compute a density level set of pre-specified probability mass (or acceptance rate) and unknown density level. In that case, using direct methods, one has to run an optimization procedure several times, once for each density level value, and then choose a posteriori the most suitable level according to the desired rejection rate. Plug-in methods do not involve such a complex process: the density estimation step is performed only once, and the construction of a density level set estimate simply amounts to thresholding the density estimate at the desired level.
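The plug-in construction described here can be sketched in a few lines: estimate the density once, then threshold it at the empirical quantile matching the desired mass (illustrative sample and bandwidth, not the estimators of the works cited above):

```python
import math
import random

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]

def kde(x, data, h=0.3):
    """Plain Gaussian kernel density estimate (illustrative bandwidth)."""
    norm = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / norm

# The density is estimated once; the level-set estimate for any desired
# mass 1 - alpha is then just a thresholding of that same estimate.
alpha = 0.05
dens = [kde(xi, sample) for xi in sample]
t_alpha = sorted(dens)[int(alpha * len(sample))]   # empirical alpha-quantile
in_level_set = [d >= t_alpha for d in dens]
frac = sum(in_level_set) / len(sample)             # close to 1 - alpha
```

Changing the target mass only changes `t_alpha`; no re-estimation is needed, which is exactly the advantage over direct methods noted in the text.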
This study concerns the effect of particle aggregation on the laser heating rate of soot aggregates in laser-induced incandescence. Three aggregate absorption models were investigated: the Rayleigh-Debye-Gans approximation, the electrostatics approximation, and the numerically exact generalized multi-sphere Mie-solution method. Fractal aggregates containing from 5 to 893 primary particles of 30 nm in diameter were generated numerically using a combined particle-cluster and cluster-cluster aggregation algorithm with specific fractal parameters typical of soot. The primary particle size parameters considered are 0.089, 0.177, and 0.354. The Rayleigh-Debye-Gans approximation underestimates the aggregate absorption area by approximately 10%, depending on the aggregate size and primary particle size parameter. The electrostatics approximation is somewhat better than the Rayleigh-Debye-Gans approximation, but cannot account for the effect of primary particle size parameter. The aggregate absorption submodel affects the calculated soot temperature in laser-induced incandescence mainly in the low laser fluence regime. At high laser fluences, the effect diminishes due to the enhanced importance of soot sublimation cooling and the neglect of aggregation effects in the sublimation model.
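The three quoted size parameters are consistent with x = πd_p/λ for d_p = 30 nm at the Nd:YAG wavelengths 1064, 532 and 266 nm (the wavelengths are our assumption, not stated in the excerpt):

```python
import math

d_p = 30.0                            # primary particle diameter, nm
wavelengths = (1064.0, 532.0, 266.0)  # assumed Nd:YAG harmonics, nm
size_params = [math.pi * d_p / lam for lam in wavelengths]
# rounded to three decimals: 0.089, 0.177, 0.354
```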
Certain limitations associated with the DCE methodology should be considered. This powerful multidimensional tool is used to analyze simultaneously the influence of multiple attributes, the ORs providing information about the relative importance of each attribute. However, the analysis performed does not allow comparison of the ORs between the different attributes. It is also difficult to compare the importance of attributes expressed in different units: for example, the viral load is expressed as copies/mL, whereas other attributes are presented as probabilities or categorical variables. Finally, although the chosen attributes and their levels had been selected from a literature review and from discussion with clinicians and expert patients, it is possible that other characteristics of treatments influencing patient preferences were not evaluated. One of the main limitations of the qualitative part of the study is that the trends and themes developed are only representative of the patients in the study sample and may not be generalized to represent the views of PLWH across the whole country.
In contrast, Suzumura consistent dominance revelation coherence is not sufficient for dominance rationalizability by a reflexive, complete and Suzumura consistent relation. This is an immediate consequence of the observation that Suzumura consistency and transitivity coincide in the presence of reflexivity and completeness; see Suzumura (1976b). The reason why we focus on Suzumura consistency as the weakening of transitivity to be considered is that properties such as quasi-transitivity or acyclicity cannot be treated in an analogous fashion. This is the case because there is no such thing as a quasi-transitive or an acyclical closure: if a relation fails to be quasi-transitive or acyclical, there is no unique way of defining a superset of this relation that possesses the requisite property. For instance, if x is strictly preferred to y, y is strictly preferred to z and z is strictly preferred to x, the resulting relation clearly is not acyclical (and, of course, not quasi-transitive). In order to obtain a superset of this relation that is acyclical, one of the pairs (y, x), (z, y) or (x, z) has to be added to the original relation, but any one of the three possibilities will do. Analogously, to obtain a quasi-transitive superset of the relation, two of the three pairs need to be added but, again, any two will do the job. Thus, there is no well-defined closure operation for these properties and, as a consequence, a condition that demands such a closure to be respected cannot be formulated. This observation also applies to dominance rationalizability by itself: because there does not exist a complete closure of a relation, our condition does not work if we want to obtain dominance rationalizability by a reflexive and complete relation. See Bossert and Suzumura (2010) for a detailed discussion of these issues in the traditional rational choice framework without uncertainty.
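The non-uniqueness argument can be checked mechanically: the strict 3-cycle loses its cycle under any one of the three reverse-pair additions, so no unique minimal acyclical superset exists. A small sketch:

```python
def strict_part(R):
    """Strict preference pairs: (a, b) in R with (b, a) not in R."""
    return {(a, b) for (a, b) in R if (b, a) not in R}

def has_cycle(P):
    """DFS cycle check on a tiny relation given as a set of pairs."""
    succ = {}
    for a, b in P:
        succ.setdefault(a, []).append(b)
        succ.setdefault(b, [])
    def dfs(n, path):
        if n in path:
            return True
        return any(dfs(m, path | {n}) for m in succ[n])
    return any(dfs(n, set()) for n in succ)

base = {("x", "y"), ("y", "z"), ("z", "x")}   # x > y > z > x: a strict 3-cycle
cyclic = has_cycle(strict_part(base))         # True

# Adding any one of the three reverse pairs breaks the strict cycle,
# so there is no unique minimal acyclical superset: no "acyclical closure".
fixes = {extra: has_cycle(strict_part(base | {extra}))
         for extra in (("y", "x"), ("z", "y"), ("x", "z"))}
```

All three candidate repairs succeed equally well, which is precisely why no closure operation can be defined for acyclicity.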
other players’ strategies are then expressed in a conditional preference table. The CP-net expression of the game can sometimes be more compact than its explicit normal form representation, provided that some players’ preferences depend only on the actions of a subset of other players. A first important difference with our framework is that we allow players to control an arbitrary set of variables, and thus we do not view players as variables; the only way of expressing in a CP-net that a player controls several variables would consist in introducing a new variable whose domain would be the set of all combinations of values for these variables, and the size of the CP-net would then be exponential in the number of variables. A second important difference, which holds as well for the comparison with  and , is that players can express arbitrary binary preferences, including extreme cases where the satisfaction of a player’s goal may depend only on variables controlled by other players. A last (less technical and more foundational) difference with both lines of work, which actually explains the first two above, is that we do not map normal form games into anything but we express games using a logical language.
Abstract. CP-nets (conditional preference networks) are a well-known compact graphical representation of preferences in Artificial Intelligence, which can be viewed as a qualitative counterpart to Bayesian nets. In the case of binary attributes, they capture specific partial orderings over Boolean interpretations where strict preference statements are defined between interpretations that differ by a single flip of an attribute value. This respects the preferential independence encoded by the ceteris paribus property. The popularity of this approach has motivated comparisons with other preference representation settings such as possibilistic logic. In this paper, we focus our discussion on the possibilistic representation of CP-nets, and on the question whether it is possible to capture the CP-net partial order over interpretations by means of a possibilistic knowledge base and a suitable semantics. We show that several results in the literature on the alleged faithful representation of CP-nets by possibilistic bases are questionable. To this end we discuss some canonical examples of CP-net topologies where the considered possibilistic approach fails to exactly capture the partial order induced by CP-nets, thus shedding light on the difficulties encountered when trying to reconcile the two frameworks.
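The flip-based semantics can be made concrete with a two-variable toy CP-net (a hypothetical example, not one from the paper): one outcome is preferred to another iff a chain of single worsening flips connects them, each flip holding the other variable fixed:

```python
from itertools import product

# Hypothetical binary CP-net: A prefers a=1 unconditionally; B prefers
# b=1 when a=1 and b=0 when a=0 (conditional preference on B given A).
def pref_A(o):
    return 1

def pref_B(o):
    return 1 if o[0] == 1 else 0

outcomes = list(product((0, 1), repeat=2))

def improving_flip(o, var):
    """Flip one variable to its conditionally preferred value, ceteris
    paribus (the other variable held fixed); None if already preferred."""
    a, b = o
    if var == 0 and a != pref_A(o):
        return (pref_A(o), b)
    if var == 1 and b != pref_B(o):
        return (a, pref_B(o))
    return None

# Edges (better, worse): the two outcomes differ by one worsening flip.
edges = {(improving_flip(o, v), o)
         for o in outcomes for v in (0, 1) if improving_flip(o, v)}

def dominates(x, y):
    """x is preferred to y iff a chain of worsening flips leads x -> y."""
    frontier, seen = {x}, set()
    while frontier:
        n = frontier.pop()
        seen.add(n)
        frontier |= {w for (b, w) in edges if b == n} - seen
    return x != y and y in seen
```

On this toy net the induced order is total: (1, 1) over (1, 0) over (0, 0) over (0, 1); in general CP-nets only induce a partial order.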
The predicted soot temperature histories at a higher laser fluence of F0 = 1.5 mJ/mm² are compared in Fig. 3. It is somewhat surprising to observe that at this relatively high laser fluence different treatments of the aggregate absorption have only a small impact on the peak soot temperature and a negligible impact on soot temperatures right after the peak. Examination of the numerical results indicates that the peak soot temperatures (reached around t = 13 ns) predicted by the three absorption submodels for N = 1–500 differ by less than 66 K. At t = 400 ns, soot temperatures of different aggregate sizes display somewhat larger differences. However, for a given aggregate size the soot temperatures predicted by the three absorption submodels differ by less than 7 K at t = 400 ns. These observations can be explained as follows. The small deviations among the peak soot temperatures of different aggregate sizes and different absorption submodels are caused by the enhanced soot sublimation cooling, which is significant around 4250 K. At low laser fluences, the peak soot temperature is mainly governed by the balance between the internal energy variation rate and the laser heating rate, since the heat conduction cooling rate is small compared to laser heating. At high laser fluences, however, the peak soot temperature is also affected by sublimation cooling, which is very high and hence somewhat diminishes the differences in the laser heating rate due to different absorption submodels. Between shortly after the peak temperature and about 50 ns, soot temperatures of different aggregate sizes calculated by different absorption submodels are essentially identical. It is recognized that these results should be viewed with caution since the effect of aggregation on the soot aggregate sublimation process is neglected; see the sublimation term in Eq. (1). At longer times (after 50 ns), soot temperatures
Advantages and drawbacks
The Borda path
You have values. If correctly adopted, you have real values (meaningful measures of differences of preferences). Easy to bias. Easy to misuse, making big mistakes ... Too many axioms to satisfy.
VI. CONCLUSION
In this paper, we introduced the problem of preference fusion with uncertainty degrees and proposed two belief-function-based strategies, one of which applies the conjunctive rule on clusters and scales better with the number of sources. We also proposed a Condorcet paradox avoidance method as well as an efficient DFS-based algorithm adapted to preference structures with nested circles. By comparing the temporal performance of the Condorcet paradox avoidance algorithms on different types of preference structures, we noticed that the incremental algorithm is more efficient on nested structures while the naive algorithm is better on non-nested ones. Limited by our data sources, our experimental work was done on synthetic data. Furthermore, the algorithm for DAG construction can be applied in more general cases than those related to preference orders. In domains concerning directed graphs with valued edges (e.g. telecommunication, social network analysis, etc.), Algorithm 2 may prove useful.
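The DAG-construction step can be sketched in generic form: Tarjan's DFS-based strongly connected components collapse each preference cycle into a single node of the condensation DAG. This is the standard algorithm, not the paper's Algorithm 2:

```python
def tarjan_scc(graph):
    """Tarjan's DFS-based strongly connected components (recursive sketch).

    `graph` maps each node to a list of successors; returns the SCCs as
    frozensets. Collapsing each SCC to one node yields an acyclic DAG.
    """
    index, low, onstack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        onstack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                onstack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in list(graph):
        if v not in index:
            dfs(v)
    return sccs

# Preference cycle a > b > c > a plus a tail c > d: the cycle collapses
# into a single node of the condensation DAG.
graph = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}
sccs = tarjan_scc(graph)
```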
demand that a complete joint probability distribution over all possible states and actions be pre-defined.
One potential source of missing information is the possible actions that other players contemplate. This amounts to not knowing the set of possible player types and the probability distribution over them, on which traditional models of games rely. Arguably, the board game The Settlers of Catan (or Settlers) has this feature. Two to four players build settlements and cities connected by roads on the island of Catan. They must use certain resources (clay, ore, sheep, wheat and wood) in order to build; e.g., a road requires 1 clay and 1 wood. Victory points are awarded to players in several ways: e.g., by building a settlement (1 point) or a city (2 points). It is a win-lose game: the first player with 10 victory points wins. 3 There are several thousand end states, but it is always clear who wins; so there is common knowledge about each player’s intrinsic preferences. But players often negotiate trades with one another in order to obtain the resources they need. Players can agree to any trade, which makes the game tree non-enumerable: there are an unbounded number of possible trades because agents can promise to perform a particular future move as part of the trade, e.g., If you trade clay for wood now, I will give you wheat when I get it (see Section 3.1.2 for the description of a corpus of humans playing Settlers). Natural language also provides an unbounded way of expressing such trades. 4 They can lie or bluff, too.
by comparing composite symbolic possibility values. We have shown the connections between π-pref nets and CP-nets: while π-pref nets capture the Pareto ordering between configurations described by vectors of local satisfaction degrees, the ceteris paribus assumption of CP-nets can be modeled by adding new constraints between products of symbols appearing in the π-pref net preference tables. In some sense, π-pref nets are a more flexible approach to preference modeling than CP-nets. In particular, π-pref nets can also express conditional indifference (as, e.g., in Example 1). Besides, as possibilistic networks, their contents can be put equivalently into a logical form . Lastly, we have raised the question of whether the CP-net preference ordering always refines the one induced by π-pref nets, which remains an open problem.
More generally, the model under consideration introduces a theory of consumption that leads to two distinct types of behaviour. More explicitly, an individual would consume his whole permanent income if he were to undertake his consumption choice in a close neighbourhood of the unsatiated steady state. In opposition to this somewhat standard result, his consumption behaviour would change as soon as he completes his decision in the neighbourhood of the satiated state: his current consumption becomes unrelated to his permanent income and is instead fully determined by his past consumption choices. Interestingly, this latter configuration is reminiscent of the alternative approaches raised by J. Duesenberry and T. Brown half a century ago. It is indeed worth recalling that whilst Duesenberry advocated a theory where current consumption was determined by a benchmark level of income, namely the maximal one reached by the individual in his lifetime, Brown put forward an approach where past consumption levels emerge as the main determinant of current consumption behaviour. Though both of these formulations can be compared to numerous features of the current theory of consumption, the latter strongly differs by being based upon fully rational decisions, its main features being further directly understood from the ordinal features of the rate of time preference that was omitted by these early theories.
Indecision is the key to flexibility. Proverb
The primitive of the theory of choice among opportunity sets is a preference relation defined on a collection X of subsets of a given space of alternatives. These subsets are interpreted as “menus” from which an alternative will be selected at some later (unmodeled) stage. With this dynamic interpretation in mind, Kreps  introduced a monotonicity property called “preference for flexibility,” which states that a decision maker (henceforth, DM) should weakly prefer a given menu to any proper subset of it. This property appears particularly appealing when the DM faces unforeseen contingencies and has become a fairly common postulate in the menu choice literature. 1 Yet, there are many situations in life where an agent may strictly prefer smaller menus to larger ones, for instance if he suffers from temptation à la Gul and Pesendorfer  or if he anticipates regret as in Sarver . 2 Because they typically focus on a single psychological phenomenon, most models of menu choice allow for either preference for flexibility or commitment concerns, but not both. In this paper, we investigate the extent to which both concerns may coexist within a single framework, provided one imposes some discipline on the way those concerns may emerge.
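The tension between flexibility and commitment can be illustrated with two textbook menu valuations (made-up numbers; these are only schematic versions of the cited models): an indirect-utility valuation always weakly prefers larger menus, while a Gul-Pesendorfer-style self-control valuation can strictly prefer a smaller menu:

```python
# Hypothetical tastes: normative utility u and temptation utility v.
u = {"broccoli": 2.0, "cake": 1.0}
v = {"broccoli": 0.0, "cake": 5.0}

def indirect(menu):
    """Kreps-style valuation: pick the best element later, so larger
    menus are always weakly better (preference for flexibility)."""
    return max(u[x] for x in menu)

def self_control(menu):
    """Self-control valuation U(A) = max_x (u+v)(x) - max_y v(y):
    temptation can make a smaller menu strictly better (commitment)."""
    return max(u[x] + v[x] for x in menu) - max(v[y] for y in menu)
```

Here `indirect` weakly prefers {broccoli, cake} to {broccoli}, whereas `self_control` strictly prefers the singleton: keeping cake off the menu avoids the temptation cost.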
preferences varies across socio-demographic and ideological groups and types of electoral systems. We end with a discussion of the implications of our findings.
Why study party preference representation?
To capture the quality of representation, scholars generally measure the congruence between citizens’ policy preferences, or their ideological orientations, and those of decision-makers. Such work, however, has to make tough decisions about which issues to take into account or which ideological dimensions to consider for measuring congruence. As a result, most previous studies either focus on the correspondence between public opinion and the government’s or parliament’s position along the left-right dimension (Golder and Lloyd 2014; Golder and Stramski 2010), or they investigate congruence in a specific policy domain, such as welfare spending (Kang and Powell 2010; Hooghe et al. 2019). When studying these questions comparatively, such an approach has two kinds of limitations. First, in many countries, the political space is characterized by more than one dimension (Bakker et al. 2015) and voters care about more than a single left-right dimension (Stecker and Tausendpfund 2016). Second, the meaning of left and right differs between countries (Piurko et al. 2011) and over time (de Vries et al. 2013).