proving the corollary.
If the scale of grades is not rich enough, i.e., a voter is forced to give the same grade to two candidates even though she has a strict preference between them, the theorem does not apply, since the ranking cannot be deduced from the grades: identical grades for two candidates can bear three meanings (preference for the first, preference for the second, or indifference). For example, with only two grades MJ becomes approval voting. A voter may Approve both candidates or Disapprove both without actually being indifferent between them, so agreement with the **majority** **rule** calculated on the basis of preferences cannot be guaranteed even when the electorate is polarized. Thus, if Approve means Good or better in the polarized profile of Poutou versus Le Pen (Table 19), approval voting elects Le Pen with 32.16% rather than Poutou with 13.71%, in disagreement with MR (and hence with MJ with the full complement of grades). To guarantee agreement with the **majority** **rule** on polarized pairs, the scale of grades must therefore be rich enough to faithfully represent the voters' preferences and indifferences. The optimal number of grades to use depends on the application.
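A toy computation (hypothetical five-voter ballots, not the Table 19 data) shows how collapsing a rich scale to two grades can flip the outcome away from the majority-rule winner:

```python
# Hypothetical ballots on a six-grade scale; Approve = "Good" or better.
GRADES = ["Bad", "Poor", "Fair", "Good", "Very Good", "Excellent"]
RANK = {g: i for i, g in enumerate(GRADES)}

# Each ballot: (grade given to A, grade given to B)
ballots = [
    ("Excellent", "Good"),  # prefers A, yet approves both
    ("Excellent", "Good"),  # prefers A, yet approves both
    ("Excellent", "Good"),  # prefers A, yet approves both
    ("Bad", "Good"),        # prefers B, approves B only
    ("Bad", "Good"),        # prefers B, approves B only
]

def majority_winner(ballots):
    """Pairwise majority comparison deduced from the full grade scale."""
    a = sum(RANK[ga] > RANK[gb] for ga, gb in ballots)
    b = sum(RANK[gb] > RANK[ga] for ga, gb in ballots)
    return "A" if a > b else "B"

def approval_winner(ballots, threshold="Good"):
    """Winner after collapsing the scale to Approve/Disapprove."""
    a = sum(RANK[ga] >= RANK[threshold] for ga, gb in ballots)
    b = sum(RANK[gb] >= RANK[threshold] for ga, gb in ballots)
    return "A" if a > b else "B"

print(majority_winner(ballots))  # A (3 voters prefer A, 2 prefer B)
print(approval_winner(ballots))  # B (A approved 3 times, B approved 5 times)
```

Here three voters strictly prefer A but approve both candidates, so their preference is invisible to approval voting: B wins on approvals while A wins the majority comparison.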

the difference between our type of restrictions and those proposed by other authors. There is a fundamental difference between our setup and all the others we mention (with the exception of Grandmont's). In our case, the set of admissible orders of the alternatives may differ across agents. Indeed, each voter is allowed to have at most as many indifference classes as the number of members in his partition of objective indifference classes. Hence, agents are allowed to have different sets of preferences. Under our restrictions, admissible domains are personalized. By contrast, the classical restrictions we now briefly review do limit the set of preferences which are admissible, but then allow all agents to exhibit any of the preferences in this common pool. Inada (1964) considered the case where each agent can classify the set of alternatives into two groups, and then regards all alternatives within the same group as indifferent. He was concerned with transitivity of the **majority** **rule** and showed that if a profile satisfies n DP (for each agent there is a pair which he regards as indifferent), then the **majority** **rule** is transitive. Obviously n DP implies (n − 1) DP but not vice versa. Furthermore, note that Inada (1964) deals with transitivity of the **majority** **rule** whereas we deal with quasi-transitivity, and Inada (1964) did not search for necessary conditions. Our conditions also rely on the establishment of "large" indifference classes, but the analogy stops there. Another interesting set of restrictions was proposed by Sen and Pattanaik (1969) and Inada (1969). Let R ∈ R^N and {a, b, c} be a triple of alternatives. The profile R satisfies value restriction (VR) for the triple {a, b, c} if there is one alternative in the triple, say a, that is not ranked worst (or best, or medium) by all individuals who are not indifferent between a, b, and c (i.e., (for all i ∈ N such that ¬aI_i bI_i c, aP_i b ∨ aP_i c) or (for all i ∈ N such that
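The value-restriction condition can be checked mechanically. A minimal sketch (the profiles are hypothetical; utilities encode weak orders, with ties meaning indifference):

```python
def value_restricted(profile, triple):
    """True iff some alternative in the triple is never best, never medium,
    or never worst among the voters not fully indifferent over the triple."""
    a, b, c = triple
    concerned = [u for u in profile if len({u[a], u[b], u[c]}) > 1]
    for x in triple:
        y, z = [t for t in triple if t != x]
        not_worst  = all(u[x] > u[y] or u[x] > u[z] for u in concerned)
        not_best   = all(u[x] < u[y] or u[x] < u[z] for u in concerned)
        not_medium = all((u[x] > u[y] and u[x] > u[z]) or
                         (u[x] < u[y] and u[x] < u[z]) for u in concerned)
        if not_worst or not_best or not_medium:
            return True
    return False

# Single-peaked-like profile: b is never ranked worst, so VR holds.
profile_ok = [{"a": 3, "b": 2, "c": 1},
              {"a": 1, "b": 2, "c": 3},
              {"a": 1, "b": 3, "c": 2}]

# Condorcet-cycle profile: every alternative takes every position, so VR fails.
profile_cycle = [{"a": 3, "b": 2, "c": 1},
                 {"a": 1, "b": 3, "c": 2},
                 {"a": 2, "b": 1, "c": 3}]

print(value_restricted(profile_ok, ("a", "b", "c")))     # True
print(value_restricted(profile_cycle, ("a", "b", "c")))  # False
```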

May's (1952) condition of "positive responsiveness", a variant of Arrow's "positive association of social and individual values" (1963, p. 25), is a key component of his characterization of **majority** **rule**. A generalization of this condition to an environment with many alternatives also plays a key role in our analysis. Many generalizations of the condition are possible. Suppose that the alternatives x and y (among possibly others) are both selected by a collective choice **rule**, in a "tie". The spirit of May's condition is that an improvement in one individual's ranking of one of these alternatives relative to the other breaks the tie. Specifically, if one individual's preferences change from ranking x below y to ranking x above y, then the collective choice **rule** should still select x but no longer select y.
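For two alternatives under simple majority, the tie-breaking behavior described above can be seen directly (a minimal sketch; the ballots are illustrative):

```python
def majority_choice(ballots):
    """ballots: each voter's preferred alternative, 'x' or 'y'.
    Returns the set of selected alternatives under simple majority."""
    x, y = ballots.count("x"), ballots.count("y")
    if x > y:
        return {"x"}
    if y > x:
        return {"y"}
    return {"x", "y"}

tied = ["x", "x", "y", "y"]
print(majority_choice(tied))   # a tie: both x and y are selected
tied[2] = "x"                  # one voter now ranks x above y
print(majority_choice(tied))   # the tie is broken: only x is selected
```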

the social rankings. Such results are not surprising, since when there is no structure in the data our results are similar to those of a random selection.
5 Conclusion and future work
In this paper, we presented new results on the feasibility and the expected outcomes of implementing the CP-**Majority** principle for social ranking. We analyzed the probability of Condorcet cycles and presented an LP model for obtaining a transitive social ranking as close as possible to a CP-**majority** **rule**. We also addressed the learning of a CP-**majority**-like **rule** using a subset of coalitions as voters. We obtained interesting results for small n. Further simulations must be done with larger n and different types of data.

Abstract. We consider the problem of eliciting a model for ordered classification. In particular, we consider **Majority** **Rule** Sorting (MR-sort), a popular model for multiple criteria decision analysis, based on pairwise comparisons between alternatives and idealized profiles representing the “limit” of each category. Our interactive elicitation protocol asks the decision maker, at each step, to classify an alternative; these assignments are used as a training set for learning the model. Since we wish to limit the cognitive burden of elicitation, we aim at asking informative questions in order to find a good approximation of the optimal classification in a limited number of elicitation steps. We propose efficient strategies for computing the next question and show how its computation can be formulated as a linear program. We present experimental results showing the effectiveness of our approach.
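As a rough illustration of the MR-sort model itself (the weights, limit profiles, and majority threshold below are invented placeholders, and this is one common simplified formulation rather than the paper's exact protocol): an alternative is assigned to the highest category whose lower limit profile it outranks, where outranking means the criteria on which the alternative matches the profile carry a qualified majority of the weight.

```python
def outranks(alt, profile, weights, lam):
    # total weight of the criteria on which alt is at least as good as the profile
    support = sum(w for a_j, b_j, w in zip(alt, profile, weights) if a_j >= b_j)
    return support >= lam

def mr_sort(alt, limit_profiles, weights, lam):
    """limit_profiles[h] is the lower limit of category h+1, ordered worst to best.
    Returns the index of the assigned category (0 = worst)."""
    category = 0
    for h, b in enumerate(limit_profiles, start=1):
        if outranks(alt, b, weights, lam):
            category = h
    return category

weights = [3, 3, 4]                     # criterion weights (total 10); placeholders
lam = 6                                 # qualified majority: 60% of the weight
limits = [[10, 10, 10], [15, 15, 15]]   # lower limits of categories 1 and 2

print(mr_sort([12, 16, 12], limits, weights, lam))  # 1: outranks the first limit only
print(mr_sort([16, 16, 14], limits, weights, lam))  # 2: criteria 1+2 carry weight 6
print(mr_sort([5, 5, 5], limits, weights, lam))     # 0: outranks neither limit
```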

Keywords: **majority** **rule**, median graph, tree, Condorcet winner, intermediate preferences JEL Classification D720, D710
1 Introduction
The **majority** **rule** is as prominent a voting **rule** as any. Yet there are difficulties due to the possibility of **majority** cycles and the non-existence of a **majority** winner, as illustrated by the famous Condorcet paradox. Such difficulties are unavoidable, as they are bound to arise in some form with any non-dictatorial **rule**, as shown by Arrow (1963). Not surprisingly, starting with Black (1948), a large literature tries to find conditions under which the **majority** **rule** is well-behaved. This paper provides an additional contribution to this literature. It displays families of preferences that guarantee the transitivity of the **majority** **rule**, meaning that the **majority** **rule** is transitive no matter what the profile of individuals' preferences in the family. Preferences are characterized by a parameter and satisfy two conditions, one on the parameter space and one on how these preferences depend on the parameter. Specifically, the parameter space is a median graph and the preferences satisfy an intermediateness assumption, terms that will be explained below. Under these conditions,
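The Condorcet paradox mentioned above takes only three voters to reproduce:

```python
# The classic three-voter cycle: a beats b, b beats c, and yet c beats a,
# so the majority relation is intransitive and there is no majority winner.
rankings = [["a", "b", "c"],   # voter 1: a > b > c
            ["b", "c", "a"],   # voter 2: b > c > a
            ["c", "a", "b"]]   # voter 3: c > a > b

def majority_prefers(x, y):
    """True iff a strict majority of voters rank x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

print(majority_prefers("a", "b"),
      majority_prefers("b", "c"),
      majority_prefers("c", "a"))   # True True True: a majority cycle
```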

Following Black (1948), various restrictions on preferences over a one-dimensional set of alternatives have been introduced and shown to guarantee the existence of a Condorcet winner: single-peakedness, single-crossing, order restriction, and recently top-monotonicity by Barbera and Moreno (2009), which encompasses all of them (we refer the reader to their paper for precise definitions of these restrictions and their comparison). A one-dimensional set of alternatives is, however, a strong limitation. Unfortunately, extending these positive results to a multi-dimensional space turns out to be disappointing. Not only do the extensions of the previous properties (say, single-peakedness) fail to guarantee the existence of a Condorcet winner, but **majority** cycles are also pervasive. A Condorcet winner exists only under very specific configurations of the profile of preferences (Kramer 1973; Plott 1967; or Demange 1983 for a survey of these issues). In other words, restrictions on the distribution of preferences within the society are necessary. Although interesting, these restrictions are so strong that they are most likely to fail. Furthermore, they are not robust to a change in the preferences of a single individual: if the restrictions are satisfied, a single change typically leads to cycles, thereby precluding a general prediction of the **majority** mechanism. As a result, the strategy-proofness of the **majority** **rule** has no meaning, since the **majority** choice is not well-defined for most profiles.

Abstract. Consider a two-dimensional lattice with the von Neumann neighborhood such that each site has a value belonging to {0, 1} which changes state following a freezing non-strict **majority** **rule**, i.e., sites at state 1 remain unchanged and those at state 0 change iff two or more of their neighbors are at state 1. We study the complexity of the decision problem of deciding whether an arbitrary site initially in state 0 will change to state 1. We show that the problem is in the class NC by proving a characterization of the maximal sets of stable sites as the tri-connected components.
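The dynamics in the abstract can be sketched directly (free boundary conditions assumed; the initial configuration is an arbitrary example):

```python
def step(grid):
    """One synchronous step of the freezing non-strict majority rule:
    1s are frozen; a 0 becomes 1 iff >= 2 von Neumann neighbors are 1."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            if grid[i][j] == 0:
                ones = sum(grid[i + di][j + dj]
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= i + di < n and 0 <= j + dj < m)
                if ones >= 2:
                    new[i][j] = 1
    return new

def fixed_point(grid):
    """Iterate until no site changes (the rule is monotone, so this terminates)."""
    while True:
        nxt = step(grid)
        if nxt == grid:
            return grid
        grid = nxt

grid = [[1, 0, 1],
        [0, 0, 0],
        [0, 0, 0]]
print(fixed_point(grid))  # [[1, 1, 1], [0, 0, 0], [0, 0, 0]]: the gap in the
                          # top row fills in; the remaining 0s are stable
```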

These results are consistent with empirical and experimental evidence that committee decisions typically involve qualified, rather than simple, majorities. For example, the voting records of the committees of the Bank of England, the Riksbank and the Federal Reserve show that split decisions are extremely infrequent. The experimental study by Blinder and Morgan (2005) finds that even though their artificial monetary committee is supposed to make decisions by **majority** **rule**, in reality most decisions are unanimous. Experimental runs of the divide-the-dollar game show that despite the simple-**majority** requirement necessary to pass a proposal, the agenda setter does not always select a minimum winning coalition: in some cases (roughly 30 to 40 percent of the experiments in McKelvey, 1991 and Diermeier and Morton, 2005), agenda setters allocate money to all players.

Keywords and phrases distributed voting, **majority** **rule**
Digital Object Identifier 10.4230/LIPIcs.MFCS.2016.55
1 Introduction
Distributed voting is a fundamental problem in distributed computing. We are given a network of players modeled as a graph. Each player in the network starts with one initial opinion out of a set of possible opinions. Then the voting process runs either synchronously, in discrete rounds, or asynchronously, according to some activation mechanism. During these rounds in the synchronous case, or upon activation in the asynchronous case, the players are allowed to communicate with their direct neighbors in the network, with the main goal of eventually agreeing on one of the initial opinions. If all nodes agree on one opinion, we say this
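A minimal synchronous sketch (the specific update rule used here, each node adopting the most frequent opinion in its closed neighborhood with a deterministic tie-break, is one of several rules studied in this literature, not one fixed by the text):

```python
from collections import Counter

def sync_round(opinions, adj):
    """One synchronous round: every node adopts the most common opinion
    among itself and its neighbors; ties broken by opinion label."""
    new = {}
    for v, nbrs in adj.items():
        votes = Counter([opinions[v]] + [opinions[u] for u in nbrs])
        new[v] = max(votes, key=lambda o: (votes[o], o))
    return new

# A small example graph (symmetric adjacency) and initial opinions.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
opinions = {0: "A", 1: "A", 2: "B", 3: "B"}

for _ in range(10):                       # cap in case the process oscillates
    if len(set(opinions.values())) == 1:
        break
    opinions = sync_round(opinions, adj)
print(opinions)   # all nodes now share one of the initial opinions
```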

may become applicable. For example, if the **rule** concludes y ≈ 3x, then the quality of the **rule** could depend on the difference |y − 3x|. If we know that x follows a certain probability distribution, x ∼ F(µ, σ), then the confidence could be the probability that our predictions were drawn from such a distribution, i.e., P(y | y = 3x ∧ x ∼ F(µ, σ)). While such measures abound, the difficulty is to make numeric measures interoperable with classical measures. For example, if a **rule** mining algorithm produces both numerical and non-numerical rules, and if numerical rules have an error measure but no confidence, then it is unclear how these rules can be ranked together. We propose to replace a constraint of the form y ≈ φ by y > φ − ε ∧ y < φ + ε for a suitable error margin ε. If the constraint appears in the head, the **rule** has to be split into two rules that each have one of the conjuncts in the head (to still meet the language of Horn rules). The value of ε depends on the domain of y. If the domain of y operates under a value scale, i.e., a scale for which ratios are not defined (such as temperatures or geographic coordinates), ε can be defined as an absolute error. This requires the choice of an appropriate numerical constant, which depends, e.g., on the unit system that y uses. An absolute error of 2 units for predicting temperatures in Celsius may be too loose if the scale is changed to Fahrenheit or if we are trying to predict latitudes. If y operates under a ratio scale (as is the case for populations or distances), we can use a relative error ε = αy, with α ∈ (0, 1). Then α is independent of the unit system and can be set to a default value.
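The margin construction above can be sketched as follows (function names are ours):

```python
# Replace y ≈ φ by the interval test φ − ε < y < φ + ε, with ε absolute
# for value scales and ε = α·y (relative) for ratio scales.

def within_absolute(y, phi, eps):
    """Value scales (e.g., temperatures): a fixed, unit-dependent margin."""
    return phi - eps < y < phi + eps

def within_relative(y, phi, alpha):
    """Ratio scales (e.g., populations): margin proportional to y, unit-free."""
    eps = alpha * abs(y)
    return phi - eps < y < phi + eps

# Temperature with an absolute margin of 2 units:
print(within_absolute(21.5, 20.0, 2.0))             # True
print(within_absolute(23.0, 20.0, 2.0))             # False
# Population with a 5% relative margin; rescaling units does not change the verdict:
print(within_relative(1_050_000, 1_000_000, 0.05))  # True
print(within_relative(1050, 1000, 0.05))            # True
```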


- “K16: knowledge relative to the material of the filter support” provided by one decision maker. We have applied the DOMLEM algorithm, proposed in the DRSA method, to infer rules permitting to[r]

Figure 1: QoS information of layers
This issue led us to propose an approach for implementing a QoS mapping **rule** builder, which is responsible for generating mapping rules from QoS information coming from different sources. The working principle of the QoS mapping **rule** builder consists in mining the statistical data and configuration files containing the working states and configurations of each component of the system, in order to produce association rules describing the QoS relationships between them. Two main techniques are used for this task: i) classification and prediction of user QoS requirements, based on client-side configuration, and ii) clustering of system runtime information. The rest of this paper is organized as follows: Section 2 reviews the QoS mapping activity and presents a typology of mapping rules. In Section 3 we present the monitoring tools used for collecting statistical data and discuss the feasibility of using statistical data to generate QoS mapping rules. In Section 4 we detail the data mining techniques used for this goal. Section 5 concludes and presents future work.

their results to the m-dimensional grid [n]^m for m ≥ (log log n)^2 (log log log n). Also, Stefánsson and Vallier [41] studied the non-strict **majority** model for the random graph G(n, p). (Note that, since G(n, p) is not a regular graph, this process cannot be formulated in terms of ordinary bootstrap percolation.) For the strict **majority** case, we first state a consequence of the work of Balogh and Pittel [15] on random d-regular graphs mentioned earlier. Let G_{n,d} denote a graph chosen uniformly

The intermediate normal cone leads to a tighter marginal pricing **rule**. But it remains to be seen whether it is the tightest. The following computation shows that the answer is no, since by considering another definition of the normal cone, through an interior approximation, we get a different normal cone. In our example, the primary definition gives the smallest normal cone. But if we reverse the production set, as explained below, the contrary holds true. Thus, we have provided an improvement by introducing this new definition of the marginal pricing **rule**, but the question of defining the “best” normal cone compatible with the existence of an equilibrium is still open. Note that the question is irrelevant for Clarke's normal cone since, for epi-Lipschitzian sets, it is known that the normal cone coincides with the opposite of the normal cone of the closure of the complement.

3.3.2 Complexity results for the **majority** problem
We recall that PP is the set of languages accepted by a probabilistic polynomial-time Turing machine with an error probability of less than 1/2 for each instance, i.e., a word in the language is accepted with probability at least 1/2, and a word not in the language is accepted with probability less than 1/2. Alternatively, one can define PP as the set of languages accepted by a non-deterministic Turing machine where the acceptance condition is that a **majority** of paths are accepting. Notably, PP contains both NP and coNP, as well as C=P. Also, PP is closed under finite intersection. A natural PP-complete problem is MAJSAT: is a Boolean CNF formula satisfied by at least half of its valuations?
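A brute-force MAJSAT check on tiny instances (illustrative only; PP-completeness means no efficient algorithm is expected in general):

```python
from itertools import product

def majsat(clauses, n_vars):
    """clauses: CNF as lists of signed ints (i = variable i true, -i = false).
    True iff at least half of the 2^n_vars valuations satisfy the formula."""
    satisfying = 0
    for bits in product([False, True], repeat=n_vars):
        # value of a literal under the current valuation
        val = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(val(l) for l in clause) for clause in clauses):
            satisfying += 1
    return 2 * satisfying >= 2 ** n_vars

# (x1 ∨ x2): satisfied by 3 of 4 valuations -> in MAJSAT
print(majsat([[1, 2]], 2))    # True
# (x1) ∧ (x2): satisfied by 1 of 4 valuations -> not in MAJSAT
print(majsat([[1], [2]], 2))  # False
```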

A logic for reasoning about data provided by several information sources has been presented. It has been proved that the underlying merging operator it axiomatizes belongs to the class of **majority** merging operators defined by Konieczny and Pino-Perez. The axiomatisation has been proved to be sound and complete, for some kinds of interesting formulas, only in the case when the information sources are sets of literals. This is a first step towards the definition of a logic of merging, but it must be extended at least to the case when information sources are sets of clauses. For doing so, Lin and Mendelzon's work [LM98] provides us with a starting point, since in that paper a method for merging databases with disjunctive data is presented.

The digital lock clause was challenged by many witnesses who appeared before the parliamentary committee responsible for examining the bill during the last legislature[r]

Many other **rule**-based methods were proposed afterwards. Among them, the light S stemming only deals with plural forms, and so offers little compression power. On the other hand, Paice (1990) proposed a strong algorithm that applies successive deletion and replacement rules; Jivani et al. (2011) argue that this method often presents a high over-stemming error. Dawson (1974) implemented an adaptation of Lovins's method with many more suffixes and rules, which unfortunately makes the algorithm hard to reuse (Jivani et al., 2011). Krovetz (1993) proposed an approach based on inflections and derivations, with his Krovetz stemmer, KSTEM. It relies on an inflection-free lexicon to remove inflections before analyzing the derivations, i.e., the variants that change the grammatical nature of words. The author acknowledges that the performance depends strongly on the chosen lexicon. We add that building such an exhaustive lexicon might not be feasible for a rare language or dialect.
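For concreteness, a minimal plural-only stemmer in the spirit of the light S stemming mentioned above (the three rules follow the common Harman-style formulation; this is an illustration, not any of the cited algorithms verbatim):

```python
def s_stem(word):
    """Harman-style S-stemmer sketch: three ordered rules for English plurals."""
    if word.endswith("ies") and not word.endswith(("eies", "aies")):
        return word[:-3] + "y"   # ponies -> pony
    if word.endswith("es") and not word.endswith(("aes", "ees", "oes")):
        return word[:-1]         # drop the final s, keep the e
    if word.endswith("s") and not word.endswith(("us", "ss")):
        return word[:-1]         # cats -> cat
    return word                  # glass, focus: unchanged

print(s_stem("ponies"))  # pony
print(s_stem("cats"))    # cat
print(s_stem("glass"))   # glass
print(s_stem("focus"))   # focus
```

As the text notes, such a light stemmer conflates only plural variants, which is exactly why its compression power is limited compared with strong suffix-stripping algorithms.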

To conclude the section, we remark that the evidentiary standard for establishing negligence can be given an interesting interpretation. Suppose courts view the efficient p̂ as reflecting the legal due care standard, i.e., the minimum level of precautions an injurer should have taken to escape a ruling of negligence. From the above argument, we know that the court should **rule** that less than due care was exerted if the evidence satisfies x < x(p̂). Now, consider an outsider who does not know the detailed evidence but is informed of the court's decision. For this outsider, and using standard statistical terminology, p(1 − F(x(p̂); p)) is the likelihood of care level p knowing that an accident occurred and that the injurer was not found negligent. Thus, the evidentiary standard is efficient if the outsider's maximum likelihood estimate of p is then precisely p̂. Moreover, the evidence is sufficiently informative for the negligence **rule** to implement p̂ if, and only if, such an evidentiary standard exists.
