non-expected utility theory

Top PDF non-expected utility theory:

Non-classical expected utility theory with application to type indeterminacy

single measurement, but is interested in a sequence of measurements or decision problems. If our measurements do not change the state of the object, one can use Savage's classical paradigm. Recently, a few decision-theoretic papers have appeared (see, for example, [11, 7, 9, 8]) in which standard expected utility theory is transposed into a Hilbert space model. Lehrer and Shmaya write: “We adopt a similar approach and apply it to the quantum framework... While classical probability is defined over subsets (events) of a state space, quantum probability is defined over subspaces of a Hilbert space.” Gyntelberg and Hansen (2004) apply a general event-lattice theory (with axioms that resemble those of von Neumann and Morgenstern) to a similar framework. One could expect that Gyntelberg and
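As a point of reference for the contrast Lehrer and Shmaya draw, the standard (Born-rule) form of quantum probability can be stated as follows; this is textbook background, not a quotation from the papers above.

\[
\mu_\rho(E) \;=\; \operatorname{tr}(\rho\, P_E),
\]

where the event E is a closed subspace of the Hilbert space with orthogonal projector P_E and ρ is the state (density operator), whereas in the classical Savage setting the corresponding object is a probability measure μ(A) defined over subsets A of a state space Ω.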

Measurement system design for civil infrastructure using expected utility

consequences of such choices are not known is the use of utility theory and probability theory. When making decisions under risk, the possible outcomes may be evaluated through a utility and a probability of occurrence. As a result, “the decision maker [. . . ] cannot guarantee that the outcome will be as good as he might hope for, but he has made the best decision he can, based on his preferences and available knowledge”
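A minimal sketch of this kind of evaluation, assuming hypothetical outcome probabilities and utilities (none of the numbers or design names come from the paper):

```python
# Minimal sketch: ranking candidate measurement-system designs by expected
# utility. Design names, outcome probabilities, and utilities are illustrative
# assumptions, not values taken from the paper.

designs = {
    "few_sensors":  [(0.6, 40.0), (0.4, -10.0)],   # (probability, utility) pairs
    "many_sensors": [(0.9, 30.0), (0.1, -5.0)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over the possible outcomes."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in designs.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")

best = max(designs, key=lambda name: expected_utility(designs[name]))
print("preferred design:", best)
```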

Can organisational ambidexterity kill innovation? A case for non-expected utility decision making

LE GLATIN, Mario, LE MASSON, Pascal, WEIL, Benoît. Abstract: The academic construction of ambidexterity, articulated around notions such as exploration and exploitation (J. March 1991), has been flourishing over the years, with a strong background in organisational theory, to explain levels of performance and innovation. However, these authors have also called for in-depth studies of managerial capabilities such as decision-making (Birkinshaw & Gupta 2013; O'Reilly & Tushman 2013; Benner & Tushman 2015) supporting the tension of competing objectives. In this paper, we show that organisational ambidexterity can kill innovation because the underlying decision theories do not fully support the nature of the decisions required in regimes such as contextual ambidexterity (Gibson & Birkinshaw 2004). Two case studies from the aircraft cabin equipment industry are presented and analysed at the project management level with descriptors from organisational ambidexterity and decision-making. We propose to consider unconventional decision theories, taking into account non-expected utilities such as the potential regret of imagined prospects, as a means to support management tools enabling ambidexterity at the decisional and contextual levels. First, we show that common decision models based on expected utility, encoded in management tools mobilised for contextual ambidexterity, can fail to support innovation. Second, we propose that a non-expected utility, such as the potential regret of imagined prospects, supports the management of competing exploration/exploitation objectives. Third, the case studies help outline a management tool that extends observed attempts to sustain or extend contextual ambidexterity through unconventional decision-making.

Dynamic Consistency and Expected Utility with State Ambiguity

Parsimony is however quite restrictive and ends up transforming the Ellsberg choices into a paradox for seu: if the dm were to conform to the Savage theory in the parsimonious sense, then her subjective probability measure would apply to observations and put more weight on red than on blue and, simultaneously, more weight on not-red than on not-blue. This pattern cannot be supported by any standard probability measure, hence the paradox. Ellsberg constructs this thought experiment to argue that seu forbids confidence in probability judgements to affect observable behavior. However, following the intuitions of the introductory prevention example, it is not clear whether this impossibility is really due to seu or to its parsimonious interpretation. Put differently, the Ellsberg paradox, which is usually analyzed as a failure of the dm to conform to seu or a failure of seu to describe her behavior accurately, might also be thought of as a failure of the parsimony rule. From this perspective, the fact that the probabilities of the observable events {b} and {r, g} cannot sum to 1, which motivates the introduction of non-additive probabilities in models of ambiguity as in Schmeidler (1989), may now be rationalized through an ambiguous state of nature ω ∈ Ω that simultaneously encodes both observable events {b} and {r, g}.

Parsimony rule vs. state ambiguity: the dynamic case. The dynamic extension of the Ellsberg choices is reputed to generate an inconsistency in the dm's behavior, which remains an important obstacle to certain economic applications of ambiguity (see Machina, 1989; Epstein and Le Breton, 1993; Wakker, 1997; Hanany and Klibanoff, 2007; or Al-Najjar and Weinstein, 2009). Epistemic rationalizations of behavior through state ambiguity might shed, in return, some light on this issue and lead to dynamically consistent reformulations of the Ellsberg choices. Consider the following dynamic version of the Ellsberg decision situations. The dm knows ex ante that she is about to be told whether the ball drawn from the urn is green or not. In the former case, there is no real decision to make while, in the latter case, she must choose whether to bet on red or to bet on blue. Under reduction and consequentialism, optimal behavior is fully determined by the
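The impossibility invoked at the start of this excerpt can be written out in one line. With colours r (red), b (blue) and g (green), the modal Ellsberg choices require

\[
P(r) > P(b) \qquad\text{and}\qquad P(\{b,g\}) > P(\{r,g\}).
\]

For any additive probability measure, P({b,g}) = P(b) + P(g) and P({r,g}) = P(r) + P(g), so the second inequality reduces to P(b) > P(r), contradicting the first; hence no standard probability measure supports the pattern.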

Second-Order Beliefs and Second-Order Expected Utility

for some utility function u : Z → R. In order to accommodate subjective uncertainty, Anscombe and Aumann (1963) introduce a finite set S of states of nature. An act (or horse lottery) is then defined as a function h : S → L(Z) from states of nature to objective lotteries on prizes in Z. The set of acts is denoted by H. Anscombe and Aumann (1963) assumed that the decision maker can not only rank objective lotteries (elements of L(Z)), but can also rank lotteries on acts (elements of L(H)). The decision maker's preference ordering ≿ on L(H) is assumed to extend the preference ordering ≿_ca on L(Z) by identifying constant acts with objective lotteries. The preference ≿ is also assumed to satisfy the axioms of expected utility theory and can then be numerically represented by a functional
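The excerpt is cut off just before the functional is displayed; in its usual textbook form (stated here as background rather than quoted from the paper), the Anscombe-Aumann representation of the preference over acts h ∈ H is

\[
V(h) \;=\; \sum_{s \in S} \mu(s) \sum_{z \in Z} h(s)(z)\, u(z),
\]

with μ a subjective probability on S and u the utility on prizes; the representation extends to lotteries over acts by taking expectations with respect to the objective mixture.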

Optimal tax policy and expected longevity: A mean and variance utility approach

So as to account for individuals' attitude toward risk on longevity, we model individual preferences using a 'mean and variance' utility function, and we assume that individuals have different sensitivities to the variance of lifetime welfare. As is well known since Bommier's (2005) work, there exist two broad ways to depart from net risk-neutrality with respect to the length of life. One way is to relax additive lifetime welfare, as in Bommier's (2005) works; the alternative solution is to relax the expected utility hypothesis. The former approach has the advantage of retaining the convenient expected utility theory, but suffers from a lack of intuition behind non-additive lifetime welfare. This is why, in this paper, we prefer to keep additive lifetime welfare but to relax the expected utility hypothesis. Thus, lifetime welfare is still assumed to be additive in temporal welfare (without pure time preferences), but the expected utility hypothesis is here replaced by a less restrictive postulate.
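One common way to write such a 'mean and variance' postulate over additive lifetime welfare is sketched below; the exact functional form used in the paper is not shown in this excerpt, so this is only an illustrative assumption.

\[
V_i \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} u(c_t)\right] \;-\; \alpha_i\, \mathrm{Var}\!\left[\sum_{t=1}^{T} u(c_t)\right],
\]

where T is the (random) length of life, u(c_t) is temporal welfare in period t, and α_i captures individual i's sensitivity to the variance of lifetime welfare.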

Monte Carlo Algorithms for Expected Utility Estimation in Dynamic Purchasing

the estimate computed for a choice is equal to its expected utility. This is the case since this decision-making process is exactly what is simulated by the classification tree method. However, this is not likely to be the way decisions are ultimately made. It does not make sense to use previously constructed classification trees if new information has been obtained in the interim. In fact, any time new information is obtained (e.g. prices become known, new prequotes are received, other quotes expire), the purchase procedure trees and their corresponding QR-trees should be reconstructed. If this is done, however, the classification tree method is not guaranteed to provide an underestimate. It is possible that, when attempting to solve some decision point d in the purchase procedure tree, even though more information may be available at d's decision time than when the classification trees were initially determined, the decision-maker may make a “mistake” that was not made during the initial simulation. Here, a mistake refers to the act of opting for the choice with the lower true expected utility. Such a possibility could, in theory, result in the buyer's true expected utility actually being less than that originally estimated.
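For orientation, the plain Monte Carlo baseline that such estimates build on can be sketched as follows; this is not the paper's classification-tree/QR-tree algorithm, and the price distribution and utility function are illustrative assumptions.

```python
import random

# Plain Monte Carlo estimate of the expected utility of a purchasing choice.
# The price distribution and the utility function are illustrative assumptions;
# the paper's classification-tree and QR-tree machinery is not reproduced here.

def sample_price():
    """Hypothetical stochastic quote for the good (uniform on [80, 120])."""
    return random.uniform(80.0, 120.0)

def utility(price, valuation=110.0):
    """Surplus from buying at `price` given our valuation of the good."""
    return valuation - price

def estimate_expected_utility(n_samples=100_000):
    """Average utility over simulated price realisations."""
    total = sum(utility(sample_price()) for _ in range(n_samples))
    return total / n_samples

if __name__ == "__main__":
    print(f"estimated expected utility: {estimate_expected_utility():.2f}")
```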

Facts, Norms and Expected Utility Functions

This approach to normativity stems from the Aristotelian tradition, in which the opinion of the wisest (ta endoxa) is often considered the best possible premise for practical reasoning and should be adopted as such in our moral deliberations (Vega 1998). Kant excluded this sort of prudential concern from the realm of ethics in favour of more general, unconditional imperatives. Yet the justification of EUF as prescriptive rules has drawn on this Aristotelian strategy from their very inception to the present day. The purpose of this paper is to show how this argumentative pattern works in three different cases, in order to illustrate its cogency and scope. In §1, we will briefly discuss how EUF were originally conceived as a wise decision rule that solved the normative dilemma posed by the St. Petersburg paradox. In §2, we will analyze how Condorcet contested this solution because it failed to capture the criterion actually applied by the most expert decision-makers. In §3, we will see that precisely this objection recurred when Allais developed his paradox against the American School, and how he defended his own theory as a better approximation to the expert's criterion. In view of all this, we will briefly discuss, by way of conclusion, to what extent this strategy is still defensible as a ground for any prescriptive rule.

Considering Expected Utility of Future Bidding Options in Bundle Purchasing with Multiple Auction

limits of classical auction theory, and have been moving into the domain of computer science. As discussed previously, the idea of using dynamic programming to make bidding decisions was used by Boutilier et al. [2, 3] to determine how much to bid where the products were auctioned in sequence. Those authors also considered the problem of purchasing bundles; however, in their model no auctions overlapped. In order to formulate distributions of the bid outcomes, they also examine multi-round auctioning, where bid distributions can be learned over time. Byde [6] used dynamic programming to make bidding decisions in simultaneous auctions where the bidder is only interested in obtaining a single product. Byde et al. [7] analyzed the problem of determining the optimal set of auctions for purchasing multiple units of a single product, and Preist et al. [12] gave a method that determines the optimal set of auctions when bundles of products are needed and simultaneous bidding in multiple auctions is permitted.
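In the same spirit as the dynamic-programming approaches cited above, a minimal backward-induction sketch for a sequence of single-item auctions might look as follows; the win-probability model, valuation, and bid grid are assumptions made for illustration, not taken from any of the cited papers.

```python
# Backward induction over a sequence of single-item auctions: V[t] is the
# expected utility when t auctions remain and the item has not yet been won.
# Valuation, bid grid, and win-probability model are illustrative assumptions.

VALUATION = 100.0
BID_GRID = [0.0, 20.0, 40.0, 60.0, 80.0]

def win_prob(bid):
    """Hypothetical probability of winning an auction with this bid."""
    return min(1.0, bid / 100.0)

def solve(n_auctions):
    V = [0.0] * (n_auctions + 1)       # V[0]: no auctions left, item not won
    policy = [None] * (n_auctions + 1)
    for t in range(1, n_auctions + 1):
        best_bid, best_val = None, float("-inf")
        for b in BID_GRID:
            p = win_prob(b)
            value = p * (VALUATION - b) + (1.0 - p) * V[t - 1]
            if value > best_val:
                best_bid, best_val = b, value
        V[t], policy[t] = best_val, best_bid
    return V, policy

V, policy = solve(3)
print("optimal first bid with 3 auctions left:", policy[3])
print("expected utility:", round(V[3], 2))
```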

More pessimism than greediness: a characterization of monotone risk aversion in the Rank-Dependent Expected Utility model

The notion of monotone risk aversion is model-free; it has been proved useful in EU (see Section 2.3) and is well fitted to RDEU theory (see [11, 20, 28, 32]), where comonotonicity plays a fundamental part at the axiomatic level. The above analysis restricted u to the case of concave functions. Consistent with Allais' criticism, it is of interest to study whether a decision maker can be averse to risk without u being concave. In a previous paper, Chateauneuf & Cohen [7] proved that pessimism of f is a necessary condition for weak risk aversion, while concavity of u is not, but they did not succeed in fully characterizing weak risk aversion. This is the subject of our current research, from which we know that when non-concave u are allowed, weak risk aversion does not imply monotone risk aversion.
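For readers unfamiliar with the model, the rank-dependent expected utility of a finite lottery can be written, in its standard form with utility u on outcomes and probability-transformation function f, as

\[
\mathrm{RDEU}(X) \;=\; u(x_1) \;+\; \sum_{i=2}^{n} f\!\Big(\sum_{j=i}^{n} p_j\Big)\,\big[u(x_i) - u(x_{i-1})\big],
\qquad x_1 \le x_2 \le \dots \le x_n,
\]

so that pessimism of f (f(p) ≤ p for all p) down-weights the utility increments attached to the better outcomes.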

The utility of queer theory in reconceptualising ageing in HIV-positive gay men

Ageing and HIV: medical ageing with HIV. Since the advent of highly active anti-retroviral therapy (HAART) it has become possible for many in the West to live with HIV rather than await one's likely death from AIDS. However, this has had the knock-on implication of the HIV-positive population ageing. For example, in 2010 in the United States 35% of all people living with an HIV diagnosis were over 50,[33] and this is expected to rise to 50% by 2015.[34] A similar trend can be seen in the UK[35] and in any other country where HAART is becoming commonplace and widely accessible.[36] It has been regularly stated in the medical literature[37] that HAART allows HIV-positive individuals to achieve a 'near normal' life expectancy; with 'near normal' meaning that individuals diagnosed with HIV live on average seven to ten years less than their HIV-negative counterparts,[38] and that many of the problems associated with 'old age' occur at a younger chronological age for HIV-positive individuals, including: liver cirrhosis, renal disease, heart disease, neurological complications, immunosenescence (weaker immune system), osteoporosis, muscle mass/fat distribution changes, and microbial translocation in the gut (leaky gut).[39] As one can see, the list covers almost the entire body, and in the medical literature (as with the sociology studies) this has been referred to as 'accelerated ageing'. However, despite the apparent consensus on the lower-than-average life expectancy, uncertainties exist around the possible mechanisms of the medical 'accelerated ageing'.[40] Currently in the literature, particular prominence is given to the theory that chronic inflammation, via long-term activation of the immune system, has overarching damaging effects on an HIV-positive body.[41] Hence it has been argued that, in order to incorporate this new understanding of the mechanisms of HIV and ageing, there needs to be a move away from focussing solely on the amount of virus in a patient's blood (normally termed viral load, with success being deemed once levels become undetectable) and CD4 cell count (the main immune cells that HIV infects and

Dynamic consistency of expected utility under non-classical (quantum) uncertainty

A most interesting result is that the von Neumann-Lüders postulate, which is central to Quantum Mechanics and describes the impact of a measurement on the state of a system, can be derived from a consistency requirement on choice behavior. When the belief-state (cognitive state) is updated according to the postulate, the agent's conditional preferences reflect a single preference order. In order to establish that result we had to confine ourselves to a restricted class of preferences, i.e., those satisfying our axioms. This restriction is needed because the concept of a conditional lottery is not well-defined for general quantum lotteries. It is, however, well-defined for Hermitian operators, which represent quantum lotteries satisfying our axioms. Interestingly, we find that, in contrast with classical subjective expected utility theory, the dynamic consistency of preferences does not entail so-called recursive dynamic consistency. This distinction is an expression of the fundamental difference between the two settings, namely that the resolution of uncertainty depends on the operation(s) performed to resolve it.
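For reference, the von Neumann-Lüders postulate in its standard textbook form (not quoted from the paper) states that when a measurement outcome associated with the orthogonal projector P is obtained on a system in state ρ, the post-measurement state is

\[
\rho \;\longmapsto\; \frac{P\rho P}{\operatorname{tr}(P\rho)},
\]

the quantum counterpart of conditioning a belief state on the observed event.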

Conditional Expected Utility

The paper unfolds as follows. The formal setting is described in Section 2. In Section 3, we face the conceptual problem of extending the meaning of conditional expected utility outside the realm of SEU theory and Bayesian updating. Definition 1 of Section 3 gives our solution. We conclude that section by giving an example of a preference relation of Bewley's type (thus not SEU) which satisfies the criteria of our definition. In Section 4, we fully characterize those Conditional EU preferences which are, in addition, monotone and C-independent. In Section 5, we present a set of results which parallel those of Fishburn [6, pp. 19-23] and that, in fact, have a broader range of applicability. We discuss the relation between our assumptions and those of Fishburn [6, pp. 19-23] in Section 6.

Expected utility without full transitivity

something resembling the following rankings of specific probability distributions. Consider the distributions p = (0, 1, 0) and q = (0.1, 0.89, 0.01) on the one hand, and the pair of distributions p′ = (0, 0.11, 0.89) and q′ = (0.1, 0, 0.9) on the other. Many subjects appear to rank p as being better than q and q′ as being better than p′. This is inconsistent with classical expected utility theory because, for any function U : X → [0, 1] such that U(5) = 1 and U(0) = 0, p ≻ q entails U(1) > 0.1 + 0.89 U(1) and, thus, U(1) > 10/11, whereas q′ ≻ p′ implies 0.1 > 0.11 U(1) and, thus, U(1) < 10/11. If, however, a generalized expected-utility criterion that allows for incompleteness is employed and the subjects are given the option of treating two distributions as non-comparable, it may very well be the case that, according to the decisions of some experimental subjects, p is ranked higher than q and p′ and q′ are non-comparable (or p and q are non-comparable and q′ is preferred to p′, or both pairs are non-comparable). Of course, the so-called paradox persists if the original rankings are retained even in the presence of a non-comparability option. But it seems to us that the use of an incomplete generalized expected-utility criterion may considerably reduce the instances of dramatically conflicting pairwise rankings without abandoning the core principles of expected-utility theory altogether.
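A quick numerical check of this inconsistency, following the excerpt's normalization U(5) = 1 and U(0) = 0 and reading each distribution as (P(5), P(1), P(0)):

```python
# Scan candidate values of U(1) and verify that none supports both modal
# Allais-type choices simultaneously. Distributions are (P(5), P(1), P(0)),
# with U(5) = 1 and U(0) = 0 as in the excerpt.

def eu(dist, u1):
    p5, p1, p0 = dist
    return p5 * 1.0 + p1 * u1 + p0 * 0.0

p,  q  = (0.0, 1.0, 0.0),  (0.1, 0.89, 0.01)
pp, qp = (0.0, 0.11, 0.89), (0.1, 0.0, 0.9)    # p' and q'

consistent = [u1 / 1000 for u1 in range(1001)
              if eu(p, u1 / 1000) > eu(q, u1 / 1000)       # p preferred to q
              and eu(qp, u1 / 1000) > eu(pp, u1 / 1000)]   # q' preferred to p'
print("values of U(1) supporting both choices:", consistent)  # -> []
```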

Non-additivity in Accounting Valuation: Theory and Applications

Efficiency_{i,t} = EV_{i,t} / \widehat{EV}_{i,t},   (13b)

where EV_{i,t} is the current enterprise value for firm-year observation i at the end of fiscal year t and \widehat{EV}_{i,t} is the predicted enterprise value given the estimated Choquet capacities and firm-year observation i's assets for a large (small) type of firm-year observation. Efficiency captures the market assessment of each firm-year observation's specific productive efficiency relative to the average ability of firms in the industry to combine assets to generate economic benefits. This efficiency is likely to be persistent, because learning how to combine assets efficiently to generate economic benefits takes time. Lev et al. (2009) document that organization capital is associated with five years of future operating and financial performance. They refer to Evenson and Westphal (1995, p. 2237) to define organization capital as "the knowledge used to combine human skills and physical capital into systems for producing and delivering want-satisfying products". We reason that our measure of productive efficiency based on the non-additive approach is likely to be correlated with organization capital, i.e., the ability to "convert resources into outputs" resulting in "supernormal performance" (Lev et al., 2009). Therefore, productive efficiency should also be positively associated with future realized operating performance.
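A minimal sketch of the non-additive (Choquet) aggregation underlying this approach, with an illustrative capacity and asset values rather than the paper's estimates:

```python
# Choquet integral of asset values with respect to a capacity: the non-additive
# aggregation underlying the approach above. The capacity and the asset values
# are illustrative assumptions, not the paper's estimated Choquet capacities.

def choquet(values, capacity):
    """Choquet integral of `values` (dict: asset -> value >= 0) with respect to
    `capacity`, which maps frozensets of assets to [0, 1]."""
    assets = sorted(values, key=values.get)      # ascending by value
    integral, previous = 0.0, 0.0
    for i, a in enumerate(assets):
        upper = frozenset(assets[i:])            # assets worth at least values[a]
        integral += (values[a] - previous) * capacity(upper)
        previous = values[a]
    return integral

values = {"tangible": 60.0, "intangible": 30.0, "financial": 10.0}

def capacity(subset):
    # Super-additive on {tangible, intangible}: combining them is worth more
    # than the sum of the parts (complementarity between asset classes).
    weights = {"tangible": 0.5, "intangible": 0.2, "financial": 0.3}
    base = sum(weights[a] for a in subset)
    bonus = 0.2 if {"tangible", "intangible"} <= subset else 0.0
    return min(1.0, base + bonus)

print("Choquet-aggregated enterprise value:", choquet(values, capacity))
```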

Conditional marginal expected shortfall

adaptive k̄-values are obtained by plotting the estimates as a function of k̄, whereafter k̄ is selected by a stability criterion as described in Goegebeur et al. (2019). In Figure 10 we show the approximate pointwise 95% confidence intervals for θ_{k/n} with k/n = 1% (left) and k/n = 10% (right) as a function of time, at the above considered location. Note that the confidence intervals seem reasonable and are, e.g., wider for θ_{k/n} with k/n = 1% than for k/n = 10%, as expected. At a few x_0 positions we could not obtain a confidence interval, either due to a negative estimate

Non-wellfounded proof theory for (Kleene+action)(algebras+lattices)

Kleene algebra. In our sequent system, called LKA, proofs are finitely branching, but possibly infinitely deep (i.e. not wellfounded). To prevent fallacious reasoning, we give a simple validity criterion for proofs with cut, and prove that the corresponding system admits cut-elimination. The difficulty in the presence of infinitely deep proofs consists in proving that cut-elimination is productive; we do so by using the natural interpretation of regular expressions as data types for parse-trees [15], and by giving an interpretation of proofs as parse-tree transformers. Such an idea already appears in [18], but in a simpler setting, for a finitary natural deduction system rather than for a non-wellfounded sequent calculus.
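The 'regular expressions as data types for parse-trees' reading can be illustrated with a small sketch, assuming nothing beyond that interpretation (union as a tagged choice, concatenation as a pair, star as a list); none of the LKA proof-theoretic machinery is modelled here.

```python
# Parse-trees as the data type denoted by a regular expression:
# e + f -> tagged choice (Inl / Inr), e . f -> pair, e* -> list.
# Purely illustrative; the LKA sequent system itself is not modelled.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:                 # parse-tree for a single letter
    letter: str

@dataclass
class Inl:                  # left branch of e + f
    tree: "Tree"

@dataclass
class Inr:                  # right branch of e + f
    tree: "Tree"

@dataclass
class Pair:                 # parse-tree for e . f
    left: "Tree"
    right: "Tree"

@dataclass
class Star:                 # parse-tree for e*: a list of parse-trees for e
    trees: List["Tree"]

Tree = Union[Leaf, Inl, Inr, Pair, Star]

def word(t: Tree) -> str:
    """The word parsed by a parse-tree."""
    if isinstance(t, Leaf):
        return t.letter
    if isinstance(t, (Inl, Inr)):
        return word(t.tree)
    if isinstance(t, Pair):
        return word(t.left) + word(t.right)
    return "".join(word(s) for s in t.trees)

# A parse-tree for the word "aab" against the expression a* . b
t = Pair(Star([Leaf("a"), Leaf("a")]), Leaf("b"))
print(word(t))  # -> aab
```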

Infinets: The parallel syntax for non-wellfounded proof-theory

sult, the logical correctness of circular proofs becomes non-local, much in the spirit of correctness criteria for proof-nets [19, 13]. However, the structure of non-wellfounded proofs has to be further investigated: the present work stems from the observation of a discrepancy between the sequential nature of sequent proofs and the parallel structure of threads. An immediate consequence is that various proof attempts may have the exact same threading structure but differ in the order of inference rule applications; moreover, cut-elimination is known to fail with more expressive thread conditions. This paper proposes a theory of proof-nets for µMLL∞ non-wellfounded proofs. Organization of the paper. In Section 2, we recall the necessary background from [3] on linear logic with least and greatest fixed points and its non-wellfounded proofs; we only present the unit-free multiplicative setting, which is the framework in which we will define our proof-nets. In Section 3 we adapt Curien's proof-nets [11] to a very simple extension of MLL, µMLL∗, in which fixed-point inferences are unfoldings and only wellfounded proofs are allowed; this allows us to set the first definitions of proof-nets and extend the correctness criterion, sequentialization and cut-elimination to this setting, but most importantly it sets the proof-net formalism that will be used for the extension to non-wellfounded derivations. Infinets are introduced in Section 4 as an extension of the µMLL∗ proof-nets of the previous section. A correctness criterion is defined in Section 5 and shown to be sound (every proof-net obtained from a sequent (pre-)proof is correct). The completeness of the criterion (i.e. the sequentialization theorem) is addressed in Section 6. We quotient proofs differing in the order of rule application in Section 7 and give a partial cut-elimination result in Section 8. We conclude in Section 9 and comment on related works and future directions. Notation. For any sequence S, let Inf(S) be the terms of S that appear infinitely often in S. Given a finite alphabet Σ, Σ∗ and Σ^ω are the sets of finite and infinite

Non-abelian Hodge theory and some specializations

(3) The category of polystable λ-flat bundles of rank r with vanishing Chern classes; these are related by pluri-harmonic metrics, and are therefore equivalent to the category of harmonic bundles of rank r. There are many generalizations of this correspondence; a natural one takes non-compact varieties as base varieties, and the resulting correspondence is due to Simpson [Sim90], Biquard [Biq97], Jost–Zuo [JZ97], Mochizuki [Moc06, Moc09] and others. Other generalizations include taking real Lie groups as structure groups, or considering fields of positive characteristic or p-adic fields as base fields [BGPiR03, GPGiR09, OV07, Fal05, AGT16]. We do not claim to give further details on these topics here.

Convex resource theory of non-Gaussianity

Continuous-variable systems realized in quantum optics play a major role in quantum information processing, and it is also one of the promising candidates for a scalable quantum computer. We introduce a resource theory for continuous-variable systems relevant to universal quantum computation. In our theory, easily implementable operations—Gaussian operations combined with feed-forward—are chosen to be the free operations, making the convex hull of the Gaussian states the natural free states. Since our free operations and free states cannot perform universal quantum computation, genuine non-Gaussian states—states not in the convex hull of Gaussian states—are the necessary resource states for universal quantum computation together with free operations. We introduce a monotone to quantify the genuine non-Gaussianity of resource states, in analogy to the stabilizer theory. A direct application of our resource theory is to bound the conversion rate between genuine non-Gaussian states. Finally, we give a protocol that probabilistically distills genuine non-Gaussianity—increases the genuine non-Gaussianity of resource states—only using free operations and postselection on Gaussian measurements, where our theory gives an upper bound for the distillation rate. In particular, the same protocol allows the distillation of cubic phase states, which enable universal quantum computation when combined with free operations. DOI: 10.1103/PhysRevA.97.062337
