Game Theory and Consequentialism
 “Charity: Altruism or Cooperative Egoism?” in E.S. Phelps (ed.), Altruism, Morality, and Economic Theory (New York: Russell Sage Foundation, 1975), pp. 115–131.
PDF copy (to be read onscreen or printed in LANDSCAPE mode)
The Core and Equilibrium through the Looking-Glass, Australian Economic Papers 16 (1977), 211–218;
Voluntary Contracts and Jam in the Far Future, Australian Economic Papers 17 (1978), 363–364.
Abstract: In a game in extensive form, a natural subgame arises at each node of the game tree, with the players’ strategies restricted to ensure that node is attained. A solution to the game may not be consistent with the solutions to these subgames. The core of the simplest possible two-period exchange economy is not consistent with the core of the second-period economy. Considering just equilibria of the normal form of the game, as the core does, neglects this aspect of a dynamic game, which arises whenever players cannot (unlike the White Queen in Through the Looking-Glass) “remember” the future as well as the past.

Aspects of Rationalizable Behavior in K. Binmore, A.P. Kirman, and P. Tani (eds.) Frontiers of Game Theory (Cambridge, Mass.: MIT Press, 1993), ch. 14, pp. 277–305.
Abstract: Equilibria in games involve common “rational” expectations, which are supposed to be endogenous. Apart from being more plausible, and requiring less common knowledge, rationalizable strategies may be better able than equilibria to capture the essential intuition behind both correlated strategies and forward induction. A version of Pearce’s “cautious” rationalizability allowing correlation between other players’ strategies is, moreover, equivalent to an iterated procedure for removing all strictly dominated and some weakly dominated strategies. Finally, as the effect of forward induction in subgames helps to show, the usual description of a normal form game may be seriously inadequate, since other considerations may render implausible some otherwise rationalizable strategies.
PDF file of preprint
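The iterated procedure mentioned in the abstract — removing dominated strategies round by round — can be conveyed in a small sketch. This toy version (the function name and game encoding are invented here for illustration) checks only domination by pure strategies, whereas full rationalizability also requires checking domination by mixed strategies:

```python
def iterated_strict_dominance(u1, u2):
    """u1[i][j], u2[i][j]: payoffs to players 1 and 2 when player 1
    plays row i and player 2 plays column j.  Returns the row and
    column indices surviving iterated deletion of pure strategies
    strictly dominated by another pure strategy."""
    rows = list(range(len(u1)))
    cols = list(range(len(u1[0])))
    changed = True
    while changed:
        changed = False
        for i in rows[:]:   # delete strictly dominated rows
            if any(all(u1[k][j] > u1[i][j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in cols[:]:   # delete strictly dominated columns
            if any(all(u2[i][k] > u2[i][j] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols
```

For the Prisoner’s Dilemma with rows and columns ordered (Cooperate, Defect), only the pair (Defect, Defect) survives; in a game with no dominated strategies, such as Matching Pennies, everything survives.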

Elementary Non-Archimedean Representations of Probability for Decision Theory and Games in P. Humphreys (ed.) Patrick Suppes: Scientific Philosopher, Vol. I: Probability and Probabilistic Causality (Kluwer Academic Publishers, 1994), ch. 2, pp. 25–59.
Abstract: A fundamental problem in extensive form game theory is that, in order to tell whether a player has a better strategy than in a presumed equilibrium, one must know the other players’ equilibrium reactions to a counterfactual deviation with prior probability zero. Past work by Selten and Myerson has considered “trembling-hand” strategies. Here one particular space of “extended” probabilities is proposed and characterized in the following four equivalent ways: (i) as complete conditional probability systems considered by Rényi, Myerson, and others; (ii) as lexicographic hierarchies of probabilities considered by Blume, Brandenburger and Dekel; (iii) as extended logarithmic likelihood ratios considered by McLennan; and (iv) as certain “canonical rational probability functions” which represent trembles directly. However, one wants to describe adequately the joint probability distributions determined by compound lotteries, and also to distinguish all pairs of probability distributions over the consequences of decisions between which, according to the “consequentialist” axioms of decision theory, the agent should not be indifferent. To achieve this, it is shown that an extension to general rational probability functions is needed.
PDF file of preprint

Consequentialism and Bayesian Rationality in Normal Form Games in W. Leinfellner and E. Köhler (eds.) Game Theory, Experience, Rationality. Foundations of Social Sciences, Economics and Ethics. In honor of John C. Harsanyi. (Vienna Circle Institute Yearbook 5) (Kluwer Academic Publishers, 1998), pp. 187–196.
Abstract: The consequentialist hypothesis requires the set of possible consequences of behaviour in any single-person decision tree to depend only on the feasible set of consequences. This implies that behaviour reveals a consequence choice function. Previous work has applied this hypothesis to dynamically consistent behaviour in an (almost) unrestricted domain of finite decision trees. Provided that behaviour is continuous as objective probabilities vary, that there is state independence, and that Anscombe and Aumann’s reversal of order axiom is satisfied, behaviour must be “Bayesian rational” in the sense of maximizing subjective expected utility. Moreover, null events are excluded, so strictly positive subjective probabilities must be attached to all states of the world.
For agents playing a multi-person game, it is not immediately clear that single-person decision theory can be applied because, as Mariotti (1996) has pointed out, changing the decision problem faced by any one player typically changes the entire game and so changes other players’ likely choices in the game. Nevertheless, by applying the consequentialist hypothesis to particular variations of any given game, the force of this objection can be considerably blunted. For any one player i, the variations involve a positive probability that the game changes to another in which player i faces an arbitrary finite decision tree. However, only player i knows whether the game has changed. Also, if the game does change, then only player i has any choice to make, and only player i is affected by whatever decision is taken. In effect, player i is then betting on the other players’ strategy choices in the original game. Moreover, there is no reason for these choices to change because, outside the original game, the other players have no reason to care what happens.
In this way, Bayesian rational behaviour in normal form games can be given a consequentialist justification. There is a need, however, to attach strictly positive probabilities to all other players’ strategies which are not ruled out as completely impossible and so irrelevant to the game. This suggests that strictly positive probabilities should be attached to all other players’ rationalizable strategies, at least — i.e., to all those that are not removed by iterative deletion of strictly dominated strategies.
PDF file of preprint

Consequentialism, Non-Archimedean Probabilities, and Lexicographic Expected Utility in C. Bicchieri, R. Jeffrey and B. Skyrms (eds.) The Logic of Strategy (Oxford University Press, 1999), ch. 2, pp. 39–66.
Abstract: Earlier work (Hammond, 1988a, b) on dynamically consistent “consequentialist” behaviour in decision trees was unable to treat zero probability events satisfactorily. Here the rational probability functions considered in Hammond (1994), as well as other non-Archimedean probabilities, are incorporated into decision trees. As before, the consequentialist axioms imply the existence of a preference ordering satisfying independence. In the case of rational probability functions, those axioms, together with continuity and a new refinement assumption, imply the maximization of a somewhat novel lexicographic expected utility preference relation. This is equivalent to maximization of expected utility in the ordering of the relevant non-Archimedean field.
PDF file of preprint
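What maximizing such a lexicographic expected utility relation amounts to can be illustrated with a small sketch; the encoding of a lexicographic hierarchy as a list of probability dictionaries, and the names below, are illustrative choices rather than the paper’s formalism:

```python
def lex_expected_utility(lps, utility):
    """Expected utility of an act at each level of a lexicographic
    probability system `lps` (a list of probability distributions
    over states, primary level first).  Acts are ranked by comparing
    these vectors lexicographically, as Python list comparison does."""
    return [sum(p[s] * utility[s] for s in p) for p in lps]

# Two acts that tie at the primary level are separated at the
# secondary ("infinitesimal") level:
lps = [{'s1': 1.0, 's2': 0.0},   # primary level: s2 gets probability zero
       {'s1': 0.0, 's2': 1.0}]   # secondary level breaks the tie
f = {'s1': 1.0, 's2': 0.0}       # act f: utility 1 in s1, 0 in s2
g = {'s1': 1.0, 's2': 1.0}       # act g: utility 1 in both states
# lex_expected_utility(lps, f) == [1.0, 0.0]
# lex_expected_utility(lps, g) == [1.0, 1.0], so g is strictly preferred
```

An ordinary (Archimedean) expected utility calculation would rank f and g as indifferent, since they differ only on a null state.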

Non-Archimedean Subjective Probabilities in Decision Theory and Games, Stanford University Department of Economics Working Paper No. 97-038; abbreviated version published in Mathematical Social Sciences 38 (1999), 139–156.
Abstract: To allow conditioning on counterfactual events, zero probabilities can be replaced by infinitesimal probabilities that range over a non-Archimedean ordered field. This paper considers a suitable minimal field that is a complete metric space. Axioms similar to those in Anscombe and Aumann (1963) and in Blume, Brandenburger and Dekel (1991) are used to characterize preferences which: (i) reveal unique non-Archimedean subjective probabilities within the field; and (ii) can be represented by the non-Archimedean subjective expected value of any real-valued von Neumann–Morgenstern utility function in a unique cardinal equivalence class, using the natural ordering of the field.
PDF file of working paper
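A minimal sketch of why infinitesimal probabilities help: with a lexicographic hierarchy of levels standing in for infinitesimals, conditioning on an event that is null at the primary level simply drops down to the first level that gives the event positive probability. The list-of-levels encoding and the function name are assumptions for illustration, and only the leading conditional level is returned rather than a full conditional hierarchy:

```python
def condition(lps, event):
    """Condition a lexicographic probability system `lps` (a list of
    probability dictionaries over states, primary level first) on an
    event (a set of states): use the first level assigning the event
    positive probability.  With ordinary probabilities alone, a
    primary-level-null event would leave conditioning undefined."""
    for p in lps:
        prob_event = sum(p[s] for s in event)
        if prob_event > 0:
            return {s: p[s] / prob_event for s in event}
    raise ValueError("event is impossible at every level")

# The event {'b', 'c'} is null at the primary level, yet conditional
# probabilities are still well defined via the secondary level:
lps = [{'a': 1.0, 'b': 0.0, 'c': 0.0},
       {'a': 0.0, 'b': 0.75, 'c': 0.25}]
# condition(lps, {'b', 'c'}) == {'b': 0.75, 'c': 0.25}
```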

Expected Utility in Non-Cooperative Game Theory in S. Barberà, P.J. Hammond, and C. Seidl (eds.) Handbook of Utility Theory, Vol. 2: Extensions (Boston: Kluwer Academic Publishers, 2004), ch. 18, pp. 982–1063.
Abstract: This sequel to previous chapters on objective and subjective expected utility reviews conditions for players in a non-cooperative game to be Bayesian rational — i.e., to choose a strategy maximizing the expectation of each von Neumann–Morgenstern utility function in a unique cardinal equivalence class. In classical Nash equilibrium theory, players’ mixed strategies involve objective probabilities. In the more recent rationalizability approach pioneered by Bernheim and Pearce, players’ possibly inconsistent beliefs about other players’ choices are described by unique subjective probabilities. So are their beliefs about other players’ beliefs, etc. Trembles, together with various notions of perfection and properness, are seen as motivated by the need to exclude zero probabilities from players’ decision trees. The work summarized here, however, leaves several foundational issues unsatisfactorily resolved.
PDF file of preprint
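Bayesian rationality in the sense used here — choosing a strategy that maximizes subjective expected utility against beliefs about the other players, beliefs that need not be consistent with their actual play — can be sketched as follows (the game encoding and function name are illustrative):

```python
def bayesian_best_responses(utils, beliefs):
    """utils[a][s]: the player's payoff from own strategy a when the
    other players choose profile s; beliefs[s]: subjective probability
    of profile s.  Returns the strategies maximizing subjective
    expected utility, sorted for determinism."""
    eu = {a: sum(beliefs[s] * row[s] for s in beliefs)
          for a, row in utils.items()}
    best = max(eu.values())
    return sorted(a for a, v in eu.items() if v == best)

# Matching Pennies from player 1's viewpoint, with the subjective
# belief that the opponent plays Heads with probability 0.8:
utils = {'H': {'H': 1, 'T': -1}, 'T': {'H': -1, 'T': 1}}
# bayesian_best_responses(utils, {'H': 0.8, 'T': 0.2}) == ['H']
```

Under uniform beliefs both strategies tie, so both are Bayesian rational; that every strategy used with positive probability must be such a best response is what ties this single-person criterion to equilibrium and rationalizability.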

How Restrictive Are Information Partitions? (January 2005 revision).
Abstract: Recently, several game theorists have questioned whether information partitions are appropriate. Bacharach (2005) has considered in particular more general information patterns which may not even correspond to a knowledge operator. Such patterns arise when agents lack perfect discrimination, as in Luce’s (1956) example of intransitive indifference. Yet after extending the state space to include what the agent knows, a modified information partition can still be constructed in a straightforward manner. The required modification introduces an extra set of impossible states into the partition. This allows a natural representation of the agent’s knowledge that some extended states are impossible.
PDF file of preprint

“Time, the Surprise Examination, and Prisoner's Dilemma,” University of Essex, Economics Discussion Paper No. 45 (1972).

“Sophisticated Dynamic Equilibria for Extensive Games” (1982, to be revised).

Beyond Normal Form Invariance: First Mover Advantage in Two-Stage Games with or without Predictable Cheap Talk (2006).
Abstract: Von Neumann (1928) not only introduced a fairly general version of the extensive form game concept; he also hypothesized that only the normal form was relevant to rational play. Yet even in Battle of the Sexes, this hypothesis seems contradicted by players' actual behaviour in experiments. Here a refined Nash equilibrium is proposed for games where one player moves first, and the only other player moves second without knowing the first move. The refinement relies on a tacit understanding of the only credible and straightforward perfect Bayesian equilibrium in a corresponding game allowing a predictable direct form of cheap talk.
PDF file of preprint
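The flavour of the proposed refinement can be conveyed by enumerating the pure Nash equilibria of Battle of the Sexes and picking the first mover’s favourite. The payoff numbers and function names below are standard textbook choices made for illustration, not taken from the paper:

```python
def pure_nash(payoffs):
    """All pure-strategy Nash equilibria of a two-player game encoded
    as {(a, b): (u1, u2)}, with a the first mover's strategy."""
    A = {a for a, b in payoffs}
    B = {b for a, b in payoffs}
    eqs = []
    for a in A:
        for b in B:
            u1, u2 = payoffs[(a, b)]
            if (all(payoffs[(x, b)][0] <= u1 for x in A)
                    and all(payoffs[(a, y)][1] <= u2 for y in B)):
                eqs.append((a, b))
    return eqs

# Battle of the Sexes with the usual textbook payoffs:
bos = {('Ballet', 'Ballet'): (2, 1), ('Ballet', 'Boxing'): (0, 0),
       ('Boxing', 'Ballet'): (0, 0), ('Boxing', 'Boxing'): (1, 2)}

# Of the two pure equilibria, a first mover advantage corresponds to
# selecting the one the first mover prefers:
preferred = max(pure_nash(bos), key=lambda ab: bos[ab][0])
# preferred == ('Ballet', 'Ballet')
```

Normal form invariance alone cannot distinguish the two pure equilibria; the abstract’s point is that the timing of moves, even unobserved, plus a predictable cheap-talk story, can.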