ST222 Resources

Archived page academic year 2017-18

Lecture times & locations (provisional):

Tuesday 14:00-15:00 MS.01
Friday 12:00-13:00 ARTS-CINEMA
Friday 16:00-17:00 MS.01

Warwick tradition is to start 5 min past the hour and end 5 min before the hour, to give everybody the chance to get to different rooms between lectures. I'll do my best to keep to the end time, with a little bit of wiggle room sometimes.

Lecturer: Dr Julia Brettschneider

Syllabus

Introduction (first lecture(s)): Motivation & examples.
Part I (weeks 1-5): Normative decision theory: Concepts of (conditional) probability and interpretations (classical, axiomatic, geometric, frequentist, subjective), coherence, Dutch books, elicitation of subjective probabilities, expectation and prediction, normative theory of optimal decision making under uncertainty: loss function, EMV, minimax, decision trees, utility, preferences, paradoxes (Allais, Ellsberg), limitations of these approaches.
Part II (weeks 6-7): Normative game theory: Rationality and common knowledge assumption, separability, dominance, Newcomb's paradox, mixed, optimal and worthwhile strategies, value of a game, zero-sum games, equilibrium, incomplete information.
Part III (weeks 8-10): Descriptive decision theory and modifications of normative models: Paradoxes, ambiguity, Kahneman & Tversky's school of heuristics and biases/fallacies, risk attitudes, risk perception, probability distortion functions, prospect theory, risk communication, mathematical/statistical modelling.

Exercises

There will be 5 exercise sheets posted here over the course of term 1. Solutions will be posted by the beginning of term 2 to support your revision.

Exercise sheet 1 Exercise sheet 1 with solutions

Exercise sheet 2 Exercise sheet 2 with solutions

Exercise sheet 3 Exercise sheet 3 with solutions

Exercise sheet 4 Exercise sheet 4 with solutions (3 typos in solutions to prob 3 corrected)

Exercise sheet 5 Exercise sheet 5 with solutions

Assessment and informal quizzes

The final module mark is based 100% on the exam.

  • April Exam: more information about logistics is at the university's exam page, with the specific date being announced (quite late, in my experience) on this subpage.
  • Mock exam paper and mock exam paper solution.
  • Warwick past exam paper collection (includes previous years' ST222 exam papers).
  • The week 6 class test conducted in previous years has been discontinued for logistical reasons and because it had the side effect of giving more weight to the first half of the module. Instead, we will have three informal quizzes to give students an opportunity to monitor their learning in a more comprehensive way. They will come at the end of Part I, Part II and Part III, and will be posted here in Week 6, Week 8 and Week 11 respectively.
Revision and practicing for the exam
  • Exercise sheets are an excellent way to revise the material and to prepare for the exam.
  • The three informal quizzes posted over the course of term 1 are useful for testing your initial understanding while the material is still quite new to you. This can help you stay on top of the material during term. However, they are more superficial than exam questions.
  • Mock exam paper and mock exam paper solution.
  • Warwick past exam paper collection (includes previous years' ST222 exams).
Lecture notes and resources

Parts I and II are largely based on the lecture notes of this module's predecessor (ST114), which will also be posted chapter by chapter in the relevant week below. The notes have been developed by lecturers who previously taught that module, including Jim Smith, Wilfrid Kendall, Jim Griffith, Julia Brettschneider, Jane Hutton, Adam Johansen and Ben Graham.

Some of the examples and theory presented at the blackboard/whiteboard go beyond these notes. Please take your own notes. If you cannot make it to class, please ask a fellow student to share theirs with you.

For Part III, slides and some additional reading material and links to resources will be posted in the relevant week below. For students who like to print slides beforehand, I will try to post preliminary versions of the slides the evening before class.

Weekly summaries (on this website - scroll down)

For each week, I provide the following material on this website:

  • short summaries of the lectures
  • links to resources
  • a bit of study advice
  • answers to questions raised by students

Material relating to week n will be posted by Monday of week n+1 at the latest. If I am sometimes delayed due to illness or travelling, please be patient.

Content, aims and objectives

For the formal description of this module, go back to the main module page. In a nutshell, we want to understand decision making and games. The latter can be understood as two people taking decisions in turn. We will contrast optimal decisions (normative theory) with how real people make decisions in real life (descriptive theory). The latter takes into account the presence of uncertainty, emotions, ambiguity, incomplete information etc. Applications are manifold, including questions about engineering, individual medical treatments, vaccination programmes, insurance, financial investments, business strategies, urban planning, education policies and more.

History and motivation

This module is the answer to the SSLC's request to re-introduce ST114. It has now become a second year module with an extended syllabus including more applications and descriptive theory. The latter has become increasingly relevant for a modern understanding of decision making behaviour in complex, real world situations for humans, that is, for homo sapiens as opposed to homo economicus (or robots, for that matter).

Relationships to other running modules
  • There is a small overlap with Mathematical Economics 1A. However, the latter focusses on games only, whereas ST222 includes only 2 weeks of game theory and devotes the remaining 8 weeks to subjective probability and decision theory. Another difference is the emphasis. ST222 develops two contrasting alternative theories of decision making: the normative and the descriptive approach. We introduce students to the philosophy and elicitation of subjective probabilities, to the psychological perception of randomness and to behaviour under risk and uncertainty, which includes learning about studies conducted with people (including Warwick students). Finally, ST222 uses applications from a wide range of domains including medicine, science, engineering, finance, operational research and everyday life.
  • This module also gives a first taste of the advanced Bayesian decision theory taught in the module ST301.
Prerequisites

ST111 or ST115 is required, because we are using an axiomatic approach to probability. In fact, ST222 provides a good opportunity to develop more intuition for the interpretation and application of probabilities, especially conditional ones.

Disclaimer

While we are hoping that you will benefit from taking this module both academically and personally, the Department of Statistics is not responsible for all decisions you take applying the methods you learned. For example, if you determine that it is not worth revising before the exam, you cannot appeal on grounds of expected utility theory. And if you select the wrong girlfriend/boyfriend, we do not have the capacity to find you a better one. (But we do encourage studying in teams and engaging in group grocery shopping to maximise the number of options in the first place.)


Weekly Summaries 2017


Week 10


Tuesday (L1):
Prospect theory (PT) has two key ingredients: the probability weighting function describes a distortion of probability empirically found in humans. The value function replaces the utility function in EUT, but stand for particular families of functions that emphasise the difference in reactions to losses and gains and the importance of a reference point. Constructing an expression similar to the expected value (but without the same mathematical properties, as they are lost due to probability transformation), we can again formulate decision rules by maximising the value of a prospect. We consider a a class of example experiments of the type seen in Allais' paradox. Common consequence and common ratio. Prospect theory can explain behaviour that could not be explained with EUT. We also discuss how PT can explain some other biases and give examples from the real world (framing effect, weather and stock market).

Friday noon (L2): Epistemological foundation of modelling. Scientific models aim to provide testable frameworks for theories derived from observations (data) in the real world, much as a street map does. They cannot replace the real world, so they are never perfect; otherwise they would have to be just as complex as it, which would defeat their purpose. As Box said: "All models are wrong, some are useful." We look at the process of building models and at different types of data collection. As an example, we look at models for humans, in particular the one used in economics, the homo economicus, and one used in architecture, Corbusier's l'homme moyen. Differences between three types of models are discussed: normative, descriptive and prescriptive.

Friday 4pm (L3): A closer look at how to build models in the context of PT includes the editing phase and the evaluation phase, each consisting of various stages. At each stage, biases can enter, and we can examine how they could be avoided, e.g. through training. The 21st century, with its information overload and fragmented assessment of information through social networks, gives ample opportunity for this, e.g. confirmation bias and the affect heuristic. We look at how PT explains behaviour in some real world examples: disposition effect, cab drivers, equity premium, buying & selling prices, insurance options, endowment effect.

Resources

Week 9

Tuesday (L1): Allais paradox. The preferences hypothesised by Allais (and empirically validated) cannot be explained by any utility function (we show this in class for u(0)=0; exercise sheet 4 has the general case). This is a limitation of EUT. Availability heuristics and knowledge. An example involving paths through rectangular structures shows that students (from Warwick) are more likely to find the correct answer than the non-maths major students from the original study by K&T. The Linda problem and its variations (involving another stereotypical character called Bill) demonstrate how people in studies by K&T give a higher priority to matching different parts of a story than to following the axioms of probability theory. They break the subset rule, suggesting that P(B) is smaller than P(B and F). ST222 students from past years behaved just like the people in those studies. This confusion is called the conjunction fallacy, can be explained by mistaking likely for representative (or credible), and can be observed in other experiments, e.g. involving nested patterns in sequences of coin tosses and forecasting future events.
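
The impossibility result behind the Allais paradox can be checked numerically. The sketch below is my own illustration, assuming the standard Allais lotteries (amounts in millions) and the normalisation u(0)=0: it searches for utilities producing the empirically observed pattern, which expected utility theory rules out, so the search always fails.

```python
import random

def allais_pattern_possible(trials=10_000, seed=0):
    """Search for utilities 0 = u(0) < u(1M) < u(5M) producing the
    empirically observed Allais preferences (A over B, and D over C).
    EU(A) > EU(B) is equivalent to 0.11*u1 > 0.1*u5, i.e. EU(C) > EU(D),
    so expected utility theory forbids the pattern for every such u."""
    rng = random.Random(seed)
    for _ in range(trials):
        u1 = rng.uniform(0.01, 1.0)
        u5 = rng.uniform(u1, 2.0)
        eu_A = u1                      # 1M for sure
        eu_B = 0.1 * u5 + 0.89 * u1    # 0.1: 5M, 0.89: 1M, 0.01: 0
        eu_C = 0.11 * u1               # 0.11: 1M, else 0
        eu_D = 0.10 * u5               # 0.10: 5M, else 0
        if eu_A > eu_B and eu_D > eu_C:
            return True
    return False
```

The function returns False for any number of trials, mirroring the algebraic argument from class.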

Friday noon (L2): Heuristics. In this context, they are shortcuts, tools and approximations used in probability judgement, prediction and decision making under uncertainty. They can help people to come up with answers when time, resources or cognitive capabilities are compromised, but are prone to fallacies and biases. Representativeness is a philosophical concept that can help explain the conjunction fallacy observed in the Linda problem. Her being both a bank teller and a feminist seems more representative of the kind of person we imagine than her being a bank teller only. Representativeness is correlated with frequency, but not that strongly, and it is not always correct to use it as a substitute. Reason-based choice. People like to be able to justify their decisions by basing them on rationales. Empirical studies show they are willing to pay a price to postpone a decision until they have more information, even if the decision may be independent of that information (e.g. the Hawaii vacation example). Base rate neglect. There is a tendency to ignore population proportions when estimating probabilities of events. This is very relevant for witness statements in the courtroom, where the likelihood that a witness statement is true needs to be determined based on both the reliability of the witness and the population proportions.
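
Base rate neglect is easy to quantify with Bayes' rule. The sketch below assumes the usual textbook setup (a witness who asserts the truth with a fixed reliability, for an event with a given base rate); the 15%/80% numbers in the usage note are the well-known cab-problem values, not necessarily those used in lecture.

```python
def posterior_true(base_rate, reliability):
    """P(statement true | witness asserts it), by Bayes' rule.
    base_rate: prior probability of the asserted event;
    reliability: P(witness asserts it | true) = P(witness denies it | false)."""
    num = base_rate * reliability
    den = num + (1 - base_rate) * (1 - reliability)
    return num / den
```

For example, posterior_true(0.15, 0.8) is 12/29, about 0.41: despite the 80%-reliable witness, the low base rate keeps the posterior below one half.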

Friday 4pm (L3): Framing of contingencies. A study with three different preference choices between lotteries, two of which are mathematically equivalent, but one is framed as a compound bet consisting of two stages. This leads to subjects making inconsistent choices. In studies, subjects typically make the same choices as they do in a setup that only consists of the second stage, a phenomenon called the isolation effect. The preferences observed also give evidence for the certainty effect. Modelling. Mathematics is the language of science, the bridge between theory and observation (Hilbert). The objective of prospect theory (by T&K) is to provide a model that maps human behaviour in decision making based on empirical evidence, that is, including deviations from the normative rules of probability. Probability weighting. The main mathematical basis for prospect theory is an S-shaped probability weighting function w capturing the underrating of probabilities close (but not equal) to 1, the overrating of probabilities close (but not equal) to 0, and the lack of differentiation between medium sized probabilities. While there is a continuum of suitable functions, we introduce two common ones (by T&K and by Prelec). Utility function. In the context of prospect theory, utility functions have particular features, including the emphasis on the choice of a reference point and the asymmetry between gains and losses. The shapes of all these functions have been validated in lab experiments by Gonzalez & Wu.
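
The two weighting functions mentioned can be written down directly. A minimal sketch, with illustrative parameter values (gamma = 0.61, alpha = 0.65) that are assumptions, not values taken from the lecture:

```python
import math

def w_tk(p, gamma=0.61):
    """Tversky & Kahneman weighting function; gamma is illustrative."""
    if p == 0:
        return 0.0
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def w_prelec(p, alpha=0.65):
    """Prelec weighting function w(p) = exp(-(-ln p)^alpha)."""
    if p == 0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))
```

Both overweight small probabilities (w(0.01) > 0.01) and underweight large ones (w(0.99) < 0.99), producing the characteristic inverse-S shape, and both fix the endpoints w(0) = 0 and w(1) = 1.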

Resources Week 9:

Study advice for Week 9:

  • Some students wonder what exam questions will be like and are particularly unsure about Part III in this respect. That is not surprising, because this part of the module is quite different from other modules in mathematics and statistics in that it concerns the modelling of real world phenomena (such as human behaviour) based on empirical evidence. Hence part of the task you are set in an exercise may be to build such a model. This is a new aspect.
  • Try to get a feel for the heuristics and biases by identifying where they come into your own life. For example, estimate some frequencies and compare your estimates against the true numbers. If you got this very wrong, why could that be? Availability bias? Anchor effect? Framing effect? Emotions? Base rate neglect? etc.
  • Please tackle the questions on the newest (and last) Exercise Sheet 5.

Comments and questions for Week 9:

  • From more traditional maths modules we are used to models being derived from existing axioms and theorems. The starting point is mathematical beauty, but we hope to capture some reality along the way. Genuine applied mathematics and probability works the other way around: it starts from observations and tries to find a suitable mathematical framework.
  • Do you sometimes postpone a decision because you are waiting for more information, without having checked whether that information would even impact the outcome of your decision? For example, which compelling reason stops you from deciding to work on Exercise Sheet 5 right now?!
Week 8

Tuesday (L1): Random sequences. Students pretend to be coins being flipped and keep a record of what comes up, heads or tails. We are interested in distinguishing sequences generated by humans pretending to be coins from sequences generated by actual coins. How could this be done? A key difference turns out to be the number and length of runs. We derive a formula for the expected number of runs of length r in a sequence of N independent fair coin tosses. Using R, we produce tables for N=200 and N=400 with r varying. We compare the expected numbers to observed numbers in sequences generated by humans and by real coin flips and confirm that runs provide a suitable way to distinguish human generated sequences from randomly generated ones.
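
The lectures used R for the tables; here is a sketch of the same computation in Python. One standard derivation for a fair coin gives E[number of runs of length exactly r] = (N - r + 3) / 2^(r+1) for 1 <= r < N; the brute-force enumeration below confirms it for small N.

```python
from itertools import product

def expected_runs(N, r):
    """Expected number of maximal runs (of either face) of length
    exactly r in N independent fair coin tosses, for 1 <= r < N."""
    return (N - r + 3) / 2 ** (r + 1)

def expected_runs_bruteforce(N, r):
    """Check by enumerating all 2^N sequences (small N only)."""
    total = 0
    for seq in product((0, 1), repeat=N):
        i = 0
        while i < N:
            j = i
            while j < N and seq[j] == seq[i]:
                j += 1           # j - i is the length of the run at i
            if j - i == r:
                total += 1
            i = j
    return total / 2 ** N
```

For N=200 and r=1 the formula gives 50.5 expected runs of length one, roughly half of the (N+1)/2 runs expected overall.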

Friday noon (L2): Random sequences, maximal run length. Derivation of an iterative formula for the distribution of the length of the longest run of heads. Perception. Human perception of randomness does not coincide with the probabilistic definition of randomness. Results of this are the gambler's fallacy and the cluster illusion (in 2d). We confirmed such phenomena in a study with UG students at Warwick. Students on mathematical degrees, however, show less of this human bias. They alternate too much when simulating random sequences of black and white, and the vast majority starts with black (due to being primed with that order).
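
One standard recursion for the longest-run distribution (possibly not the exact iterative formula from the lecture) counts binary strings avoiding k consecutive heads: a(n) = 2^n for n < k, and a(n) = a(n-1) + ... + a(n-k) otherwise. A sketch:

```python
from functools import lru_cache

def p_max_head_run_less_than(n, k):
    """P(longest run of heads in n fair tosses is < k)."""
    @lru_cache(maxsize=None)
    def a(m):
        # a(m) = number of length-m binary strings with no k consecutive heads,
        # by conditioning on the prefix H^j T for j = 0, ..., k-1.
        if m < k:
            return 2 ** m
        return sum(a(m - 1 - j) for j in range(k))
    return a(n) / 2 ** n

def p_max_head_run_equals(n, m):
    """P(longest run of heads is exactly m)."""
    return p_max_head_run_less_than(n, m + 1) - p_max_head_run_less_than(n, m)
```

For k=2 the counts a(n) are the Fibonacci numbers, a familiar sanity check, and the point probabilities sum to one over m = 0, ..., n.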

Friday 4pm (L3): Anchoring bias. Experimental evidence showing that prior exposure to numerical information, or even to seemingly unrelated lines of different lengths, biases people's numerical estimates. Sampling variation: In a repetition of Kahneman & Tversky's (K&T) traditional hospital birth question with Warwick Maths/Stats students, we see that training helps avoid fallacies. Framing effect. K&T's groundbreaking example involving the same question phrased in terms of saving lives rather than counting deaths shows the remarkable difference in preference despite equivalent formulations. Availability bias. An experiment involving finding words with certain endings demonstrates that humans overestimate the frequency of events that are easier to recall. Even if that contradicts the laws of probability! Allais' paradox. The classic paper by French economist Allais criticising the American school's expected utility dogma. His conjecture that people overrate certainty, breaking the independence property, has been empirically validated, including by our studies with Warwick undergraduates. Ellsberg paradox. A brief look at ambiguity avoidance, which is illustrated and empirically demonstrated in his experiment with drawing balls of different colours involving some unspecified proportions. (Do the detailed calculations yourself on exercise sheet 4.)

Resources Week 8:

Study advice Week 8:

  • Try to reproduce the proof of the formula for the expected number of runs. Where did we use independence? Where did we use that the coin is fair? Is there not a problem with overlapping runs? Then the probabilities might not be independent. Why is that not a problem in the calculation?
  • Try to generalise the result. What if the coin is not fair? (This will be a question on Exercise Sheet 5.)
  • Don't forget about Exercise Sheet 4.

Questions & Comments Week 8:

  • How can runs be used to decide how a sequence was generated? We only gave the idea, but this approach can actually be formalised into a statistical test for randomness, the runs test, also known as the Wald-Wolfowitz runs test.
  • Do you believe these biases are real? Try some of these experiments with your friends, room mates and families. Versions of the Mississippi experiment are easy to conduct, and so is the 1x2x3... question.
  • What are the implications of these biases in the real world? Think of surveys, marketing, perceptions of dangers, media reporting, social media etc.
Week 7

Tuesday (L1): Example of dominant moves in a zero-sum game. We show an example of reducing the payoff matrix of a 3x5 zero-sum game by removing moves dominated by others. In this case that leads to a unique solution of the game, though generally that would not be the case. Hand game: Player II puts one coin in the left hand or 2 in the right hand and Player I chooses. No dominant moves, not separable. Considering pure strategies, we can only give rather uninteresting bounds for gain and loss. However, with the introduction of mixed strategies we arrive at an interesting analysis of this game (assuming rational players).

Friday noon (L2): Fundamental theorem for zero-sum games (von Neumann). The payoffs of the maximin mixed strategies for the two players are the same. This justifies calling this quantity the value of the game. Examples. In situations where one of the players has only two moves, the maximin strategies can be determined by a suitable algebraic representation or by simple calculations and graphical methods, which is demonstrated in a few examples.
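
For 2x2 games the value can be computed directly: check for a saddle point, and otherwise use the equalising-strategy formula v = (ad - bc)/(a + d - b - c). A sketch, applied to the hand game under the assumption that the payoff to Player I is the number of coins in the hand they pick if they guess correctly (zero otherwise):

```python
def value_2x2(A):
    """Value (to the row player) of a 2x2 zero-sum game with payoff
    matrix A, allowing mixed strategies."""
    (a, b), (c, d) = A
    # Saddle point check: pure maximin equals pure minimax.
    lower = max(min(a, b), min(c, d))
    upper = min(max(a, c), max(b, d))
    if lower == upper:
        return lower
    # No saddle point: the equalising mixed strategy gives the value.
    return (a * d - b * c) / (a + d - b - c)
```

Under the assumed payoffs, value_2x2([[1, 0], [0, 2]]) gives 2/3: Player I should pick the one-coin hand two thirds of the time.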

Friday 4pm (L3): Proof of the Fundamental theorem. This is a quite complex proof (not examinable), but it gives a taste of how techniques from functional analysis and the smart construction of variations of the game in question can be used in a proof. Pure move equilibrium. A pair of moves is called an equilibrium if neither player would gain from selecting a different move. In a number of examples we determine equilibria by indicating preferences using arrows between states, which helps to spot equilibria easily if present.
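
The arrow method for spotting pure equilibria amounts to a simple check: with payoffs to the row player (Player I, maximising; Player II picks the column and minimises), an entry is an equilibrium exactly when it is minimal in its row and maximal in its column. A sketch:

```python
def pure_equilibria(A):
    """Pure-move equilibria (saddle points) of a zero-sum game with
    payoff matrix A to the row player: entries minimal in their row
    (Player II cannot improve) and maximal in their column
    (Player I cannot improve)."""
    eq = []
    for i, row in enumerate(A):
        for j, x in enumerate(row):
            if x == min(row) and x == max(r[j] for r in A):
                eq.append((i, j))
    return eq
```

The hand game matrix [[1, 0], [0, 2]] has no pure equilibrium, which is why mixed strategies were needed above.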

Resources Week 7:
Study advice in Week 7:
  • Remember that in the zero-sum game notation it is preferable for Player I to have bigger values in the payoff matrix, while Player II prefers smaller values. It's easy to get that wrong when switching back and forth while reducing the matrix using dominance.
  • Check your initial understanding of Part II with this informal quiz. (Solutions will be posted in two weeks.)
  • If you have not done the informal quiz about Part I yet, do so now. Once you have done it, check out the solutions.

Questions and comments in week 7:

  • There is a bit more material on game theory (not examinable) in the printed notes, about Pareto optimal strategies and Nash equilibria (last section of Chapter 7 in the ST114 lecture notes).
Week 6

Tuesday (L1). Summary of decision theory and objectives (retrospective of Part I). The central formula is the EMV optimisation strategy. For this we need the concept of subjective probability. A modification of this approach uses the concept of utility, which allows us to assign a subject and situation dependent value to raw outcomes. Utilities can be linked to preferences via representation theorems. Introduction to game theory. Model for 2-player games with one (simultaneous) move, based on finite numbers of moves for each player and a payoff matrix. As an example, we play RPS and get experience with the uncertainties involved in games.

Friday noon (L2). Prisoner's dilemma. We play 10 rounds of this and observe that both cooperation and competition can happen, depending on who is playing. Often the players stay in the same mode for a while. It is typical that several rounds before an anticipated end of the game series, players who used to play cooperatively start becoming more competitive. Love game. A very traditional economics textbook example, and full of clichés. A nice example, though, for another type of dilemma, again a consequence of the inability of the two players to coordinate their actions. Separability. A mathematical definition for situations where the payoff matrix can be represented in a way that separates the two players. As a consequence we get a simplified formula for optimal moves.

Friday 4pm (L3). Simple example of separability. We derive a condition for a diagonal matrix to be separable. Prisoner's dilemma. A generalised version of the prisoner's dilemma turns out to be separable under certain conditions. This can be derived by considering a suitable system of equations as described in the definition of separability. Dominance. Definition of dominant moves and some examples of how to simplify games using the theorem that rational players play dominant moves. Zero-sum games. Definition, examples, maximin strategy and a paradox. The idea of randomisation leads to the concept of mixed strategies.
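
One hedged reading of separability (assuming, as an illustration, that it means a player's payoff matrix decomposes additively as P[i][j] = f(i) + g(j); the module's precise definition is in the notes) can be tested mechanically: additivity holds exactly when every 2x2 interaction term vanishes.

```python
def is_separable(P, tol=1e-9):
    """Check whether payoff matrix P decomposes as P[i][j] = f(i) + g(j)
    (an illustrative reading of separability, applied to one player's
    payoffs). Additivity holds iff every 2x2 interaction term
    P[i][j] - P[i][0] - P[0][j] + P[0][0] vanishes."""
    for i in range(1, len(P)):
        for j in range(1, len(P[0])):
            if abs(P[i][j] - P[i][0] - P[0][j] + P[0][0]) > tol:
                return False
    return True
```

Under this reading, each player's optimal move can be found without reference to the opponent, which is the "simplified formula" mentioned above.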

Resources Week 6
Study advice for week 6
  • Have a look at the Summary Part I posted above to get an overview of the first 5 weeks. This shows how the central goal in our decision theory unit (a formula for a strategy for an optimal decision) is derived based on the ingredients needed (subjective probability, rationality, utility, preferences).
  • Check your initial understanding of Part I with this informal quiz.
  • Work on Exercise Sheet 4.
Questions and comments in week 6
  • Why would communication in the prisoner's dilemma be sufficient to ensure the players arrive at a better solution for them? Don't they also need to stick to their promises? Yes, spot on. I should have said collaboration.
  • Check out computer simulations of rock-paper-scissors (see link and more on the right-hand side under Part II).
Week 5

Tuesday (L1). Binary relations. The properties Completeness (C), Asymmetry (A), Transitivity (T) and Negative Transitivity (NT) are introduced and their meaning is discussed with an emphasis on real world examples. Completeness essentially means that people need to be able to make up their minds. In practice, it also requires availability of options. Transitivity can get lost when moving from local to global preferences (e.g. grains of sugar in coffee). We show that a person who has non-transitive preferences could end up losing arbitrary amounts of money. This uses the money-pump argument, based on that person's willingness to keep swapping items for a fee. An example of non-transitivity involves multivariate criteria (e.g. design, functions & price of phones).
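
The money-pump argument can be simulated. The agent, items and one-unit fee below are hypothetical illustrations: with the cycle A > B > C > A the amount paid grows linearly in the number of rounds, while a transitive agent stops after at most a couple of swaps.

```python
def money_pump(preferred, start, items, fee, rounds):
    """Simulate the money-pump argument: an agent holding an item pays
    `fee` each time it swaps for an item it strictly prefers.
    preferred(x, y) means x is strictly preferred to y."""
    holding, paid = start, 0.0
    for _ in range(rounds):
        for other in items:
            if preferred(other, holding):
                holding, paid = other, paid + fee
                break
        else:
            break   # no profitable swap left: a transitive agent stops here
    return paid

# Hypothetical agent with cyclic preferences A > B > C > A:
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
paid = money_pump(lambda x, y: (x, y) in cycle, "A", ["A", "B", "C"], 1.0, 30)
```

After 30 rounds the cyclic agent has paid 30 units and is holding the item it started with; doubling the rounds doubles the loss, so it is unbounded.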

Friday noon (L2). Numerical representation. A binary relation with (C), (A) and (NT) is called a preference relation. We ask how this is related to utility. In brief, the concept of a numerical representation means that there is a function from the action space A to the real numbers that preserves the same order. We show that any preference relation on a finite action space has a numerical representation. The same proof can be used for countable action spaces. For continuous action spaces we can come up with a similar proof if an additional condition is fulfilled: there needs to be a countable order dense subset. Numerical representations are not unique.

Friday 4pm (L3). Archimedean (ARCH). A property that captures the concept that an order x>y>z is not necessarily destroyed by mixing a bit of z into x or a bit of x into z. Representation by von Neumann-Morgenstern. With the additional assumptions of (ARCH) and (IND) (independence, see previous lectures), a preference relation can be represented in a specific form involving expectations, and that representation is unique up to positive linear transformations. Lexicographical order. We prove that the lexicographical order does not have a numerical representation. The proof works by constructing an injective map from the real numbers to the rational numbers, which contradicts the real numbers being uncountable.

Resources for Week 5
Study advice for week 5:
  • Work on exercise sheet 3
  • Come up with examples for preferences in real world situations.
  • Try the money pump method on your room mates.

Questions & comments in week 5:

  • Show that the lexicographical order is a preference relationship.
  • So... why does it not have a numerical representation then? (Answer will be given here next week.)
Week 4

Tuesday (L1). Medical test. Considering a modification of last week's medical example, we now have a test that screens for the presence of the disease. This makes the decision tree more complex. The optimal solution depends on both the prevalence and the conditional probabilities describing the reliability of the test. Farmer. Choosing the best crop based on harvest predictions for three different crops under different weather scenarios, representing high, low and medium risk options. No probabilities are given. In that situation, maximin and maximax strategies, representing a pessimist's and an optimist's approach to decision making respectively, are very intuitive. A disadvantage is that they are driven by extremes. St Petersburg paradox. Following the EMV strategy, this turns out to be a game where people should be happy to pay any arbitrary amount of money to play (and potentially gain huge payoffs, albeit at very low probabilities). In reality people do not do so. This is because in reality time and resources are limited, which the model ignores, and because the subjective value of e.g. gaining 1 million pounds is not the same as the subjective value of gaining another 1 million pounds given you already have the first. This can be overcome with the concept of utility, to be discussed on Friday.
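
The St Petersburg computation is short: with a payoff of 2^k for a first head on toss k, every round contributes (1/2^k) * 2^k = 1 to the expected value, so the truncated EMV grows without bound, while (for instance) log utility gives a finite expected utility of 2 ln 2. A sketch:

```python
import math

def st_petersburg_emv(max_rounds):
    """Truncated expected payoff: win 2^k if the first head is on toss k.
    Each term equals 1, so the partial sums diverge."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_rounds + 1))

def st_petersburg_expected_log_utility(max_rounds=200):
    """With log utility the expected utility converges (to 2 ln 2)."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, max_rounds + 1))
```

This previews Friday's point: a concave utility tames the paradox even though the raw expected value is infinite.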

Friday noon (L2). CME and utility. Utility U is introduced as the inverse function of the certainty monetary equivalent of suitable bets. The EMV decision rule can be modified to incorporate utility by replacing the loss function L by U(L). A modified version of the burglary insurance example is used to illustrate this. Preferences. Properties of preference relations such as completeness, transitivity, independence and continuity are discussed and counterexamples are constructed.

Friday 4pm (L3). Shape of utility and risk attitudes. If the (subjective) CME m(p) of a bet b(p) is smaller than the expected value E[b(p)], then the utility is concave and the person is risk averse. The difference is called the risk premium. If they are equal, then the utility is linear and the person is risk neutral. If it is bigger, then the utility is convex and the person is risk seeking. Example: fire insurance. Using a concave utility function (square root) for the house owner and a linear one for the insurer, a range for a deal is constructed. Lottery. Using a convex utility function (square) it is possible to explain why people buy lottery tickets at prices that are otherwise unfair.
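
The CME and risk premium calculations can be sketched directly. The 50:50 bet on 100 below is an invented example, not one from the lectures:

```python
import math

def cme(u, u_inv, outcomes):
    """Certainty monetary equivalent of a bet [(x, p), ...] under utility u:
    the sure amount with the same utility as the bet."""
    expected_utility = sum(p * u(x) for x, p in outcomes)
    return u_inv(expected_utility)

def risk_premium(u, u_inv, outcomes):
    """Expected value minus the CME; positive for a risk-averse agent."""
    ev = sum(p * x for x, p in outcomes)
    return ev - cme(u, u_inv, outcomes)

bet = [(100, 0.5), (0, 0.5)]
sqrt_cme = cme(math.sqrt, lambda y: y * y, bet)   # concave utility
```

For u = sqrt the CME is 25 and the risk premium 50 - 25 = 25 (risk averse); for the convex u(x) = x^2 the premium is negative (risk seeking), matching the lottery example.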

Resources for week 4:
Study advice for week 4:
  • Work on exercise sheet 2.
  • Think of preference relations in real world examples that illustrate properties like completeness, transitivity, independence and continuity. In particular, find examples that do NOT have one (or more) of these properties.
  • Without looking at the lecture notes, draw graphs of convex and concave utility functions and link their shape to risk attitudes using bets.
Questions & comments in week 4:
  • This week we have introduced CME, utility and preferences. The first two are linked by definition. Next week, we will construct a link between utility and preferences (representation theorem).
  • In examples involving tests (e.g. medical), be very careful about what the given probabilities describing the reliability of the test mean (e.g. "probability to get a positive test result given the person has the disease" is NOT the same as "probability to get a positive test result and have the disease").
Week 3

Tuesday (L1). Conditional probability example. Consider the probability of having two daughters conditioned on having one daughter. Compare with the probability of having two daughters conditioned on having one daughter born in the morning. Discussion of why they are different. Expectation. Definition and properties of the expectation, such as linearity. Nonlinear functions cannot be taken out of the expectation without changing it; as an example, we look at a quadratic function. This also leads to a discussion of the variance and the concept of correlation.
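
The two conditional probabilities can be found by enumeration. The sketch below assumes "having one daughter" means "at least one daughter", and that sex and time of birth are independent and uniform over two time slots; under these assumptions the answers are 1/3 and 3/7.

```python
from itertools import product

children = list(product(["girl", "boy"], ["morning", "afternoon"]))
families = list(product(children, repeat=2))   # 16 equally likely families

def p_two_daughters(condition):
    """P(both children are girls | condition), by enumeration."""
    cond = [f for f in families if condition(f)]
    both = [f for f in cond if all(sex == "girl" for sex, _ in f)]
    return len(both) / len(cond)

p1 = p_two_daughters(lambda f: any(sex == "girl" for sex, _ in f))        # 1/3
p2 = p_two_daughters(lambda f: any(c == ("girl", "morning") for c in f))  # 3/7
```

The seemingly irrelevant extra detail ("born in the morning") shrinks the conditioning event and pushes the answer from 1/3 towards 1/2, which is the point of the example.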

Friday noon (L2). Decision models and EMV decision rule. EMV decision rule, reward perspective, decision trees, eye disease treatment decision. A decision rule is an algorithm to determine a decision under uncertainty based on a set D of decision options, a set Chi of outcomes with (subjective) probabilities p_i and a loss function L. One very common decision rule is the expected monetary value strategy (EMV), which defines the optimal decision d* as one that minimises the expected loss E[L(d,X)]. Sometimes it is more natural to phrase the consequences using a reward function R rather than a loss function. No new model is needed for this: we simply define L=-R and obtain that d* maximises the reward. Decisions under uncertainty can be visualised with decision trees. An example about buying burglary insurance leads to a simple rule involving the probability of burglary and the cost of insurance. With more complex and multistage decisions, these trees can get very big. Complex multilevel decision tree example (oil drilling) involving drilling in one of two sites, conducting tests first or doing nothing. All relevant probability estimates are given, but Bayes' rule is needed to derive all the probabilities required to apply the EMV strategy and find the optimal decision.
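
The EMV rule itself is a one-liner over the decision set. The burglary numbers below (contents worth 20000, burglary probability 0.002, premium 50) are invented for illustration; the resulting rule is the simple one mentioned above: insure exactly when probability times value exceeds the premium.

```python
def emv_optimal(decisions, probs, loss):
    """EMV rule: pick the decision d minimising expected loss
    E[L(d, X)] = sum_i p_i * L(d, x_i)."""
    def expected_loss(d):
        return sum(p * loss(d, x) for x, p in probs.items())
    return min(decisions, key=expected_loss)

# Hypothetical burglary-insurance numbers (illustrative only):
p_burgle, value, premium = 0.002, 20000, 50
probs = {"burglary": p_burgle, "no burglary": 1 - p_burgle}

def loss(d, x):
    if d == "insure":
        return premium                      # premium paid; insurer covers the loss
    return value if x == "burglary" else 0  # uninsured: lose contents if burgled
```

Here 0.002 * 20000 = 40 < 50, so the EMV decision is not to insure; raising the burglary probability to 0.004 flips the decision.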

Friday 4pm (L3). Value of information. We look at a decision about drilling for oil in either field A or field B. The decision is made more complex by the additional options to conduct test drills in A or B, which give some evidence, but not certainty, about the presence of oil in these locations. All relevant probability estimates are given, but Bayes' rule is needed to derive all the probabilities required to apply the EMV strategy and find the optimal decision. An interesting concept in this context is the value of information. In fact, there are two alternative definitions. One allows for imperfect information, which is what is typically available in practice. The other is for perfect information, which may not be available, but whose value may still be of interest to calculate; for example, it serves as an upper bound for the value of any source of information, even a hypothetical one. Example of a medical decision involving wait & watch versus prevention.
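The value of perfect information can be computed by comparing the best expected reward now with the expected reward if the state were revealed before acting. A sketch for a stripped-down two-state drilling problem; the payoffs and prior are illustrative, not the lecture's oil example:

```python
# Expected value of perfect information (EVPI) for a two-state,
# two-action problem (illustrative numbers, not the lecture's example).
p_oil = 0.3                      # prior probability the field contains oil
reward = {("drill", "oil"): 500.0, ("drill", "dry"): -100.0,
          ("skip", "oil"): 0.0,  ("skip", "dry"): 0.0}

def exp_reward(action, p):
    return p * reward[(action, "oil")] + (1 - p) * reward[(action, "dry")]

# Best expected reward without further information:
best_now = max(exp_reward(a, p_oil) for a in ("drill", "skip"))

# With perfect information we learn the state first, then act optimally:
with_pi = (p_oil * max(reward[(a, "oil")] for a in ("drill", "skip"))
           + (1 - p_oil) * max(reward[(a, "dry")] for a in ("drill", "skip")))

evpi = with_pi - best_now
print(best_now, with_pi, evpi)   # 80.0, 150.0, and their difference 70.0
```

The EVPI (70 here) is an upper bound: no test drill, however good, is worth paying more than this for.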

Resources for week 3:
Study advice for week 3:
  • Try to build decision models from real-life situations that involve uncertainty. E.g.: Wait for a bus or walk? Buy the bigger package (cheaper per unit, but you may not finish it)? Discuss your models critically and modify them.
  • Work on exercise sheet 2.
  • Get some practice with conditional probabilities. You could find examples in the ST111 notes.
Questions & comments in week 3:
  • Oil drilling example: Can we also have a model including the option to first test drill in A and then test drill in B? Yes, it will be a bigger decision tree again, but it is a perfectly reasonable extension.
  • Is the loss function in the medical example appropriate? Only partly, because it does not address any reduction in quality of life from having to endure the condition until it is bad enough to show visible symptoms. It also does not take into account the risks of the treatment, which is particularly relevant when the treatment is applied to everybody. Decisions in medicine often require considering difficult trade-offs.
Week 2

Tuesday (L1). Two people flipping coins: Construction of a product space modelling two people flipping coins independently. To determine the distribution of the first time both of them obtain heads, we represent this as a waiting time N for a suitable event. This takes us to repeated Bernoulli variables, and N has a geometric distribution. Similarly, we could determine the distribution of the first time at least one of them obtains heads (homework, solution included in notes). Frequency interpretation: It would be intuitive to define probability as the limit of relative frequencies. We give a formal representation of this, which is useful for interpreting the meaning of probability. However, it cannot be used as a definition of probability, because probability would be needed to show the existence of this limit in the first place. Subjective probability: There are situations where probabilities cannot be derived from scientific models (using e.g. geometric properties, most notably symmetry). People have beliefs about probabilities, but they are subjective. Elicitation of such beliefs can be done through bets. This ties quantitative beliefs to behaviour. It implicitly assumes a few things about people, for example their willingness to enter bets and the independence of the elicited probabilities from the size of the payoff (after normalisation), which will be discussed later in more detail.
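The waiting time N above is geometric with success probability 1/4 (both coins show heads in a round), so E[N] = 4. A quick simulation sketch to check this; the helper name and trial count are my own choices:

```python
import random

random.seed(0)

def wait_both_heads():
    """Rounds until two independent fair coins both show heads (Geometric, p = 1/4)."""
    n = 0
    while True:
        n += 1
        if random.random() < 0.5 and random.random() < 0.5:
            return n

trials = 100_000
mean = sum(wait_both_heads() for _ in range(trials)) / trials
print(mean)  # close to the theoretical E[N] = 1/(1/4) = 4
```

The homework variant (first time at least one of them obtains heads) is the same simulation with `or` in place of `and`, giving a geometric waiting time with p = 3/4.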

Friday noon (L2). Elicitation: Subjective probability defined via bets can be elicited using reference models such as spinners and balls in urns. This ties probabilities to concrete and intuitive observations from the physical world. Dutch books: A Dutch book is a collection of bets that can never lose. Colloquially, the term is often used to refer to risk-free opportunities for gains, which is slightly more general, but based on the same idea. In practice, such books can be constructed, by combining bets in a smart way, whenever another person (or a market) works with incoherent probabilities. Rationality: A rational individual must be coherent, that is, his/her probability assignments obey the probability axioms. We give an explicit proof of the rule of the complement using an indirect approach: assume that the individual's probability assignments for an event A and its complement do not add up to 1, and then construct a Dutch book so that the individual would inevitably end up losing money. This contradicts the rationality assumption. More generally, a rational individual also obeys the addition rule (for an outline of the proof see ST114 lecture notes, Theorem 3.2).
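The Dutch book in the complement-rule proof comes down to simple arithmetic, which can be sketched directly; the prices 0.5 and 0.4 are toy numbers of my own, not from the lecture:

```python
# Toy Dutch book against incoherent complement probabilities.
# Someone prices a bet paying 1 if A occurs at their probability for A,
# and likewise for the complement, but the two prices don't sum to 1.
p_A, p_not_A = 0.5, 0.4           # incoherent: they sum to 0.9

cost = p_A + p_not_A              # we buy both bets
# Exactly one of A, not-A occurs, so exactly one bet pays out 1:
profits = {outcome: 1.0 - cost for outcome in ("A", "not A")}
print(profits)  # a sure profit of about 0.10 whatever happens
# Had the prices summed to more than 1, we would sell both bets instead.
```

Either way the incoherent bettor loses for certain, which is the contradiction with rationality the proof needs.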

Friday 4pm (L3). Addition rule for bet-based probabilities: An explicit derivation of the addition rule for subjective probabilities defined by bets. This illustrates very nicely the connection between sets, bets and subjective probabilities, and the relevance of the disjointness of the two sets in establishing the addition rule. Conditional probability: To motivate the use of conditional probabilities in this module we give two reasons. Firstly, they provide a framework for modifying beliefs based on experience expressed by the conditioning event (Bayesian updating), and this can be iterated multiple times (e.g. in machine learning). Secondly, they are used for probabilities in models involving called-off bets. Review of the definition of conditional probability, the total probability theorem and Bayes' rule. Testing for a condition: We calculate the chance that a positive test actually indicates a true result. A small probability for the condition can result in this number being surprisingly small; in other words, a very high false alarm rate (in our numerical example it was larger than 90%). Note that the probability for the condition also varies with the population being tested. Examples of this include tests for rare diseases, drug use, carrying a virus, mutations, and prenatal screening.
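The testing calculation is a direct application of Bayes' rule and the total probability theorem. A sketch with illustrative numbers (prevalence 1 in 1000, sensitivity 99%, specificity 95%; these are my own, not necessarily the lecture's):

```python
from fractions import Fraction

# Positive predictive value via Bayes' rule (illustrative numbers).
prevalence = Fraction(1, 1000)    # P(condition)
sensitivity = Fraction(99, 100)   # P(positive | condition)
specificity = Fraction(95, 100)   # P(negative | no condition)

# Total probability theorem for P(positive):
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
# Bayes' rule for P(condition | positive):
ppv = sensitivity * prevalence / p_pos

print(float(ppv))        # about 0.019: most positives are false alarms
print(float(1 - ppv))    # a false alarm rate above 90%, as in the lecture
```

Despite the 99% sensitivity, fewer than 2% of positives are true here, because the false positives from the large healthy population swamp the rare true positives.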

Resources for week 2:
Study advice for week 2:
  • Try probability elicitation on some of your friends. Are the probabilities you obtain from them consistent? E.g. do they add up to one?
  • Work on exercise sheet 1.
  • Check out the additional examples in the relevant sections of the ST114 lecture notes not covered in lectures.
  • Watch out for random events in the 'real world' around you. Build probability spaces to model them. In particular, suggest alternative (sigma-)algebras reflecting different degrees of knowledge/detail of observation.

Questions & comments in week 2:

  • What is that script F? It is part of our model for a probability space. A probability space is a structure that allows you to describe random events and assign probabilities to them. (As always in maths, we define structures together with operations. For example, a group is a set with an operation defined between its elements.) In probability we start with outcomes forming the outcome space Omega. However, we are often interested not only in individual outcomes but in events, that is, subsets of the outcome space. So we also look at script F, a suitable collection of subsets of Omega. Script F could just be the collection of all subsets (i.e. the power set of Omega), but to be more flexible in situations where we do not want to (or cannot, or need not) distinguish so finely, we could use a coarser collection. For this to make sense some rules are needed (e.g. closure under unions), and so the concept of an algebra came about. A sigma-algebra is just an extension of this to (countably) infinite situations. Now, we mathematically define a probability as a function from script F to the interval [0,1]. Now read the definitions in the notes again to get on top of the details...
  • But what about people who do not bet? The approach of eliciting subjective probabilities through bets implicitly assumes a few things about people, including their willingness to enter bets. It also assumes that people don't mind swapping between equivalent bets. This is just for simplicity, though; the whole theory could be rewritten including small values for the effort of swapping, like transaction costs in finance, or friction in mechanics.
  • What do you mean by independence of the payoff? After normalisation (dividing by m(M,Omega)), the effect of M should be neutralised. There is in fact evidence from psychological studies that people's behaviour does not change drastically as a function of how much money is being offered, as long as it is within a normal range.
  • But wouldn't all people put m(M,Omega)=M? That would be the fair price. Rationally speaking, that is what people should be happy to pay (maybe minus a small amount for the effort of engaging in this hassle in the first place). However, this assumes rationality, and as we will see, that is not always the case.
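The answer about script F above can be made concrete with a tiny example. A sketch in Python of a coarse algebra on a die's outcome space, modelling an observer who only sees even versus odd; the names `Omega`, `even`, `odd` and the helper `is_algebra` are my own illustrations:

```python
# A coarse algebra on Omega = {1,...,6}: the observer only distinguishes
# even from odd, so only these events (plus the trivial ones) are in F.
Omega = frozenset(range(1, 7))
even, odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})
F = {frozenset(), even, odd, Omega}

def is_algebra(F, Omega):
    """Check closure under complement and (pairwise) union."""
    return (frozenset() in F and Omega in F
            and all(Omega - A in F for A in F)
            and all(A | B in F for A in F for B in F))

print(is_algebra(F, Omega))                           # True
print(is_algebra({frozenset(), even, Omega}, Omega))  # False: odd is missing
```

The second collection fails because it is not closed under complement, which is exactly the kind of rule the definition of an algebra enforces.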
Week 1

Tuesday (L1). Taster session. Decisions under uncertainty: Some examples including job offers, sports betting, non-emergency medical treatment, and the St Petersburg paradox. Rational decision making has some, but not full, validity for homo sapiens, the species actually populating this planet, an issue we will pick up again in Part III, which deals with human fallacies in processing risk and uncertainty. One of the oldest such phenomena is the gambler's fallacy, which has been described in connection with huge losses at Monte Carlo in 1913 when black occurred 26 times in a row. As a first example of games we look at a matrix representation for RPS (rock paper scissors).
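The St Petersburg paradox mentioned above can be seen numerically: a fair coin is tossed until the first head, and if that happens on toss n the payoff is 2^n, so the expected payoff diverges. A sketch (the cap on the number of tosses is my own device to show the divergence):

```python
# St Petersburg game: payoff 2^n if the first head appears on toss n,
# which happens with probability (1/2)^n. Each term contributes exactly 1,
# so the expected payoff grows without bound.
def truncated_emv(max_tosses):
    """Expected payoff if the game is capped at max_tosses tosses."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

for cap in (10, 100, 1000):
    print(cap, truncated_emv(cap))   # grows linearly with the cap
```

No finite ticket price is "too much" by the EMV criterion, yet nobody would pay very much to play; this tension is one reason for introducing utility later in Part I.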

Friday noon (L2). Modelling decisions under uncertainty. Key ingredients for models are decision choices, outcomes, and (subjective) probabilities. The goal is to determine optimal values of the outcomes, but what that means depends on the priorities of the decision maker. The mathematical structure we use is the probability space. What is probability? Normative probability (Kolmogorov's axioms) based on set algebras to model events as subsets of outcomes, and probability measures as functions on these. In other words, probability measures are functions on set algebras. Concept of atoms as fundamental events. Rules ensure that our intuition about combinations of events translates into the right probability calculations. Examples involving coins, dice, house prices, the national lottery - check out the amazing story of the Toronto probability professor (Warwick Stats Public Lecture May 2017) discovering systematic lottery fraud...

Friday 4pm (L3). Examples to illustrate the use of axiomatic probability. Two-headed coin: From a box with n-1 normal coins and one two-headed coin you draw a coin at random and toss it three times. It shows three heads. Which coin do you think you drew, and how sure are you? The example is also discussed in Section 2.5.1 of the ST111 Lecture Notes (see r.h.s.). This is a decision problem of the type seen in statistical tests, where the task is to "re-engineer" which random process generated an outcome using likelihood ratios (statistical inference). Average class size:
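The two-headed coin question can be answered exactly with Bayes' rule; a sketch (the helper name is my own):

```python
from fractions import Fraction

# Two-headed coin: from n coins (n-1 fair, one two-headed) draw one at
# random and toss it three times; all three tosses show heads.
def p_two_headed(n):
    """P(drew the two-headed coin | three heads), by Bayes' rule."""
    prior_th = Fraction(1, n)
    like_th = Fraction(1)          # the two-headed coin always shows heads
    like_fair = Fraction(1, 8)     # (1/2)^3 for a fair coin
    num = like_th * prior_th
    return num / (num + like_fair * (1 - prior_th))

print(p_two_headed(9))    # 8/(n+7) = 8/16 = 1/2: with 9 coins it's a toss-up
print(p_two_headed(100))  # 8/107: with many coins, probably still a fair coin
```

The closed form 8/(n+7) shows how the likelihood ratio of 8 (three heads are eight times likelier from the two-headed coin) fights against the prior of 1/n.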

Resources for week 1:
Study advice for week 1:
  • Think about situations where you have to make decisions under uncertainty. Write down all ingredients for a model (options, outcomes, probabilities, values, priorities etc).
  • Review discrete probability theory.
  • Ask why the axioms of probability are set up the way they are, e.g.: Why do we need the empty set to be in an algebra? Why the union? Why are atoms useful? Why does additivity in the definition of probability measures assume pairwise disjoint sets? What can you do if they are not disjoint? Why is there no rule for intersections?
  • Events are represented by subsets of the outcome space. Ask about the interpretation of the set operations, e.g.: What does intersection mean in terms of events? What does union mean? What about the complement?
  • How can we construct a probability measure without knowing the probabilities for all of the outcomes?
  • Tackle exercise sheet 1.
Questions & comments in week 1:
  • How can we generalise the two-headed coin problem from Lecture 3?
  • Why do we need infinite products of algebras? Some random variables are not bounded, for example some waiting times (e.g. first time heads come up, first time we have 5 tails in a row).
  • But it doesn't take forever! Can't we just limit this at, say, 1 million coin tosses? Well, in practice we could approximate this by a finite model, but a pure mathematical model would not do so. While it may be extremely unlikely that a regular coin shows no head in 1 million tosses, the probability is still not zero. Of course, you could obtain arbitrarily good approximations by setting increasingly large upper bounds, and that's a common technique both in practice and in proofs of asymptotic results.
  • Why do we need algebras at all? Can't we just define probability measures on the collection of all subsets of the outcome space? Firstly, in examples without a finite horizon, such as the coin tosses above, the outcome space is the set of all 0-1 sequences. This has the same cardinality as a continuum, e.g. the interval [0,1] or the real numbers. The same is true, for example, of a random number generator, that is, drawing a random number between 0 and 1. A deep result in measure theory says that it is impossible to construct a probability measure on all subsets of the real numbers. This is far(!) beyond the scope of ST222, but if you want to read about this surprising theory, then read about the Banach-Tarski paradox and Vitali sets, and it all starts with cutting chocolate. These lecture notes on the measure-theoretic framework for probability explain the motivation and basics very well. Secondly, there are practical reasons. Even if you were in a situation where you could, in theory, define a probability measure on all subsets of the outcome space, you may not know, in a given example, what the probabilities would be. You may determine the probabilities based on historical data or on beliefs people have, and these may not be available for all possible events. In the stock market, for example, you have precise information about the past and the present, but only rough or no information about the future. Or a person may have knowledge about some events, but not about others. To model situations with such incomplete information, you need to restrict the collection of subsets of the outcome space. But you can't just take any collection: some basic rules are needed (to ensure probability measures can actually be well defined), and these are the properties that make up the definition of algebras (and sigma-algebras).

decisiontree_haroldsplanet.png

Source: Harolds Planet Blog Archive

Participation

ST222 Forum - this is your space

Additional Material

Part I

Lottery retailer scandal: Article from Chance magazine by Prof Jeffrey Rosenthal (U of T)

Bias in coin flipping Stanford News, full paper Prof Diaconis et al

Books on decision science (normative theory)

Probability ST111 lecture notes

Philosophical discussion: "Dutch Book Arguments" by A. Hajek, in The Oxford Handbook of Rational and Social Choice, ed. P. Anand, P. Pattanaik, and C. Puppe. OUP 2008.

Community Blog LessWrong about refining rationality

Part II

Game theory net

UCB Game theory notes by Y. Peres (Chapter 3 and 4 provide examples related to this module)

Play RPS against computer trained on humans

Part III

Behavioural economics in baby steps

Books decision science (descriptive theory/behavioural approach)

Critique of homo economicus (nef)

Essay "Rationality, Self-interest, and Welfare..." by D. Hausman

Vaguely related

Book: Where mathematics comes from by George Lakoff and Rafael Nunez

SSLC feedback

SSLC Feedback 2016/17

Response to final feedback 2015/16

Final module feedback 2015/16

Response to initial feedback 2015/16

Initial module feedback 2015/16

Response to initial feedback 2014/15

Response to final feedback 2014/15

Module summary

2014/15 summary

2016/17 Exam

Exam Paper and Solutions

Exam Feedback