
APTS CIS Lab 2

The first problem sheet contained several problems, and you are welcome to continue working on those; this sheet contains just a couple of simple questions to give you a chance to try some Markov chain-based problems.

  1. A warm-up, which also appeared in the preliminary material; if you've never implemented something like this before, this might be a useful first step.

    In a simplified model of the game of Monopoly, we consider the motion of the piece around a loop of 40 spaces. We can model this as a Markov chain with a state space consisting of the integers \(0,\dots,39\), in which the transition kernel adds the total of two six-sided dice to the current state, modulo 40, to obtain the new state.

    (a) Implement a piece of R code which simulates this Markov chain. (A minimal starter sketch for parts (a) and (b) is given after part (e).)

    (b) Run the code for a large number of iterations, say \(100{,}000\), and plot a histogram of the states visited.

    (c) Based on the output of the chain, would you conjecture that there is an invariant distribution for this Markov chain? If so, what is it?

    (d) Write down the transition kernel mathematically.

    (e) Check whether any distribution you conjectured in part (c) is invariant for the Markov kernel you wrote down in part (d).
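
    For parts (a) and (b), a minimal sketch along the following lines may be a useful starting point; the variable names, starting square and plotting choices are just suggestions.

    ```r
    set.seed(1)

    n_iter <- 100000            # number of moves to simulate
    states <- integer(n_iter)   # storage for the visited states
    x      <- 0L                # arbitrary starting square

    for (t in seq_len(n_iter)) {
      roll <- sum(sample(1:6, 2, replace = TRUE))  # total of two six-sided dice
      x    <- (x + roll) %% 40                     # move around the 40-square loop
      states[t] <- x
    }

    # One bar per square
    hist(states, breaks = seq(-0.5, 39.5, by = 1),
         main = "States visited", xlab = "Square")
    ```

    With this many iterations the histogram should look roughly flat, which may suggest a conjecture for part (c).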

  2. An actual Gibbs Sampler.

    Recall the Poisson changepoint model discussed in lectures, and on pp. 21-22 of the supporting notes, and think about the following closely related model: observations \(y_1,\dots,y_n\) comprise a sequence of \(M\) iid \({\textsf{N}\left( \mu_1,1 \right)}\) random variables followed by a second sequence of \(n-M\) iid \({\textsf{N}\left( \mu_2,1 \right)}\) random variables. \(M\), \(\mu_1\) and \(\mu_2\) are unknown. The prior distribution over \(M\) is a discrete uniform distribution on \(\{1,\dots,n-1\}\) (so that there is at least one observation from each component). The prior distribution over \(\mu_i\) (\(i=1,2\)) is \({\textsf{N}\left( 0,10^2 \right)}\). The three parameters are treated as a priori independent.

    (a) Write down the joint density of \(y_1,\dots,y_n,\mu_1,\mu_2\) and \(M\), and obtain the posterior distribution of \(\mu_1\), \(\mu_2\) and \(M\), up to proportionality, in as simple a form as you can.

    (b) Find the “full conditional” distributions of \(\mu_1\), \(\mu_2\) and \(M\) (i.e. the conditional distribution of each of these variables given all of the others).

    (c) Implement a Gibbs sampler making use of these full conditional distributions in order to target the posterior distribution obtained in part (a). (A rough sketch of one possible implementation is given at the end of this sheet.)

    (d) Simulate some data from the model for various parameter values and test your Gibbs sampler.

    (e) How might you extend this algorithm if, instead of a changepoint model, you had a mixture model in which every observation is drawn from a mixture, i.e. \[Y_1,\dots,Y_n \overset{\text{iid}}{\sim}p \textsf{N} \left(\cdot ; \mu_1,1 \right) + (1-p) \textsf{N} \left(\cdot ; \mu_2,1 \right)?\] (The likelihood is now \(\prod_{i=1}^n [p \textsf{N} \left(y_i ; \mu_1,1 \right) + (1-p) \textsf{N} \left(y_i ; \mu_2,1 \right)]\), with \(p\), \(\mu_1\) and \(\mu_2\) unknown, and \(M\) no longer a parameter of the model.)

      Consider the following:

      1. The prior distribution over \(p\).

      2. Any other variables you may need to introduce.

      3. The resulting algorithm.

      If you have time, implement the resulting algorithm and apply it to some simulated data. (One possible sketch is given below.)
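
    For parts (c) and (d) of the changepoint question, a rough sketch along the following lines may be a useful starting point. The conjugate updates it uses are those implied by the stated priors, but you should check them against your own answer to part (b) before relying on them; the parameter values used to simulate the data are arbitrary.

    ```r
    set.seed(2)

    ## Simulate some data (part (d))
    n       <- 100
    M_true  <- 40
    mu_true <- c(-1, 2)
    y <- c(rnorm(M_true, mu_true[1], 1), rnorm(n - M_true, mu_true[2], 1))

    n_iter <- 5000
    draws  <- matrix(NA, n_iter, 3, dimnames = list(NULL, c("mu1", "mu2", "M")))

    ## Initial values and prior precision for mu1, mu2
    mu1 <- 0; mu2 <- 0; M <- floor(n / 2)
    prior_prec <- 1 / 10^2

    for (t in seq_len(n_iter)) {
      ## mu1 | y, M and mu2 | y, M: normal-normal conjugate updates
      prec1 <- M + prior_prec
      mu1   <- rnorm(1, sum(y[1:M]) / prec1, sqrt(1 / prec1))
      prec2 <- (n - M) + prior_prec
      mu2   <- rnorm(1, sum(y[(M + 1):n]) / prec2, sqrt(1 / prec2))

      ## M | y, mu1, mu2: a discrete distribution on 1, ..., n - 1
      logw <- sapply(1:(n - 1), function(m)
        sum(dnorm(y[1:m], mu1, 1, log = TRUE)) +
        sum(dnorm(y[(m + 1):n], mu2, 1, log = TRUE)))
      w <- exp(logw - max(logw))   # stabilise before normalising
      M <- sample(1:(n - 1), 1, prob = w)

      draws[t, ] <- c(mu1, mu2, M)
    }

    ## Quick look at the output
    par(mfrow = c(1, 3))
    plot(draws[, "mu1"], type = "l"); plot(draws[, "mu2"], type = "l")
    hist(draws[, "M"], breaks = seq(0.5, n - 0.5, by = 1))
    ```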
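
    For the mixture extension in part (e), one common approach (not the only one) introduces latent allocation variables \(z_1,\dots,z_n\) indicating which component each observation came from, and places a Beta prior on \(p\); the Beta\((1,1)\) prior below is an assumption made purely for illustration, and the sketch should be treated as an outline of the data-augmentation idea rather than a complete answer.

    ```r
    set.seed(3)

    ## Simulated mixture data (arbitrary parameter values)
    n <- 200; p_true <- 0.3; mu_true <- c(-1, 2)
    z_true <- rbinom(n, 1, p_true)                 # 1 = first component
    y <- rnorm(n, ifelse(z_true == 1, mu_true[1], mu_true[2]), 1)

    n_iter <- 5000
    draws <- matrix(NA, n_iter, 3, dimnames = list(NULL, c("p", "mu1", "mu2")))
    p <- 0.5; mu1 <- -0.5; mu2 <- 0.5
    prior_prec <- 1 / 10^2

    for (t in seq_len(n_iter)) {
      ## z_i | rest: Bernoulli, with probability proportional to p * N(y_i; mu1, 1)
      w1 <- p * dnorm(y, mu1, 1)
      w2 <- (1 - p) * dnorm(y, mu2, 1)
      z  <- rbinom(n, 1, w1 / (w1 + w2))

      ## p | z: Beta update under the assumed Beta(1, 1) prior
      p <- rbeta(1, 1 + sum(z), 1 + n - sum(z))

      ## mu1, mu2 | y, z: the same conjugate normal updates as in the changepoint sketch
      prec1 <- sum(z) + prior_prec
      mu1   <- rnorm(1, sum(y[z == 1]) / prec1, sqrt(1 / prec1))
      prec2 <- (n - sum(z)) + prior_prec
      mu2   <- rnorm(1, sum(y[z == 0]) / prec2, sqrt(1 / prec2))

      draws[t, ] <- c(p, mu1, mu2)
    }
    ```

    Note that the mixture likelihood is unchanged by swapping the labels of the two components, so the raw output may exhibit label switching; whether and how to deal with this is worth thinking about.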