Event Diary
CRiSM Seminar
Alexey Koloydenko & Juri Lember (Joint Talk), University of Nottingham
Adjusted Viterbi Training for Hidden Markov Models
The Expectation Maximisation (EM) procedure is a principal
tool for parameter estimation in hidden Markov models (HMMs). In
applications, however, EM is sometimes replaced by Viterbi training,
or extraction (VT). VT is computationally less intensive and more stable,
and has greater intuitive appeal, but VT estimation is biased and fails the
following fixed point property: given a hypothetical, infinitely large
sample and initialisation at the true parameters, VT will generally move
away from those values. We propose adjusted Viterbi training (VA), a new
method that restores the fixed point property and thus alleviates the
overall imprecision of the VT estimators, while preserving the
computational advantages of the baseline VT algorithm. Simulations show
that VA appreciably improves estimation precision both in the
special case of mixture models and in more general HMMs.
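To make the baseline concrete, the following is a minimal sketch (not the speakers' implementation) of one round of Viterbi training for a discrete HMM: a dynamic-programming Viterbi decode to obtain the MAP hidden path, followed by hard-count re-estimation along that path. All function names and the toy parameters are illustrative.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden path (MAP alignment) by dynamic programming in log space."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for s at time t
            prev = max(states, key=lambda r: V[t - 1][r] + math.log(trans_p[r][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

def vt_step(obs, path, states, symbols):
    """One VT re-estimation: hard counts along the MAP path (EM would instead
    use soft posterior expectations, which is the source of VT's bias).
    Start probabilities are omitted here for brevity."""
    trans = {r: {s: 1e-6 for s in states} for r in states}   # small floor avoids log(0)
    emit = {s: {o: 1e-6 for o in symbols} for s in states}
    for t in range(len(path) - 1):
        trans[path[t]][path[t + 1]] += 1
    for s, o in zip(path, obs):
        emit[s][o] += 1
    norm = lambda d: {k: v / sum(d.values()) for k, v in d.items()}
    return ({r: norm(trans[r]) for r in states},
            {s: norm(emit[s]) for s in states})
```

Iterating `viterbi` and `vt_step` to convergence is the VT scheme discussed above; even when initialised at the true parameters, the hard-assignment counts generally re-estimate away from them, which is the fixed point failure the VA correction addresses.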
We will discuss
the main idea of adjusted Viterbi training and touch on tools developed
specifically to analyse the asymptotic behaviour of maximum a
posteriori (MAP) hidden paths, also known as Viterbi alignments. Our VA
correction is analytic and relies on infinite Viterbi alignments and
their associated limiting probability distributions. While these limiting
measures are explicit in the special case of mixture models, their
existence is not obvious for more general HMMs. We will conclude by
presenting a result that, under certain mild conditions, general
(discrete-time) HMMs do possess the limiting distributions required for
the construction of VA.