
Feng Liang: Bayesian Learning with Overcomplete Sets

An important problem in statistics is to recover a function or a signal from massive, noisy data. In contrast to the orthonormal bases traditionally used in function estimation, overcomplete (or redundant) representations have been advocated for their flexibility and adaptability. Bayesian methods offer several advantages in learning an overcomplete representation: regularization is specified through priors; inferences on hyperparameters are easily obtained via Markov chain Monte Carlo; and probabilistic outputs provide a full summary of uncertainty in prediction or estimation. Our recent progress on Bayesian inference with overcomplete wavelet dictionaries and reproducing kernel Hilbert spaces will be presented, along with examples.
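As a minimal sketch of the idea that "regularization is specified through priors," the following example fits a signal with an overcomplete (more atoms than observations) dictionary under a Gaussian prior on the coefficients, whose posterior mean has a closed ridge-like form. The dictionary, dimensions, and prior scale here are illustrative assumptions, not the talk's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a redundant cosine dictionary with more atoms than samples.
n, p = 64, 128  # n observations, p atoms (p > n => overcomplete)
t = np.arange(n)
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(p)) / n)  # DCT-like atoms
D /= np.linalg.norm(D, axis=0)                           # unit-norm columns

# Sparse ground-truth coefficients and noisy observations y = D c + eps.
c_true = np.zeros(p)
c_true[[5, 40, 90]] = [2.0, -1.5, 1.0]
sigma = 0.1
y = D @ c_true + sigma * rng.normal(size=n)

# Gaussian prior c ~ N(0, tau^2 I): the prior supplies the regularization,
# and the posterior mean solves (D'D/sigma^2 + I/tau^2) c = D'y/sigma^2.
tau = 1.0
A = D.T @ D / sigma**2 + np.eye(p) / tau**2
c_post = np.linalg.solve(A, D.T @ y / sigma**2)

resid = np.linalg.norm(y - D @ c_post)
print(f"residual norm: {resid:.3f}")
```

Sparsity-inducing priors (e.g., Laplace or spike-and-slab) replace the closed form above with MCMC over the posterior, which is where the hyperparameter inference mentioned in the abstract comes in.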