
Shreya Sinha Roy

I am a third-year PhD student working under the supervision of Dr. Ritabrata Dutta, Dr. Richard Everitt, and Prof. Christian Robert. My primary research interests include computational statistics, Bayesian inference, sampling for high-dimensional parameter spaces, and reinforcement learning.

Bayesian Deep Generative Reinforcement Learning:


Bayesian deep RL: The diagram above shows an episodic posterior update followed by a policy update routine for the j-th episode, j = 1, 2, .... We assume the episode length to be τ, and π_t denotes the prior on the model parameters. A prequential scoring rule (S) is used to compute the generalized posterior, which is based on the true interaction data (x) and a simulation (x_t) from the deep generative model (m). We obtain samples of θ from the generalized posterior via Sequential Monte Carlo (SMC) samplers. These samples are used to simulate n interaction trajectories from the model (m). The optimal policy is trained by maximizing the average of the Q-function values computed over the n simulated trajectories. The new policy (μ) is then used to interact with the true environment in the next episode.
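The sketch below is a toy, self-contained illustration of that episodic loop, not the implementation from the paper: the deep generative model is replaced by a one-parameter linear model, the prequential scoring rule by a one-step-ahead squared-error surrogate, the SMC sampler by a simple reweight-and-resample step, and the policy class by a single linear gain k. Names such as env_step, smc_reweight, and policy_update are placeholders introduced here for illustration. Each pass mirrors the caption: interact under the current policy, update the generalized posterior over θ, draw samples, simulate trajectories from the model, and pick the policy maximizing the averaged simulated value.

import numpy as np

rng = np.random.default_rng(0)
tau, n_sim, n_particles, n_episodes = 20, 50, 200, 5  # episode length, simulated trajectories, particles, episodes


def env_step(state, action):
    # Toy true environment: noisy linear dynamics (coefficient 0.9) with quadratic cost.
    nxt = 0.9 * state + action + rng.normal(scale=0.1)
    return nxt, -nxt ** 2


def prequential_score(theta, transitions):
    # Surrogate scoring rule: one-step-ahead squared error of the model's mean prediction.
    s, a, s_next = transitions.T
    return np.sum((s_next - (theta * s + a)) ** 2)


def smc_reweight(particles, weights, transitions, temper=1.0):
    # Surrogate generalized-Bayes update: reweight particles by exp(-temper * score),
    # then resample with jitter when the effective sample size drops.
    scores = np.array([prequential_score(th, transitions) for th in particles])
    weights = weights * np.exp(-temper * (scores - scores.min()))
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(scale=0.01, size=n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)
    return particles, weights


def simulate_return(theta, k):
    # Roll out the toy model under the linear policy a = -k * s; return discounted reward.
    s, total, gamma = 0.0, 0.0, 0.95
    for t in range(tau):
        a = -k * s
        s = theta * s + a + rng.normal(scale=0.1)
        total += gamma ** t * (-s ** 2)
    return total


def policy_update(particles, weights):
    # Pick the policy gain k maximizing the posterior-averaged simulated value (grid search).
    thetas = rng.choice(particles, size=n_sim, p=weights)
    candidates = np.linspace(0.0, 1.5, 16)
    values = [np.mean([simulate_return(th, k) for th in thetas]) for k in candidates]
    return candidates[int(np.argmax(values))]


particles = rng.normal(loc=0.5, scale=0.5, size=n_particles)  # prior draws for theta
weights = np.full(n_particles, 1.0 / n_particles)
k = 0.0  # initial policy gain

for j in range(n_episodes):
    # Interact with the true environment for tau steps under the current policy.
    s, transitions = 0.0, []
    for _ in range(tau):
        a = -k * s
        s_next, _ = env_step(s, a)
        transitions.append((s, a, s_next))
        s = s_next
    # Episodic generalized-posterior update, then policy update from model simulations.
    particles, weights = smc_reweight(particles, weights, np.array(transitions))
    k = policy_update(particles, weights)
    print(f"episode {j + 1}: posterior mean of theta = {np.sum(weights * particles):.3f}, policy gain k = {k:.2f}")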

Publication:

Sinha Roy, S., Everitt, R., Robert, C., and Dutta, R. (2024). Generalized Bayesian deep reinforcement learning. arXiv preprint arXiv:2412.11743.


Contact

Shreya.Sinha-Roy@warwick.ac.uk