I seek to identify general computational principles underlying value-based decision making, combining neural and behavioral perspectives. I am particularly interested in specific questions such as: How do humans and other animals develop and evaluate new, untried alternatives? How do we seek (or avoid) information? How is curiosity controlled, and which factors govern it? Computational frameworks such as reinforcement learning provide a useful theoretical basis for analyzing and understanding these questions.
My goal is to develop an algorithm that explains the curiosity of humans and other animals. Conversely, a good computational model of curiosity can support the reverse engineering of artificial agents that explore the world effectively under constraints on time and computation. Likewise, state-of-the-art exploration algorithms emerging from artificial intelligence research can help us better understand the curious behaviors of humans and other animals in real life.
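One common way such exploration algorithms operationalize curiosity is an intrinsic "novelty bonus" that rewards visiting unfamiliar states. The sketch below illustrates a simple count-based version of this idea; the function names, the toy chain environment, and the bonus form `beta / sqrt(1 + n)` are illustrative assumptions, not a specific published model.

```python
import random
from collections import defaultdict

def intrinsic_reward(visit_counts, state, beta=0.5):
    # Illustrative count-based curiosity bonus: large for novel states,
    # decaying as the state becomes familiar (beta is a free parameter).
    return beta / (1 + visit_counts[state]) ** 0.5

def run_episode(n_states=5, n_steps=20, seed=0):
    # Toy demonstration: a random walk on a small chain of states,
    # accumulating the curiosity bonus earned at each visit.
    rng = random.Random(seed)
    visit_counts = defaultdict(int)
    state, total_bonus = 0, 0.0
    for _ in range(n_steps):
        total_bonus += intrinsic_reward(visit_counts, state)
        visit_counts[state] += 1
        state = max(0, min(n_states - 1, state + rng.choice([-1, 1])))
    return total_bonus, visit_counts
```

In a full agent, this bonus would be added to the external reward before value learning, biasing choices toward informative, under-sampled options.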