
SAMPLING project

Over the past two decades, Bayesian models have been used to explain behaviour in domains ranging from intuitive physics and causal learning to perception, motor control, and language. Yet people produce clearly incorrect answers to even the simplest questions about probabilities. How can a supposedly Bayesian brain nevertheless reason so poorly with probabilities? Perhaps brains do not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, they could be approximating Bayesian inference through sampling: drawing samples from a distribution of likely hypotheses over time.
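As a rough illustration of this sampling idea (not the project's specific model or algorithm), the sketch below uses a random-walk Metropolis sampler, a standard algorithm from statistics, to draw hypotheses from a posterior and then answers a probability question by counting how often the sampled hypotheses satisfy it. The posterior and all parameter values are assumptions chosen for the example.

```python
import math
import random

def metropolis_samples(log_post, init, n_samples, step=0.5):
    """Draw samples from an unnormalised posterior with a random-walk Metropolis sampler."""
    x, samples = init, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)             # consider a nearby hypothesis
        if math.log(random.random()) < log_post(proposal) - log_post(x):
            x = proposal                                    # accept in proportion to plausibility
        samples.append(x)
    return samples

# Assumed example posterior: a standard normal belief over some quantity.
log_posterior = lambda h: -0.5 * h ** 2

samples = metropolis_samples(log_posterior, init=0.0, n_samples=5000)
# A probability judgment is then just a frequency among the sampled hypotheses.
print(sum(s > 1.0 for s in samples) / len(samples))        # roughly 0.16 for this posterior
```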

This promising approach has been used in existing work to explain biases in judgment. However, different algorithms have been used to explain different biases, and the existing data do not distinguish between sampling algorithms. The first aim of this project is to identify which sampling algorithm the brain uses, by collecting behavioural data on the sample generation process and comparing it to a variety of sampling algorithms from computer science and statistics. The second aim is to show how the identified sampling algorithm can systematically generate classic probabilistic reasoning errors in individuals, with the goal of upending the longstanding consensus on these effects. Finally, the third aim is to investigate how the identified sampling algorithm provides a new perspective on group decision-making biases and on errors in financial decision making, and to harness the algorithm to produce novel and effective ways for human and artificial experts to collaborate.

Since the beginning of the project, we have worked on the theoretical framework underlying the project and have explored how sampling can explain individual probabilistic reasoning errors. In particular, we have developed the Bayesian sampler, a model of how people might make estimates from samples, trading off the coherence of probabilistic judgments for improved accuracy. It provides a single framework for explaining phenomena associated with diverse biases and heuristics, such as conservatism and the conjunction fallacy. Another success is in showing how a particular form of sampling can explain how non-diagnostic information can bias judgments, a phenomenon known as the dilution effect.
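A minimal sketch of the estimate-from-samples idea, assuming a symmetric Beta prior is used to regularise the sampled frequency; the sample size, prior parameter, and probabilities below are illustrative assumptions, not fitted values from the published work.

```python
import random

def bayesian_sampler_estimate(true_prob, n_samples, beta=1.0):
    """Estimate a probability from a handful of mental samples, pulled toward 0.5
    by a symmetric Beta(beta, beta) prior over the reported estimate."""
    hits = sum(random.random() < true_prob for _ in range(n_samples))
    return (hits + beta) / (n_samples + 2 * beta)

# With few samples the prior has a strong pull, so small probabilities are overestimated
# and large ones underestimated on average, a conservatism-like pattern.
reps = 10_000
small = sum(bayesian_sampler_estimate(0.05, 5) for _ in range(reps)) / reps
large = sum(bayesian_sampler_estimate(0.95, 5) for _ in range(reps)) / reps
print(round(small, 2), round(large, 2))    # roughly 0.18 and 0.82 under these assumptions
```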

The Bayesian sampler turns out to provide a rational reinterpretation of “noise” in a state-of-the-art model of probability judgment, making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities, and we have shown in new experiments that this model better captures these judgments both qualitatively and quantitatively, going beyond the state of the art.
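One way to see the average-prediction equivalence, continuing the illustrative Beta-prior sketch above: the expected sampler estimate is a linear regression of the underlying probability toward 0.5, which has the same form as a simple symmetric response-noise model. The noise model and parameter values below are assumptions for illustration, not the published model specification.

```python
# Numerical check that the two models make the same average predictions,
# under the same illustrative assumptions as the sketch above.
N, beta = 5, 1.0
d = beta / (N + 2 * beta)                  # noise rate implied by the sampler's parameters

def sampler_mean(p):                       # expected Bayesian-sampler estimate
    return (N * p + beta) / (N + 2 * beta)

def noise_mean(p):                         # expected estimate under a symmetric noise model
    return (1 - 2 * d) * p + d

for p in (0.1, 0.5, 0.9):
    print(p, round(sampler_mean(p), 4), round(noise_mean(p), 4))   # identical means
```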

Publications:

Zhu, J.-Q., Sanborn, A.N. & Chater, N. (2020). The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments. Psychological Review. https://doi.org/10.1037/rev0000190.

Sanborn, A.N., Noguchi, T., Tripp, J. & Stewart, N. (2020). A dilution effect without dilution: When missing evidence, not non-diagnostic evidence, is judged inaccurately. Cognition, 196, 104110. https://doi.org/10.1016/j.cognition.2019.104110.