
Vaccine attitudes detected in tweets by AI model

· An AI model developed at the University of Warwick can detect social media users’ stances on, and concerns about, vaccination.

· The Vaccine Attitude Detection (VADet) model needs only a small number of sample tweets, with stances pre-identified by researchers, as training examples before carrying out larger analyses.

· It analysed 1.9 million tweets and learned to identify a person’s viewpoint on aspects of vaccination mentioned in a post, such as safety, side effects, immunity and conspiracy beliefs.

· It could potentially help healthcare organisations and government agencies address vaccine hesitancy and combat misinformation.

People’s attitudes towards vaccines can now be detected from their social media posts by an AI model developed by researchers at the University of Warwick.

The AI-based model can analyse a social media post and establish its author’s stance towards vaccines, by being ‘trained’ to recognise that stance from a small number of example tweets.

As a simple example, if a post contains mentions of mistrust in healthcare institutions, a fear of needles, or something related to a known conspiracy theory, the model can recognise that the person who wrote it likely feels negatively towards vaccinations.

The research, funded by UK Research and Innovation (UKRI), is being presented today (12 July) at the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics.

It is led by Professor Yulan He of the University’s Department of Computer Science, who is supported by a 5-year Turing AI Fellowship funded by the EPSRC.

Professor He and her colleagues at the University of Warwick have used a dataset of 1.9 million tweets in English, posted from February to April 2021, to develop the Vaccine Attitude Detection (VADet) Model.

VADet first analysed the stream of tweets concerning COVID-19 vaccines, learning an ever-increasing variety of elements and contexts pertinent to the ongoing vaccination debate. Then, the model gradually narrowed its analysis to patterns characterising users’ concerns and attitudes.

VADet looks for statistical patterns in words relating to different topics or stances. It is built on a large-scale language model pre-trained on a large amount of text from English books and Wikipedia, which gives it general linguistic knowledge. It was then trained on vaccine-related tweets so that it learns which topics are discussed in them.

A small number of those tweets were then manually labelled by the researchers with the user’s stance towards the topics discussed. VADet leverages this small set of labelled tweets to distinguish semantic information relating to stance and topic in the remaining unlabelled tweets.

The AI model then arranged the tweets into clusters of similar aspects, forming geometric patterns that visually demonstrate how certain viewpoints on vaccinations (pro-vaccination, anti-vaccination, or neutral) can be linked with specific detectable characteristics or references in a social media post.
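The clustering step described above can be illustrated with a minimal sketch. This is not VADet’s actual code: it assumes tweets have already been turned into embedding vectors (here, toy 2-D points), and uses a plain k-means loop with a naive initialisation to group them into aspect clusters.

```python
import math

def kmeans(points, k, iters=20):
    """Minimal k-means: group embedding vectors into k clusters."""
    # Naive init: first k points (real systems use k-means++ or similar).
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(
                    sum(dim) / len(members) for dim in zip(*members)
                )
    return centroids, clusters

# Toy 2-D "tweet embeddings": two well-separated aspect groups.
embeddings = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
              (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(embeddings, k=2)
```

In a real pipeline the embeddings would come from the fine-tuned language model, and each resulting cluster could then be inspected for the aspect (safety, side effects, conspiracy beliefs, and so on) its tweets discuss.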

The model could potentially be used to provide insights into why people are negative about vaccination, information that government and health organisations can use to design better targeted messages to reassure the general public about vaccination.

Professor Yulan He from Warwick’s Department of Computer Science and AI Acceleration Fellow at The Alan Turing Institute commented:

“The COVID pandemic intensifies the use of social media. People express their attitudes towards matters relating to public health, including COVID-19 vaccinations. We have shown that it’s possible to monitor social media traffic, detect vaccine attitudes and segment tweets into clusters discussing similar aspects. Such real-time monitoring of public attitudes could help healthcare organisations and government agencies address vaccine hesitancy and combat misinformation regarding vaccines in a timely manner.”

The key to the breakthrough lies in the specially developed algorithm, which has two crucial capabilities. Firstly, it can leverage large-scale social media data about vaccination to detect topics automatically. This is done by inserting a topic layer into an existing pre-trained language model.
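The paper’s exact architecture is not detailed in this release, but as a rough, hypothetical illustration, a “topic layer” can be thought of as projecting a contextual embedding onto a set of learned topic vectors and normalising the scores into a topic mixture. All names and numbers below are illustrative, not VADet’s real parameters.

```python
import math

def topic_layer(embedding, topic_weights):
    """Hypothetical topic layer: score a contextual embedding against
    K topic vectors, then softmax the scores into a topic mixture."""
    scores = [sum(e * w for e, w in zip(embedding, topic))
              for topic in topic_weights]
    # Numerically stable softmax over the topic scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [x / total for x in exps]

# Toy 4-dim embedding and 3 illustrative topic vectors.
embedding = [0.9, 0.1, 0.0, 0.2]
topics = [[1.0, 0.0, 0.0, 0.0],   # e.g. "safety"
          [0.0, 1.0, 0.0, 0.0],   # e.g. "side effects"
          [0.0, 0.0, 1.0, 0.0]]   # e.g. "conspiracy beliefs"
mixture = topic_layer(embedding, topics)
```

Because the layer outputs a probability distribution over topics for each post, topics can be detected automatically from unlabelled data at scale.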

Secondly, the algorithm can be adapted on a small set of social media posts labelled with vaccine attitudes to automatically detect particular patterns of topics and topic-associated attitudes. “This so-called adaptive self-improvement capability has not previously been explored for vaccine attitude detection,” says Lixing Zhu, a PhD student at Warwick’s Department of Computer Science who implemented the VADet model.
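One common way to realise this kind of self-improvement is self-training: fit a classifier on the small labelled set, pseudo-label the unlabelled posts it is confident about, fold those into the training set, and repeat. The sketch below is a generic illustration of that loop using a nearest-centroid rule on toy 2-D embeddings, not the method from the paper.

```python
import math

def centroid(vectors):
    return tuple(sum(dim) / len(vectors) for dim in zip(*vectors))

def self_train(labelled, unlabelled, threshold=1.0, rounds=3):
    """Sketch of self-training: pseudo-label unlabelled points that lie
    close to a stance centroid, add them to the training set, repeat."""
    labelled = dict(labelled)          # point -> stance label
    pool = list(unlabelled)
    for _ in range(rounds):
        # Recompute one centroid per stance from the current labels.
        classes = {}
        for point, label in labelled.items():
            classes.setdefault(label, []).append(point)
        cents = {lab: centroid(pts) for lab, pts in classes.items()}
        added, remaining = [], []
        for p in pool:
            # Keep only confident matches (close to some centroid).
            dists = {lab: math.dist(p, c) for lab, c in cents.items()}
            best = min(dists, key=dists.get)
            if dists[best] < threshold:
                labelled[p] = best       # pseudo-label
                added.append(p)
            else:
                remaining.append(p)
        pool = remaining
        if not added:                    # nothing new learned; stop early
            break
    return labelled

seed = {(0.0, 0.0): "anti", (5.0, 5.0): "pro"}
unlab = [(0.4, 0.2), (4.8, 5.1), (0.2, 0.5), (5.3, 4.7), (2.5, 9.0)]
labels = self_train(seed, unlab)
```

Note that the outlying point stays unlabelled: the confidence threshold keeps the loop from polluting the training set with uncertain pseudo-labels.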

Professor He added: “The WHO identified vaccine hesitancy as one of the top ten health threats to the world in 2019. By automatically detecting vaccine attitudes from social media, our solution has the potential to enable more timely intervention to address concerns towards vaccination.”


Note to editors:

· The research has been summarised in a paper which will be presented at the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Co-authors are PhD students Lixing Zhu and Zheng Fang; postdoctoral research associate Gabriele Pergola; and Rob Procter, Professor of Social Informatics and Fellow at The Alan Turing Institute.

· The research project entitled ‘Learning from COVID-19: An AI-enabled evidence-driven framework for claim veracity assessment during pandemics’ is funded by the EPSRC.


University of Warwick press office contact:

Simmie Korotane

Media Relations Manager (Warwick Medical School and Department of Physics) | Press & Media Relations | University of Warwick