Gabriele Pergola

Gabriele Pergola is an Assistant Professor in the Department of Computer Science at the University of Warwick.

Research Interests

His current research investigates the use of statistical models within machine learning for natural language processing and text understanding. He is particularly interested in topics related to sentiment analysis, question answering, topic and event extraction, and clinical text mining. More broadly, he is curious about the relationship between language and information.

He was previously a Research Fellow working with Prof. Yulan He as part of the Event-Centric Framework for Natural Language Understanding, a Turing AI Acceleration Fellowship held by Prof. He and funded by UKRI.

He collaborates closely with law enforcement agencies and the Forensics Capability Network (FCN) to develop advanced NLP frameworks that enhance investigative capabilities. His projects include novel AI systems to detect messages related to violence against women and girls (VAWG) and to analyse drug-related communications and networks, delivering cutting-edge NLP technology to professionals working on the front lines of public safety.

In addition to his work on forensic analysis, he advises both law enforcement and the UK government as part of the Academic Advisory Group on Generative AI Detection R&D. He also collaborates with the Ministry of Housing, Communities & Local Government (MHCLG) on projects that leverage NLP for policy automation, as well as on separate projects analysing digital immigration systems to provide insights into migrant experiences and support informed policy-making.

Teaching

  • CS918 Natural Language Processing - Module Leader (Term II, 2022/2023 and 2023/2024)
  • CS918 Natural Language Processing - Module Organizer (Term I, 2021/2022)
  • CS918 Natural Language Processing - Lab Demonstrations and Seminars (Term I, 2018/2019, 2019/2020, and 2020/2021)
  • CS909 Data Mining - Lab Demonstrations and Seminars (Term II, 2018/2019 and 2019/2020)
  • CS331 Neural Computing - Seminars (2019/2020)

Education

He received his PhD in natural language understanding from the University of Warwick (UK) with a thesis on "Probabilistic Neural Topic Models for Text Understanding".
He holds a BSc and an MEng (cum laude) in Computer Engineering from the University of Palermo (Italy). Prior to joining Warwick, he received a Postgraduate Fellowship from the University of Rome "La Sapienza" (Italy) to design and implement machine learning systems supporting access to cultural heritage, as part of the research project "Design and development of innovative technologies for the enjoyment of cultural heritage".

Awards and Recognitions

  • 2022 - Turing Post-Doctoral Enrichment Scheme - Awarded by the Alan Turing Institute (ATI), London, UK.

  • 2022 - PhD Thesis Prize - Faculty of Science, Engineering, and Medicine (SEM) Thesis Prize in Computer Science. Awarded by the University of Warwick, Coventry, UK.

  • 2019 - Best Presentation - Prize in the Machine Learning and AI track, WPCCS 2019, University of Warwick.

Recent Invited Talks

  • 2023 - "Large Language Models and The Key Ingredients Powering the Rise of Chatbots" - Keele University
  • 2023 - "The not-so-silent AI revolution: Chatbots and their Impact on Engaging Education" - Department of Psychology - University of Warwick

Publications

Until 2020

  • J. Lu, G. Pergola, L. Gui, B. Li and Y. He. CHIME: Cross-passage Hierarchical Memory Network for Generative Review Question Answering, The 28th International Conference on Computational Linguistics (COLING), Dec. 2020. [code]
  • L. Gui, J. Leng, G. Pergola, Y. Zhou, R. Xu and Y. He. Neural Topic Model with Reinforcement Learning. Conference on Empirical Methods in Natural Language Processing (EMNLP), Hong Kong, China, Nov. 2019
    @inproceedings{gui_rl2019,
        title = "Neural Topic Model with Reinforcement Learning",
        author = "Gui, Lin  and
          Leng, Jia  and
          Pergola, Gabriele  and
          Zhou, Yu  and
          Xu, Ruifeng  and
          He, Yulan",
        booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
        month = nov,
        year = "2019",
        address = "Hong Kong, China",
        publisher = "Association for Computational Linguistics",
        url = "https://www.aclweb.org/anthology/D19-1350",
        pages = "3478--3483"
    }
    
    In recent years, advances in neural variational inference have achieved many successes in text processing. Examples include neural topic models, which are typically built upon a variational autoencoder (VAE) with an objective of minimising the error of reconstructing original documents based on the learned latent topic vectors. However, minimising reconstruction errors does not necessarily lead to high-quality topics. In this paper, we borrow the idea of reinforcement learning and incorporate topic coherence measures as reward signals to guide the learning of a VAE-based topic model. Furthermore, our proposed model is able to automatically separate background words from topic words, thus eliminating the pre-processing step of filtering infrequent and/or highly frequent words typically required for learning traditional topic models. Experimental results on the 20 Newsgroups and the NIPS datasets show superior performance on both perplexity and topic coherence compared to state-of-the-art neural topic models.
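    A minimal, illustrative sketch of the idea above (rewarding topic coherence while training a VAE-based topic model), assuming PyTorch. This is not the paper's implementation: it replaces the reinforcement-learning formulation and the corpus-based coherence measure (e.g. NPMI) with a simple differentiable stand-in reward computed from the topic-word distributions.

    # Sketch only: VAE-style neural topic model with a stand-in coherence reward.
    import torch
    import torch.nn as nn

    class NeuralTopicModel(nn.Module):
        def __init__(self, vocab_size, num_topics=50, hidden=256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
            self.to_mu = nn.Linear(hidden, num_topics)
            self.to_logvar = nn.Linear(hidden, num_topics)
            self.decoder = nn.Linear(num_topics, vocab_size)   # topic-word weights

        def forward(self, bow):                                # bow: (batch, vocab) word counts
            h = self.encoder(bow)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
            theta = torch.softmax(z, dim=-1)                   # document-topic proportions
            log_probs = torch.log_softmax(self.decoder(theta), dim=-1)
            nll = -(bow * log_probs).sum(-1).mean()            # reconstruction error
            kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
            return nll + kld                                   # the usual VAE objective

        def coherence_proxy(self, top_k=10):
            # Stand-in reward: probability mass on each topic's top-k words (differentiable).
            beta = torch.softmax(self.decoder.weight.t(), dim=-1)     # (num_topics, vocab)
            return beta.topk(top_k, dim=-1).values.sum(-1).mean()

    # One toy training step: the reward is maximised alongside the usual objective.
    model = NeuralTopicModel(vocab_size=2000)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    bow = torch.randint(0, 5, (8, 2000)).float()               # toy bag-of-words batch
    optimiser.zero_grad()
    loss = model(bow) - 1.0 * model.coherence_proxy()
    loss.backward()
    optimiser.step()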
    
  • G. Pergola, L. Gui and Y. He. TDAM: a Topic-Dependent Attention Model for Sentiment Analysis. Information Processing and Management, 56(6):102084, 2019.
    @article{pergola19tdam,
            title = {TDAM: A topic-dependent attention model for sentiment analysis},
            author = {Gabriele Pergola and Lin Gui and Yulan He},
            journal = {Information Processing \& Management},
            year = {2019},
            publisher = {Elsevier},
            volume ={56},
            number = {6},
            pages = {102084},
            issn = {0306-4573},
            url = {http://www.sciencedirect.com/science/article/pii/S0306457319305461}
    }
    We propose a topic-dependent attention model for sentiment classification and topic extraction. Our model assumes that a global topic embedding is shared across documents and employs an attention mechanism to derive local topic embeddings for words and sentences. These are subsequently incorporated in a modified Gated Recurrent Unit (GRU) for sentiment classification and for the extraction of topics bearing different sentiment polarities. Those topics emerge from the words' local topic embeddings learned by the internal attention of the GRU cells in the context of a multi-task learning framework. In this paper, we present the hierarchical architecture, the new GRU unit, and the experiments conducted on users' reviews, which demonstrate classification performance on a par with state-of-the-art methodologies for sentiment classification, and topic coherence outperforming current approaches for supervised topic extraction. In addition, our model is able to extract coherent aspect-sentiment clusters despite using no aspect-level annotations for training.
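    A minimal sketch of the topic-dependent attention idea, assuming PyTorch: a global topic-embedding matrix is shared across documents, and each word's hidden state attends over it to obtain a local topic embedding used for classification. This approximation uses a standard GRU with an external attention layer rather than the paper's modified GRU cell and hierarchical, multi-task architecture.

    # Sketch only: topic-dependent attention on top of a standard GRU.
    import torch
    import torch.nn as nn

    class TopicAttentionClassifier(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, hid_dim=128, num_topics=20, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.topics = nn.Parameter(torch.randn(num_topics, hid_dim))   # global topic embeddings
            self.classify = nn.Linear(2 * hid_dim, num_classes)

        def forward(self, token_ids):                       # token_ids: (batch, seq_len)
            h, _ = self.gru(self.embed(token_ids))          # (batch, seq_len, hid_dim)
            scores = h @ self.topics.t()                    # attention over the shared topics
            attn = torch.softmax(scores, dim=-1)            # (batch, seq_len, num_topics)
            local_topics = attn @ self.topics               # local topic embedding per word
            doc = torch.cat([h, local_topics], dim=-1).mean(dim=1)   # pooled document vector
            return self.classify(doc)                       # sentiment logits

    # Example: logits = TopicAttentionClassifier(vocab_size=5000)(torch.randint(0, 5000, (4, 30)))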
    
  • G. Pergola, Y. He and D. Lowe. Topical Phrase Extraction from Clinical Reports by Incorporating both Local and Global Context. The 2nd AAAI Workshop on Health Intelligence (AAAI18), New Orleans, Louisiana, USA, Feb. 2018.
    @inproceedings{pergola18,
    title = "Topical Phrase Extraction from Clinical Reports by Incorporating both Local and Global Context",
    author = "Gabriele Pergola and Yulan He and David Lowe",
    booktitle = "The 2nd AAAI Workshop on Health Intelligence (AAAI)",
    year = "2018",
    month = jun,
    day = "20",
    language = "English",
    pages = "499--506"
    }
    
    Making sense of words often requires simultaneously examining the surrounding context of a term as well as the global themes characterizing the overall corpus. Several topic models have already exploited word embeddings to recognize local context; however, it has been weakly combined with the global context during the topic inference. This paper proposes to extract topical phrases corroborating the word embedding information with the global context detected by Latent Semantic Analysis, and then combine them by means of the Pólya urn model. To highlight the effectiveness of this combined approach, the model was assessed by analyzing clinical reports, a challenging scenario characterized by technical jargon and the limited word statistics available. Results show it outperforms the state-of-the-art approaches in terms of both topic coherence and computational cost.
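    A small sketch of combining local and global context as described above, assuming scikit-learn and a hypothetical pretrained word-embedding matrix local_vecs aligned with the vocabulary. The blended word-similarity matrix is the kind of relatedness signal a Pólya urn-based sampler could draw on; the urn/Gibbs sampling step itself is omitted.

    # Sketch only: blend local (embedding) and global (LSA) word similarity.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.preprocessing import normalize

    def blended_similarity(docs, vocab, local_vecs, n_global=50, alpha=0.5):
        # Global context: LSA word vectors from the term-document matrix
        # (n_global must be smaller than the number of documents).
        counts = CountVectorizer(vocabulary=vocab).fit_transform(docs)   # (n_docs, n_vocab)
        lsa_vecs = TruncatedSVD(n_components=n_global).fit_transform(counts.T)
        sim_global = normalize(lsa_vecs) @ normalize(lsa_vecs).T         # cosine similarity
        # Local context: similarity in the pretrained embedding space (hypothetical input).
        sim_local = normalize(np.asarray(local_vecs)) @ normalize(np.asarray(local_vecs)).T
        return alpha * sim_local + (1 - alpha) * sim_global              # (n_vocab, n_vocab)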
    
  • Additional earlier publications appeared in the proceedings of AI*IA (pp. 294-307), CompSysTech (pp. 339-346), and AVI.

2024/2025 - PhD Scholarships

We have multiple PhD scholarships available for International / EU / Home Students.

If you are interested in applying for a PhD position in my group, please check our recent publications to make sure your interests align with our research activities, then email me attaching your CV, academic transcripts (UG/PG), and a short PhD proposal.


Contact

Room CS2.34,
Computer Science Department,
University of Warwick,
Coventry,
CV4 7AL

gabriele dot pergola dot 1 at warwick dot ac dot uk