Building Bridges: A Symposium on Human-AI Interaction
Call for Papers
We are pleased to invite you to participate in an interdisciplinary event, Building Bridges: A Symposium on Human-AI Interaction, which will take place at the University of Warwick and online via MS Teams on 21 November 2025.
This symposium seeks to explore the dynamics of human-AI interaction through the lenses of language, culture, and identity. As AI technologies rapidly evolve, they are not only transforming the technological landscape but also reshaping how we communicate, express identities, and negotiate social norms across languages and cultures.
AI systems, ranging from healthcare chatbots and customer service assistants to AI language tutors, are designed to simulate human-like interactions through contextually relevant, real-time responses. While these technologies hold vast potential to support human users by facilitating access to information, offering advice, and even providing companionship, they face persistent challenges: limited multilingual capacities, inadequate conversational abilities, misattribution of interactional roles, limited emotional capabilities, relational misalignments, and cultural insensitivity. These limitations are not only technical but also deeply social, as they affect how users’ identities are recognised, represented, or sometimes misrepresented in interaction with AI. Addressing these limitations requires urgent, interdisciplinary engagement to ensure AI develops in ways that are socially, culturally, and ethically grounded.
We welcome contributions that critically examine how human users’ global engagement with AI is shaping and being shaped by communication, identity, and social practices throughout the human-AI interactional process. Submissions are encouraged from a wide range of perspectives, including applied linguistics, sociology, media studies, computer science, and related disciplines, provided they maintain a clear focus on language, culture, and identity. Researchers at all career stages are warmly invited to contribute, with early-career researchers especially encouraged to participate.
Event Details
- Time: 9:30am - 5pm, Friday 21 November 2025
- Location: C0.02 & C0.08, Zeeman Building, University of Warwick / online via MS Teams
- Registration fee: Free
- Registration deadline: Sunday 16 November 2025
Please note that space is limited, and early registration is encouraged.
Highlights
The symposium will feature:
- Two keynote speeches from leading scholars
- An experiential and creative AI-avatar networking activity
- Engaging lightning talks showcasing diverse perspectives
Join us in this collaborative space to exchange ideas, foster interdisciplinary dialogue, and imagine new pathways for understanding the human-AI relationship.
Submission of Abstract
We invite contributions for 10-minute lightning talks at this hybrid symposium. Presenters are encouraged to share work at any stage of development, including ongoing projects, as the aim of the symposium is to foster dynamic discussions and collaboration.
- Abstracts should be a maximum of 300 words, including references, and should clearly state the aim, methods, previous research/theory, and (expected) results of the study
- Please submit here: https://forms.office.com/e/TAKKBv2vfT?origin=lprLink
- Submission deadline: Sunday 12 October 2025
- Notification of acceptance: Monday 27 October 2025
Research in Conversation: Public Engagement Scheme
We are delighted to announce an opportunity for selected presenters of this symposium to participate as guest speakers on the flagship podcast series AI Ethics Now hosted by Dr Tom Ritchie. Supported by the Institute for Advanced Teaching and Learning at the University of Warwick, AI Ethics Now is a podcast dedicated to exploring the complex issues surrounding AI from a non-specialist perspective, including bias, ethics, privacy, and accountability. This scheme aims to co-create public conversations about human-AI interaction and explore the impact of interdisciplinary research in this area.
Keynote Speakers
Morning talk on 21 November
Prof. Ema Ushioda, University of Warwick
Ema Ushioda is a professor in Applied Linguistics at the University of Warwick, where she served as head of department from 2018 to 2023. She has been working in language education for 40 years and has long-standing research interests in motivation and autonomy in language learning. She has published widely in these areas, especially in collaboration with the late Zoltán Dörnyei, with whom she produced an edited book, a special issue of The Modern Language Journal, and the second and third editions of Routledge’s Teaching and Researching Motivation. Ema’s more recent work has focused on the ethical and social values of motivation research, which she discusses in her book Language Learning Motivation: An Ethical Agenda for Research (Oxford University Press, 2020). She is now applying this interest in ethical issues to researching human-AI academic writing practices and motivations, and is currently leading a project on this topic funded by a Leverhulme Trust Research Project Grant (https://warwick.ac.uk/fac/soc/al/research/aiwriting).
Ethical Perspectives on Human-AI Academic Writing: Challenges and Opportunities
There is growing recognition that AI tools may help to streamline research and writing processes, allowing scholars to be more creative, productive and efficient. Yet there are also widespread ethical concerns around the potential for dishonesty and fraud, such as using AI to ghostwrite academic papers or fabricate research. However, while many academic journals now require authors to declare their use of AI, little consensus exists on what constitutes acceptable or unacceptable use, making it challenging to navigate academic integrity boundaries. Navigation is even more challenging for scholars for whom English is not their dominant language and who may wonder how far they can use AI to draft, edit, transform or even translate their manuscripts. From the ethical perspective of equality, diversity and inclusion, AI tools could potentially create a more level playing field for our global multilingual academic community, in line with UNESCO’s (2021) principles for reducing language barriers in the promotion of open science. However, publishers generally adopt a restrictive position on AI use by authors, while simultaneously selling professional (human) editing and translation services for those able to pay.
In this talk, I will thus address some ethical perspectives on human-AI academic writing and critically discuss (a) the challenges of navigating academic integrity boundaries and (b) the opportunities to create a more equitable and inclusive publishing landscape.
Afternoon talk on 21 November
Dr Adam Brandt, Newcastle University
Adam Brandt is Senior Lecturer (Associate Professor) of Applied Linguistics at Newcastle University, where he was head of Applied Linguistics & Communication from 2018-2023. He researches social interaction, and particularly how people communicate with/through communication technologies such as videoconferencing platforms, virtual reality and conversational AI. He is also interested in exploring the ways Applied Linguists can collaborate with industry to inform the design of conversational AI agents.
In 2023, he was awarded a British Academy Innovation Fellowship for his work with a digital health startup on identifying principles of effective conversational AI. He was also awarded a JSPS Bridge Fellowship for research on communicative technologies in intercultural contexts. He is currently working on an interdisciplinary academic-industry collaborative project, funded by Innovate UK, on encoding empathy into conversational AI for healthcare consultations.
Building bridges between applied linguistics and industry: the case of conversation analysts and an AI clinical assistant
Alongside the rapid growth of artificial intelligence in recent years has come the emergence of artificial sociality – technologies that simulate social behaviour (Natale & Depounti, 2024). From chatbots to voice assistants to social robots, these systems are increasingly embedded in our daily lives. This raises important questions around how users can effectively engage with such technologies.
In this talk, I will draw on an ongoing collaboration between a team of conversation analysts and engineers at a digital health startup to argue that applied linguists are uniquely placed to address these questions. With expertise in the observation, description, and analysis of language and social interaction at a fine-grained level, and an understanding of language as central to human sociality, applied linguists can shed light on the communicative opportunities and challenges posed by conversational AI agents. We can also work with engineers to help develop agents which leverage human communicative practices, minimising disruption for users hitherto used to interacting with other humans.
I will demonstrate how insights from conversation analysis helped to inform the design of an AI-powered clinical assistant for healthcare consultations. I will also explore the practices users employ when engaging with the system, and consider the implications for identity, interculturality, and the role of empathy in human-AI interaction.
Programme
C0.02, Zeeman Building, University of Warwick
Register and pick up your symposium package.
Enquiries
Please contact:
Organising Committee
Dr Yanyan Li, Xianzhi Chen, Kaiqi Yu
This event is jointly funded by the IAS Conversations Scheme, Institute of Advanced Study, and the Networking Fund, Doctoral College, University of Warwick, and is sponsored by Warwick Students' Union.
Registration
The registration form is now closed and no longer accepting submissions. Thank you for your interest.