Three AI-related papers from CIM presented @ CSCW and AIES this week!
Researchers from CIM are presenting three papers on AI-related research across two conferences this week — two papers at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) in Madrid and one at the ACM SIGCHI (Association for Computing Machinery Special Interest Group on Computer-Human Interaction) Conference on Computer-Supported Cooperative Work & Social Computing (CSCW) in Bergen, Norway. While AIES brings together perspectives on AI from computer science, law and policy, the social sciences, ethics, and philosophy, CSCW focuses on research into how technologies affect groups, organisations, and other social structures.
Paper 1:
Are our existing scientific research methods sufficient to study the dynamic, relational interactions emerging between humans and interactive AI systems? While much of the current debate in AI ethics and governance focuses on risks and accountability, far less attention has been paid to the methodological challenges of studying these evolving relationships. Traditional research methods often struggle to capture the fluid, long-term nature of human–AI interactions that develop over time. In this paper, we propose the concept of interaction-centric knowledge (real-world, evidence-based insights into how human–AI relationships evolve and shape human behavior) as a foundation for more human-centered AI governance.
Pi, Y., Turkay, C., & Bogiatzis-Gibbons, D. (2025). Interactive AI and Human Behavior: Challenges and Pathways for AI Governance. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(3), 2016-2029.
Paper 2:
How can we make sense of the proliferation of claims about the “increasing” agency of AI and the growing crisis of AI accountability? In this paper, we draw on sociology and linguistic anthropology to argue that agency is a fundamentally relational and social concept, meaning, for example, that a given “agent” is not in fact “agentic” until it can be held accountable or blamable (practically and/or legally) for its actions. As such, contemporary AI agents — often large language models (LLMs) enabled with the facility to use external tools or function calls — can be said to lack agency, despite the (implicit or explicit) claims of some recent AI safety studies. This work can in turn inform currently unfolding debates in AI research and policy on questions of AI liability by drawing connections with existing legal scholarship.
Timaite, G., & Castelle, M. (2025). Agents Without Agency: Anthropological and Sociological Lessons for Contemporary AI Research and Policy. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(3), 2493-2507.
Paper 3:
This work examines how domain expertise and practical experience influence practitioners’ interpretation and acceptance of AI explanations in high-risk decision-making contexts. Through an empirical study with manufacturing practitioners using a simulated bearing fault diagnostic task, we identified three distinct user profiles exhibiting different patterns of trust and reasoning with explainable AI (XAI). Beyond these empirical findings, we contribute two methodological innovations: (1) integrating industry practitioners as co-researchers and interview facilitators; and (2) using sketch analysis to visualise and trace users’ diagnostic reasoning.
Zhao, Z., Castelle, M., & Turkay, C. (2025). Domain Experience and Expertise in Explainable AI Applications: A Bearing Fault Diagnosis Case Study. Proceedings of the ACM on Human-Computer Interaction, 9(7).