
Panel 1 - Machine, Human and the Social

Speakers:

Dongdi Chen (PGT, Warwick Business School)

Aswini Misro (PGT, Warwick Business School)

Zibin Zhao (PGR, Centre for Interdisciplinary Methodologies)

Erkang Fu (PGT, Centre for Interdisciplinary Methodologies)

Chair:

Abdullah Safir (PGT, Centre for Interdisciplinary Methodologies)

How Digitalisation Reshapes the Decision-Making Process and Drives Innovation: From the Perspective of Mechanism Construction for Effective Human-Machine Collaboration

Dongdi Chen (PGT, Warwick Business School)

Keywords: Digital Transformation, Human-Machine Collaboration, Innovative Decision-Making, Organizational Architecture

With the spread of Covid-19 and constantly changing anti-epidemic policies at the socio-economic level, most organizations today appear to require deep transformational change to deal properly with uncertainty and thus gain competitive advantage under ever-changing circumstances. Compared with shallow upgrades to production technology, decision-making plays the central role within organizations because of its ubiquity and its navigating role in management; in the digital era, effective decision-making requires information processing and valid prediction to eliminate human cognitive bias and thus optimise performance. In recent years, with continuously increasing computational power and updated algorithms, the execution of machines has extended to functions beyond humans' reach, making effective human-machine interaction one of the key factors in the further utilisation of AI/ML, especially in decision-making, where machine intelligence could help mitigate the adverse effects of humans' innate limitations on decision-making outcomes. However, owing to a limited understanding of machine capabilities and the dynamic nature of decision-making tasks, applications of human-machine collaboration have focused mainly on routine and adaptive decisions, where AI acts only as a human assistant and makes recommendations based on available data. With increasing information incompleteness and rising information acquisition costs, innovative decision-making, characterised by high risk and uncertainty, has become an unavoidable trend, which necessitates feasible and reasonable mechanisms at the organizational level to fully leverage machine capabilities and ensure the effectiveness of innovative decision-making within organizations. The objective of this paper is to extract novel insights into the construction of three building blocks of human-machine teams: authority seniority, dynamic task allocation, and appropriate accountability distribution, through two in-depth case studies and semi-structured interviews with employees in three companies of different types that have begun introducing AI into complex innovative decision-making activities.

A Usability Evaluation of Artificial Intelligence-Powered Physician Consultation during the COVID-19 Pandemic

Aswini Misro (PGT, Warwick Business School) (online)

Keywords: Cancer Risk Assessment, Chatbot, Healthcare, Persuasive Technology, System Usability Scale, Usability Evaluation

The COVID-19 pandemic has resulted in a forced transition to telemedicine, where history-taking and clinical assessments are performed remotely during video or telephone consultations. While telemedicine has added to safety and social distancing during the pandemic, the manual and resource-intensive process of telephone and video consultations has not eased the patient backlog but has added to this snowballing issue. We carried out a project using YouDiagnose, a persuasive technology designed to assist the process of breast cancer risk assessment. YouDiagnose automates patient triage and clinical assessment using artificial intelligence technologies delivered through either a Chatbot or a Smart Questionnaire. A usability evaluation was conducted with participants from the Patient and Public Involvement and Engagement Senate (PIES) of the Innovation Agency, United Kingdom. Qualitative feedback was obtained from the participants on both modalities, and quantitative feedback was obtained by applying the System Usability Scale (SUS) to compare the usability of the two interaction modalities. The SUS scores were analysed using the Adjective Rating Scale, which revealed that the Smart Questionnaire had "Good" usability and the Chatbot achieved "OK" usability. The evaluation highlighted the improvement in user experience and the untapped potential of process automation and artificial intelligence in clinical services using persuasive technologies.
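
For readers unfamiliar with SUS scoring, a minimal Python sketch of the standard calculation is given below. The adjective thresholds are approximate values after Bangor et al. (2009) and are illustrative assumptions, not the exact cut-offs or data used in this study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled by 2.5
    to yield a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)


def adjective_rating(score):
    """Map a mean SUS score to an adjective label.

    Thresholds are approximations of Bangor et al. (2009), used here
    purely for illustration.
    """
    if score >= 85:
        return "Excellent"
    if score >= 71:
        return "Good"
    if score >= 51:
        return "OK"
    return "Poor"


# Hypothetical example: one participant's ten responses for one modality.
score = sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2])
print(score, adjective_rating(score))  # -> 82.5 Good
```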

An Empirical Study of XAI Explanations with Different Levels of Interactivity

Zibin Zhao (PGR, Centre for Interdisciplinary Methodologies)

Keywords: Explainable artificial intelligence (XAI), Explanation interface design, Interactivity, Narrative visualisation, Co-design

In recent global public health emergencies, many machine learning (ML) and artificial intelligence (AI) systems have been applied to support the response to the Covid-19 pandemic (Lalmuanawma et al., 2020). However, with the expanding application of AI algorithms and ML models across disciplines, several potential risks and social-ethics issues have sparked extensive debate. Among them, the most pressing problems are the opacity and uncertainty of black-box models. In response, a growing number of eXplainable Artificial Intelligence (XAI) techniques have been proposed to improve the interpretability, transparency, fairness, and accountability of black-box models (Mueller et al., 2019).

While several XAI technologies have been proposed to enhance the explainability of machine learning, their practical utility remains limited, and a growing number of non-technical factors are now considered when researching the application of XAI technologies. Some studies have found that interactive explanation interfaces effectively aid end-users' understanding of XAI explanations (Schlegel et al., 2020; Alicioglu and Sun, 2022). However, little is known about the extent to which interactivity affects users' understanding, and not all interactions are helpful. To this end, this research asks how varying levels of interactivity within the explanation interface design may affect the applicability of explainable technology in ML-informed practices. We report on an experiment in which we empirically studied how visual explanations of ML results with varying levels of interactivity could support participants from non-computing backgrounds in a machine-learning-focused learning exercise. Five prototypes presenting explanatory information at differing levels of interactivity were co-designed with participants, and a think-aloud protocol was then used to test these prototypes. We preliminarily observed that exposing interaction parameters on the explanation interface is not suitable for all situations or for all non-expert users.

References:

Alicioglu, G. and Sun, B. (2022). A survey of visual analytics for Explainable Artificial Intelligence methods. Computers & Graphics, 102, 502-520. Available from: https://doi.org/10.1016/j.cag.2021.09.002

Lalmuanawma, S., Hussain, J. and Chhakchhuak, L. (2020). Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: A review. Chaos, Solitons & Fractals, 139, 110059. Available from: https://doi.org/10.1016/j.chaos.2020.110059

Mueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A. and Klein, G. (2019). Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI. DARPA XAI Literature Review. arXiv:1902.01876 [cs.AI]. Available from: https://arxiv.org/abs/1902.01876

Schlegel, U., Cakmak, E. and Keim, D. A. (2020). ModelSpeX: Model Specification Using Explainable Artificial Intelligence Methods. In: Archambault, D. et al. (eds.) Machine Learning Methods in Visualisation for Big Data 2020. Genf: The Eurographics Association, pp. 7-11. ISBN 978-3-03868-113-7. Available from: https://d-nb.info/1212364147/34

Application Market Discourse Analysis: On the Joint-Construction of Cloud Discourse and Its Mechanism

Erkang Fu (PGT, Centre for Interdisciplinary Methodologies)

Keywords: Critical Discourse Analysis

The discourse of cloud computing, according to Vincent Mosco, has been constructed and promoted via multiple channels, including commercial advertising, blog posts, social media campaigns, and promotional research. This analysis aims to append to this list a more subtle and seemingly neutral form: the textual description in the App Store as a quasi-advertisement. Methods from Critical Discourse Analysis (CDA) and Corpus Linguistics (CL) are applied to discover how cloud service providers construct the discourse of cloud services through ostensibly neutral descriptions that function as quasi-advertisements, portraying cloud services as ecologically friendly and secure for personal data, claims that have generally failed to hold. Through a focused examination of topics such as ecology and privacy, we found that, through repetitive occurrence and deliberate omission, a general discourse revealing a significantly biased ideology is constructed from the top down.
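
As a hedged illustration of the corpus-linguistic side of such a method, the Python sketch below counts keyword frequencies and simple collocates across a set of App Store description texts. The directory name, keyword list, and window size are illustrative assumptions, not details from the study.

```python
import re
from collections import Counter
from pathlib import Path

# Illustrative assumptions: description texts live as plain-text files
# under "descriptions/", and we track a small set of discourse keywords.
KEYWORDS = {"green", "sustainable", "secure", "privacy", "cloud"}
WINDOW = 4  # words of context on each side when collecting collocates


def tokenize(text):
    """Lowercase word tokens; a deliberately simple tokenizer."""
    return re.findall(r"[a-z']+", text.lower())


freq = Counter()
collocates = {kw: Counter() for kw in KEYWORDS}

for path in Path("descriptions").glob("*.txt"):
    tokens = tokenize(path.read_text(encoding="utf-8"))
    freq.update(tokens)
    for i, tok in enumerate(tokens):
        if tok in KEYWORDS:
            context = tokens[max(0, i - WINDOW):i] + tokens[i + 1:i + 1 + WINDOW]
            collocates[tok].update(context)

# Relative keyword frequencies (per 10,000 tokens) hint at repetitive
# occurrence; the absence of expected terms hints at deliberate omission.
total = sum(freq.values()) or 1
for kw in sorted(KEYWORDS):
    rate = round(10_000 * freq[kw] / total, 2)
    print(kw, rate, collocates[kw].most_common(5))
```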