
I:DNA - Evaluation

Background

I:DNA is a Wellcome Trust-funded project created to engage the public with research led by Professor Felicity Boardman, in conjunction with Dr Corinna Clark, at Warwick Medical School. The research explores the experiences of people living with inherited conditions and their attitudes towards genetic medicine.

This video provides an overview of the art installation produced to explore these themes. You can read more about the project as a whole here.

Evaluation

We embedded various means of evaluation throughout the development and implementation of I:DNA, including: feedback postcards, online feedback forms, email, in-person interviews, informal discussions at events, and ad hoc collection of visitor comments by the research assistant and the curators at Leamington Spa Art Gallery.

We discussed other modes of evaluation during the development phase, for example, a feedback wall on which visitors could write comments, or digital in-person feedback via tablets; however, venue and COVID-19 restrictions, as well as other practicalities (e.g., cost and transport), precluded these.

Whilst we made every effort to ensure that I:DNA could be extensively evaluated, we faced many barriers to the collection of evaluation data. For example, the COVID-19 pandemic prevented us from leaving feedback cards and pens at Leamington Spa Art Gallery & Museum. The only option available to us was to display a QR code near the installation, which linked to an online feedback form for audiences to complete on their own phones or devices. In practice, however, this option was rarely used by visitors. This may have been due to a range of factors: the lack of a suitable device, the inconvenience of having to go online, a reluctance to use mobile phones in the gallery setting, and/or the QR code not being displayed prominently enough (its location was determined by the Gallery).

As I:DNA was exhibited for nearly nine months at this venue, and was curated by gallery staff rather than our research assistant, we were reliant on these staff collecting and relaying any in-person comments and feedback. Due to staffing shortages at the Gallery, they were unable to collate and send this information until several months after the installation had been removed. As such, the context and site of an exhibition can have a significant impact on the capacity to evaluate engagement and impact.

In general, we found that having the installation curated by our research assistant, whose specific role was to engage with visitors, had a significant positive impact on the collection of evaluation data. Indeed, at venues where the research assistant was present, we received far more visitor feedback cards than at venues where curation was done by venue staff or the installation was unstaffed (see our paper for details and feedback numbers). Curation by research staff was, however, the most costly option, and the location and duration of the exhibition were significant factors in determining whether it was feasible.

In hindsight, non-curated events might have received higher levels of feedback if we had employed more engaging or creative methods of evaluation (e.g., a feedback wall), or methods that allowed immediate responses with little effort, such as letting visitors record spoken or written feedback on tablets fixed to pedestals next to the exhibit. For online events, real-time feedback polls at the end of each event might have garnered higher response rates, as post-event evaluations had poor uptake.

We have used the outcomes of our evaluations in a number of ways. Constant critical reflection within the team and with collaborators directly shaped plans for the rest of the tour. For example, we identified that young people and children were under-represented within our audiences, and so we devised an engagement activity specifically targeted at them. Evaluation data also informed the practical display of the installation, for example, the volume of the soundscape. This ensured that our evaluative processes fed into the creative process in an iterative way.

Following completion of the physical I:DNA tour, we compiled the various sources of evaluation and creative outputs (including children’s artworks and poetry) and conducted a thematic analysis for our publication. This analysis was informed by the three key areas of public engagement (PE) impact1: changing views, inspiring behaviour change, and supporting capacity for future PE activity. We found evidence of impact in all three areas, with, for example, audience members acknowledging that they had gained new knowledge and perspectives.

“I've never really thought about the ethics of screening before, it makes me wonder what will come next…what will we start screening for?” (Visitor, I:DNA, Oxford Science and Ideas Festival)

We also identified areas that we could not capture, such as long-term impact and the process by which audiences and participants internally translated the research into their own interpretations and their own artwork (the poetry and children’s art display).

Our experiences highlighted the trade-offs inherent in evaluating public engagement: between capturing the various types and measures of evaluation and impact, working within numerous practical (e.g., resource) and venue limitations, and considering the inclination of audience members to provide this information and their preferences for doing so.

1Ball et al. (2021). Arts-based approaches to public engagement with research: Lessons from a rapid review. Santa Monica, CA: RAND Corporation. www.rand.org/pubs/research_reports/RRA194-1.html