Section 3: Designing Remote Qualitative Studies: Methods and Technologies
Different methods of remote qualitative data collection can be implemented across a range of technologies. Decisions about which combinations of methods and technologies to offer participants require researchers to have a broad understanding of the different options available within the resources they have access to, as well as of the way communication technologies are typically used (if at all) by their participant population.
The reduced costs of remote methods (Thunberg and Arnell, 2022) may make research possible that would otherwise not have been feasible. However, priority needs to be given to the research question and to the needs and priorities of the participant population when deciding which method is appropriate for a study. The factors below may assist researchers as they consider these decisions:
Remote qualitative data collection methods can be used within a wide variety of research designs, including those with face-to-face components. There has been a rise in the use of ‘hybrid’ research designs: designs that bring together different configurations of remote and face-to-face data collection, facilitated by the use of both asynchronous and synchronous technologies and data types (audio, visual and text) (Horn & Casagrande, 2023).
Where remote longitudinal research designs are employed, researchers need to consider how they will keep participants engaged, particularly when there is no face-to-face data collection, and consider how they will embed rapport-building and maintenance across the research design (Weller, 2017).
The flexibility of remote methods, and the removal of the stress of travel, can mean increased comfort for the participant, and potentially for the researcher too. It has been suggested that this comfort can facilitate discussion of sensitive topic areas (Thunberg and Arnell, 2022; Alkhateeb, 2018; Sipes et al., 2019; Weller, 2017). The removal of travel time can also open up new windows of time for organising focus groups when several people need to be available at the same time (Gibson, 2017a; Keen et al., 2022). Similarly, remote methods of data collection can be easier to rearrange, or delay, than face-to-face data collection (where travel and a venue may have been booked in advance) (Deakin & Wakefield, 2013). It has been suggested that this flexibility can mean that participants are more likely to cancel, drop out, or simply not turn up for remote data collection events compared to face-to-face ones (Self, 2021). However, this same ease of rearrangement can be a benefit for research participants: when conducting research with people with disabilities or complex health needs, having clear and easy ways for them to rearrange, pause or withdraw from the research is particularly important, as noted by Budworth (2023):
When researching with participants who experience dynamic symptoms, fluctuating energy levels, and sudden changes in circumstances (i.e., unplanned hospital admissions and surgery), withdrawal rates can be understandably high.
(Budworth, 2023: 7)
Similarly, some may find it easier to take part in remote methods, e.g. if they have ‘caring responsibilities or commitments that can change at short notice’, if they do not have to ‘transfer through some kind of physical space’ to get there (saving time and cost), or if they can ‘weave it into their everyday life’ (source: consensus conference). Flexible research methods enable researchers to acknowledge and support this time rather than attempting to fit the participant into ‘normative time’ (Budworth, 2023).
The flexibility of remote methods also means that data collection can continue even if circumstances occur that would have otherwise resulted in cancellation (Budworth, 2023; Gibson, 2022). For example, busy healthcare professionals were able to take part in an ethnographic study using WhatsApp because of the great flexibility it offered (Humphries et al., 2022).
This interweaving of data collection into daily life and across longer periods of time comes with a range of considerations for both researcher and participant. The portability of phones can mean that participants are sometimes multitasking when participating in research, are in transit, or are in otherwise disruptive environments, which can limit their ability to engage. While it is important not to make neurotypical assumptions about how participants demonstrate attention, researchers have reported concerns when participants in remote data collection seem distracted. Hammond (2018) noted:
It was apparent during several online interviews that participants engaged in other activities. One man made cups of tea and changed from his work clothes into his casual clothes and another was chatting to others online at the same time…[…]…I was (naively) shocked that participants were doing other things whereas I was giving my full attention to the interview
(Hammond, 2018:7)
Indeed, there are examples in the literature of participants taking part in research whilst driving a car (Oliffe et al., 2021) and even a tractor (Epp et al., 2022). This multi-tasking can have both benefits and drawbacks: on the one hand, the great flexibility of remote methods supports inclusivity; on the other, distractions from the environment can pose significant challenges to engagement (Rahman et al., 2021) and risks to confidentiality, not to mention being potentially dangerous. While it may not be appropriate for researchers to pre-determine on behalf of their participants which contexts are appropriate for participation, where there are concerns around safety and/or illegality (e.g. a participant sending instant messages whilst driving), the researcher should suggest re-commencing data collection at another time and/or immediately halt the interaction.
As well as participants, researchers too may embrace the flexibility of remote methods to conduct research in unusual contexts, as noted by Gibson (2020):
On one occasion, I even completed an interview while sitting at the bedside of a family member who was sick in hospital. I was able to move around during the interview with pauses in interaction allowing me to make a cup of tea or occasionally return an urgent email. From some of the short silences between messages, I suspect that participants may have similarly been multi-tasking as is the norm for this generation
(Gibson, 2020: 617-618)
Multi-tasking is less likely to happen face-to-face, where the boundaries of the research are clearer (Parkin et al., 2021; Lathen and Laestadius, 2021) and the everyday norms of communication are different.
The researcher may find that the flexibility of remote methods, whilst generating new opportunities, can also make it harder for them to ‘contain’ their fieldwork and establish healthy work/life boundaries (Silverio et al., 2022). Indeed, the relative ease and speed of organising synchronous data collection events can lead to the temptation, particularly in time-constrained projects, to arrange them in quick succession or even back-to-back. With asynchronous remote methods, the researcher may feel that they are always ‘on call’ and obligated to respond to participants as soon as their responses are received, even if these fall outside usual working hours. This was highlighted in a remote ethnography conducted by Humphries et al. (2022) with health care professionals, the majority of whom responded to the researcher during their night shifts. As such, the ‘mental load’ of conducting research this way, and the infiltration of data collection into everyday life, needs to be considered at the outset, with supportive measures put in place by Principal Investigators.
Remote data collection methods, under our definition, occur in various ways: they span synchronous, ‘near-synchronous’ and asynchronous interactions, i.e. they involve the navigation of a temporal dimension. This often involves some degree of flexibility around when and how data might be collected, and this may change over the course of a single data collection ‘event’. For example, a participant in a synchronous online interview may want to give some thought to a particular question, which could result in a follow-up email, or an email interview intended to be asynchronous may end up being near-synchronous due to rapid-fire responses. Although some studies are designed for immediate responses, where possible it can be helpful to give participants choice over timing, i.e. how and when to participate, as well as mode of interaction (Salmons, 2011; consensus conference).
Email interviews, for example, are amongst the most flexible types of interview, and open up the research to participants across the globe. Offering email interviews and other asynchronous methods also allows people with busy lives and complex schedules to participate in research, e.g. adults with caring responsibilities and those with multiple jobs (Gibson, 2017a; Irani, 2019; Flynn et al., 2018). They are also less time-pressured for researchers and participants (Gibson, 2017a). However, given that email interviews often take place over a longer period of time than synchronous methods, they are not well suited to research in evolving or rapidly changing scenarios. They can also be tricky to maintain over long periods, and participants are more likely to end, or drift out of, the data collection early (source: consensus conference). Instant messaging interviews can be a versatile alternative to email interviewing, and can be used in asynchronous, near-synchronous or synchronous ways to suit participant and project needs (source: consensus conference).
It is important to remember that all asynchronous methods give participants time for reflection and editing, which can lead to more polished accounts of their experiences (source: interview with researcher; Cook, 2012), whereas self-editing will be more apparent in audio data, for example as people can be heard re-articulating ideas (source: interview with researcher). The opposite argument has also been made: busy participants may ‘write the first thing that comes into [their] head’, leading to less thoughtful reflections (source: consensus conference). The significance of these factors for analysis may depend on the underlying epistemology, and on whether interview data are seen in a realist frame or as a co-constructed narrative between researcher and participant (and technology) (source: interview with researcher).
Focus groups and group interviews can also be conducted asynchronously (e.g. by WhatsApp/Facebook Messenger). Facilitating asynchronous remote focus groups, and ensuring everyone has a chance to speak, can be easier than in face-to-face focus groups. For example, WhatsApp allows overlapping threads, so the links between the various contributions can be tracked, and these threads can be used to prompt non-dominant participants to express themselves and respond to what is said by more dominant participants. However, whilst supporting inclusivity, remote focus groups can make it harder for the researcher to sustain participant engagement, particularly for participants who are multi-tasking (Chen and Neo, 2019; Lathen and Laestadius, 2021; Woodward et al., 2020). Text-based methods will also not be equally accessible to all. Neo et al. (2022), for example, found that typing speed could set the ‘pace’ of focus groups and had an impact on how participants could engage, in the context of both synchronous and asynchronous data collection. When synchronous, participants could get ‘left behind’ the conversation; when asynchronous, they could be put off by high volumes of messages on a thread to read before contributing.
In order to conduct a remote qualitative study, researchers need to decide which technology, or technologies, they will use to collect data. It is important that researchers adopt technologies that are best suited to their particular project and research question/s, but that also accommodate the needs and preferences of (would-be) participants. Whilst technologies will continue to evolve over time, and researchers, institutions and ethics committees will need to continually adapt to their changing capabilities and ethical complexities, the principles which guide their selection remain broadly the same:
- There is evidence that using participants’ preferred technology and/or software/applications to gather data supports inclusivity and participation rates and produces higher quality data (Enoch et al., 2023). Indeed, familiarity with the medium, and its pre-existing integration in a participant’s life, have been identified as important factors in determining uptake of research invitations (particularly amongst older adults and underserved populations) (Sedgwick and Spiers, 2009; Ward et al., 2015). It has also been shown to support the development of rapport and enhance the participant’s experience of being involved in the study (Humphries et al., 2022; Sedgwick and Spiers, 2009). However, these accommodations of participants’ preferences and abilities (and their facilitation of good quality data collection) have to be weighed against the requirements of institutions (Poliandri et al., 2023), funding bodies, ethics committees and relevant legislation (e.g. General Data Protection Regulation, 2016). Data security is a key responsibility of researchers, and whilst participants may already use a particular technology within their own lives, when it is repurposed for research the attendant responsibilities and governance structures that accompany its use need to be carefully considered. However, this must be balanced against the imperative of social justice, to ensure that the outputs and benefits of health and social care research are more evenly distributed across social groups.
- Accessibility is a key consideration. It is important that researchers consult with advocacy groups, charities and community groups, as well as with would-be research participants directly, to better understand and support their inclusion needs (Budworth, 2023; Waterhouse et al., 2022). The use of accessibility consultants and charities (e.g. AbilityNet, W3C) may also be an option, particularly for the remote recruitment of participants who are likely to have a range of access needs. AbilityNet and W3C have a wide range of free resources which outline solutions for access issues in the digital world. Researchers should allow sufficient time and funds to support the assessment and balancing of accessibility needs. Some of the concerns around remote data collection, such as ‘Zoom fatigue’, may be heightened for people with disabilities; for example, neurodivergent people will often ‘mask’ their symptoms in social situations, such as data collection, which causes fatigue, so pre-scheduled or participant-directed breaks, or the ability to switch to alternative technologies, may be necessary (Yuruki & Inoue, 2023).
- Technology Deprivation. Access to technologies is restricted for certain social groups, and this needs to be accounted for within research designs. Underserved populations are more likely to live in ‘digital poverty’, which means they may not have email addresses to which study documentation can be sent, or devices capable of accessing the necessary apps (such as WhatsApp) (source: consensus conference; Digital Poverty Alliance, 2022). Conversely, higher education levels and employment outside the home have both been associated with greater access to email and technologies (Taylor, 2007). The demographic features of users of technologies will therefore significantly impact the data produced, and these demographics will likely change over time along with changing technologies. Researchers need to keep up-to-date with patterns of technology use, digital skills and access, and be mindful of whose voices are excluded by technology choices.
- Ideally, participation should not rely on participants downloading new software/apps. Requiring this can exclude participants who do not have the necessary resources (e.g. data plan/storage space/operating system), or the necessary skills, for participation. It has been suggested that this can be addressed by researchers providing IT support and equipment prior to data collection, e.g. phones, signal boosters, data credit vouchers etc. (for examples of studies where this support was provided see: Banbury et al., 2020; Carter et al., 2021b; Dayha et al., 2023). ‘Trying out’ the technology with each participant prior to data collection can help build rapport as well as solving any technical issues (Thunberg and Arnell, 2022). This would need to be planned into the research process in advance.
- Researchers need to be up-to-date with the communication technologies in use among their intended participant population group (source: consensus conference; Humphries et al., 2022), as well as the norms of communication/etiquette typically used on that platform. It is also important not to homogenise participant groups by assuming that everyone from that group will want, or be able to use, the same remote method.
- Researchers should carefully consider the unique features of the technologies they want to use. The ‘chat’ function of videoconferencing platforms, for example, can be useful for asking questions without disrupting flow within fast-moving focus groups. Giving participants access to these non-threatening spaces to ask clarifying questions or make their contribution is an important aspect of inclusivity (Chen and Neo, 2019). With videoconferencing platforms, the option for different forms of communication to occur simultaneously (audio/video/chat/reaction emoji) allows data to be captured that might otherwise be lost because of social norms about turn taking (source: interview with researcher). Similarly, the overlapping threads of WhatsApp and the ability to see when a message has been read (even if not responded to) can be useful in asynchronous data collection (Humphries et al., 2022), and the timed disappearance of Snapchat messages can make use of the application for research participation less threatening. Indeed, the capacity for privacy may be a more important mediating factor for participation and disclosure than the remote technology per se (source: consensus conference). New software is also emerging that has been specifically designed to gather remote qualitative data, such as itracks (which offers new features, such as a ‘back room’ for unobtrusive observation of focus groups, or completely anonymous text-based focus groups), Qualmeeting and Discuss. However, research participants are highly unlikely to be familiar with research-focused platforms and may be wary of using them. Researchers wanting to use research-focused platforms will need to invest time in assisting participants to use them, which may need to be face-to-face, depending on the participant group (source: consensus conference).
Researchers need to consider at the design stage the format, quantity and depth of qualitative data that a particular technology is capable of facilitating, and how this will impact methods of analysis.
Data produced through the use of remote methods can be similar to that generated face-to-face, but there are also features that are unique to remote methods. Written text (e.g. from email or text message data collection) can be very different to verbatim text in terms of its cohesion and readability. It may also include typos, emojis, particular uses of grammar to facilitate understanding (e.g. multiple exclamation or question marks) and different stylistics (font, underlining, italics) absent in verbatim text. Some remote spaces (e.g. chat rooms) and social groups (e.g. young people) have their own community etiquette, vernacular and communication norms (e.g. acronyms or ‘textese’/ideograms/memes) that the researcher may need to become acclimatised to in order to make sense of the data (source: consensus conference). Researchers may need to seek clarification during data collection (Hammond, 2018), as well as consider whether, and indeed how, they will incorporate these language features into their analyses. Approaches are emerging that explore the intersection of emojis with language in the creation of meaning (Logi and Zappavigna, 2021; Halverson et al., 2023; Westbrook, 2023), and qualitative software such as MAXQDA can accommodate emojis in analysis. Research has pointed to their use as a way of compensating for a lack of visual cues in this type of communication, as a means of reducing ambiguity in interpretation (Halverson et al., 2023) and as a way of enhancing cross-cultural communication (Alshenqeeti, 2016); however, the evidence is conflicting and may be highly dependent on the social background of the participant(s) (Kimura-Thollander and Kumar, 2019; Bresciani and Eppler, 2015).
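To make the point about emojis concrete, the sketch below (not from the guidance) shows one way a researcher might normalise emojis into searchable text labels before importing messages into analysis software. It is a minimal illustration only, assuming Python with the third-party `emoji` package; the example messages are invented.

```python
import re
from collections import Counter

import emoji  # third-party package: pip install emoji

# Invented example messages, for illustration only.
messages = [
    "Honestly the clinic visit was exhausting 😩😩",
    "Same here, but the nurse was lovely 👍",
]

normalised = []
counts = Counter()
for text in messages:
    # Replace each emoji with a readable label, e.g. 😩 -> <emoji:weary_face>,
    # so that it survives as analysable text in transcripts and coding software.
    coded = emoji.demojize(text, delimiters=("<emoji:", ">"))
    normalised.append(coded)
    counts.update(re.findall(r"<emoji:([^>]+)>", coded))

print(normalised)
print(counts.most_common())  # e.g. [('weary_face', 2), ('thumbs_up', 1)]
```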
Researchers may also need to consider whether, and how, any fieldnotes will be incorporated into the analysis. These field notes may include reflections on the establishment of rapport, whether or not there were any other (non-participant) people present during data collection, descriptions of the physical environment the participant is in (video conferencing) and any evidence of distractions (e.g. participants scrolling online during data collection, noises in the background, doorbell). Whilst these factors may assist in the interpretation of the resulting data, whether or not they will be used in this way needs to be clear during the consent process, particularly if the taking of field notes is not visible to the participant.
As well as introducing new forms of data, remote data collection can also bring challenges to the overall coherence of data. Text-based asynchronous focus groups, for example, can sometimes involve large gaps between responses (due to participant availability or access to network coverage), which can make discussion threads hard to follow, especially as some apps (e.g. WhatsApp) do not include the links between threads (i.e. which previous message a participant is responding to) when the chat is exported for analysis (Singer et al., 2023). In longitudinal research, the technologies used to collect data at different timepoints may change, or switch between face-to-face and remote (e.g. Weller, 2017), and this needs to be considered at the interpretive stage of the analysis.
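As a rough illustration of this export issue, the sketch below (not from the guidance) parses a WhatsApp chat exported as plain text into a simple message table. The regex assumes a typical Android/English-locale export format, which in practice varies by device, app version and locale, and the file names are hypothetical; note that reply links between messages are not present in the export at all, so any threading has to be reconstructed by the researcher.

```python
import csv
import re

# Typical Android/English-locale export line: "12/05/2023, 14:32 - Alice: message text"
# (this format is an assumption and varies; check it against the actual export file).
LINE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2,4}), (\d{1,2}:\d{2}) - ([^:]+): (.*)$")

def parse_chat(path):
    """Return a list of {date, time, sender, text} dicts, joining wrapped lines."""
    messages = []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.rstrip("\n")
            match = LINE.match(line)
            if match:
                date, time, sender, text = match.groups()
                messages.append({"date": date, "time": time, "sender": sender, "text": text})
            elif messages:
                # Continuation of a multi-line message: append to the previous entry.
                messages[-1]["text"] += "\n" + line
    return messages

if __name__ == "__main__":
    msgs = parse_chat("focus_group_export.txt")  # hypothetical file name
    with open("focus_group_messages.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.DictWriter(out, fieldnames=["date", "time", "sender", "text"])
        writer.writeheader()
        writer.writerows(msgs)
```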
In addition, it is possible that participants will use the technology in ways other than the researcher intended (Singer et al., 2023). Use of ‘disappearing messages’ (i.e. messages that disappear automatically once they have been read), or of the ‘voice note’ function in the context of a text-based interview or focus group, can mean that the researcher receives data in formats other than they had anticipated, which can impact their analytic approach given the significant differences between verbatim and written text. Furthermore, some languages have marked differences between written and colloquial formats, as well as local dialectal differences. This can result in very different data depending on whether verbal or text-based technologies are used and on the geographical regions across which data are gathered (Douedari et al., 2021). Finally, researchers should consider the quantity and depth of data required to undertake their approach to analysis, as well as the timeframe for generating it. Text-based asynchronous methods, such as email, can be challenging for a researcher wishing to undertake a grounded theory approach (whereby sampling is informed by emerging analyses), due to the timeframes involved in generating insightful data. However, as noted by Fritz and Vandermause (2018), the lengthy nature of email interviews can also mean that high quality data emerge as participants have scope to carefully craft their accounts. There are, however, instances where generating a large quantity of data per participant is inconsistent with the study’s aims and design, or where ‘polished’ accounts mask complexity and nuance. In mixed methods research, too, shorter, focused answers from larger numbers of participants can aid data transformation and/or integration with quantitative data (Griffiths et al., 2014; Boardman et al., 2011).
Overall, there is a need to explore the needs and preferences of the social group being studied, the context of the research, the impacts of intersectionality and also the researcher’s own positionality when considering which technology to use in remote qualitative data collection.
Offering a range of methods for participants to choose from (including both face-to-face options as well as remote as far as is possible) represents a political commitment to the empowerment of participants, particularly those who are most sensitive to power differentials due to legacies of social and political oppression (Budworth, 2023; Jackson et al., 2023; Ślęzak, 2023). Whilst methodological choices will ultimately be shaped by the research question(s), resources, ethical considerations, institutional and legal regulations and pragmatic considerations, such adaptive hybrid research designs are the most inclusive and can flex to meet participant and researcher needs (Mirick and Wladkowski, 2019).
- How much do you know about how your potential participants communicate digitally? How does this vary across your population of interest? How could you engage with them to find out?
- What is the likely pattern of life for your potential participants – when will they be able to make time for responding to you?
- How important is synchronicity within your research design?
- How/can technologies, platforms and applications be used to support inclusivity for your particular research design?
- What is your optimal balance between the benefits of offering choice of digital modality to participants and the challenges of analysing data collected using a range of modalities?
How to cite the guidance
Boardman, F., Roberts, J., Clark, C., Onuegbu, C., Harris, B., Seers, K., Staniszewska, S., Aktas, P., Griffiths, F. 2024. Qualitative Remote Data Collection Guidance. Coventry: University of Warwick Press. Available from here: https://doi.org/10.31273/9781911675174