Sample Size Policy for Qualitative Studies Using In-Depth Interviews

  • Published: 12 September 2012
  • Volume 41, pages 1319–1320 (2012)


  • Shari L. Dworkin


In recent years, there has been an increase in submissions to the Journal that draw on qualitative research methods. This increase is welcome and indicates not only the interdisciplinarity embraced by the Journal (Zucker, 2002) but also its commitment to a wide array of methodologies.

Among those who do select qualitative methods, and who use grounded theory and in-depth interviews in particular, there have recently been many questions about how to write a rigorous Method section. That topic will be addressed in a subsequent Editorial. At this time, however, the most common question we receive is: “How large does my sample size have to be?” I would therefore like to take this opportunity to answer it by discussing the relevant debates and then the policy of the Archives of Sexual Behavior.

The sample size used in qualitative research methods is often smaller than that used in quantitative research methods. This is because qualitative research methods are often concerned with garnering an in-depth understanding of a phenomenon or are focused on meaning (and heterogeneities in meaning)—which are often centered on the how and why of a particular issue, process, situation, subculture, scene or set of social interactions. In-depth interview work is not as concerned with making generalizations to a larger population of interest and does not tend to rely on hypothesis testing but rather is more inductive and emergent in its process. As such, the aim of grounded theory and in-depth interviews is to create “categories from the data and then to analyze relationships between categories” while attending to how the “lived experience” of research participants can be understood (Charmaz, 1990, p. 1162).

There are several debates concerning what sample size is the right size for such endeavors. Most scholars argue that the concept of saturation is the most important factor to think about when mulling over sample size decisions in qualitative research (Mason, 2010). Saturation is defined by many as the point at which the data collection process no longer offers any new or relevant data. Another way to state this is that conceptual categories in a research project can be considered saturated “when gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories” (Charmaz, 2006, p. 113). Saturation depends on many factors, and not all of them are under the researcher’s control. Some of these include: How homogeneous or heterogeneous is the population being studied? What are the selection criteria? How much money is in the budget to carry out the study? Are there key stratifiers (e.g., conceptual, demographic) that are critical for an in-depth understanding of the topic being examined? What is the timeline that the researcher faces? How experienced is the researcher in being able to determine when she or he has actually reached saturation (Charmaz, 2006)? Is the author carrying out theoretical sampling and is, therefore, concerned with ensuring depth on relevant concepts and examining a range of concepts and characteristics that are deemed critical for emergent findings (Glaser & Strauss, 1967; Strauss & Corbin, 1994, 2007)?

While some experts in qualitative research avoid the topic of “how many” interviews “are enough,” there is indeed variability in what is suggested as a minimum. An extremely large number of articles, book chapters, and books offer guidance, suggesting anywhere from 5 to 50 participants as adequate. All of these pieces of work engage in nuanced debates when responding to the question of “how many” and frequently respond with a vague (and, actually, reasonable) “it depends.” Numerous factors are said to be important, including “the quality of data, the scope of the study, the nature of the topic, the amount of useful information obtained from each participant, the use of shadowed data, and the qualitative method and study design used” (Morse, 2000, p. 1). Others argue that the “how many” question can be the wrong question and that the rigor of the method “depends upon developing the range of relevant conceptual categories, saturating (filling, supporting, and providing repeated evidence for) those categories,” and fully explaining the data (Charmaz, 1990). Indeed, there have been countless conferences and conference sessions on these debates, reports written, and myriad publications on the topic (for a compilation of the debates, see Baker & Edwards, 2012).

Taking all of these perspectives into account, the Archives of Sexual Behavior is putting forward a policy so that authors have more clarity about what is expected in terms of sample size for studies drawing on grounded theory and in-depth interviews. The policy of the Archives of Sexual Behavior will be that it adheres to the recommendation that 25–30 participants is the minimum sample size required to reach saturation and redundancy in grounded theory studies that use in-depth interviews. This number is considered adequate for publication because it (1) may allow for thorough examination of the characteristics that address the research questions and for distinguishing conceptual categories of interest, (2) maximizes the possibility that enough data have been collected to clarify relationships between conceptual categories and identify variation in processes, and (3) maximizes the chances that negative cases and hypothetical negative cases have been explored in the data (Charmaz, 2006; Morse, 1994, 1995).

The Journal does not want to paradoxically and rigidly quantify sample size when the endeavor at hand is qualitative in nature and the debates on this matter are complex. However, we are providing this practical guidance. We want to ensure that more of our submissions have an adequate sample size so as to get closer to reaching the goal of saturation and redundancy across relevant characteristics and concepts. The current recommendation that is being put forward does not include any comment on other qualitative methodologies, such as content and textual analysis, participant observation, focus groups, case studies, clinical cases or mixed quantitative–qualitative methods. The current recommendation also does not apply to phenomenological studies or life history approaches. The current guidance is intended to offer one clear and consistent standard for research projects that use grounded theory and draw on in-depth interviews.

Editor’s note: Dr. Dworkin is an Associate Editor of the Journal and is responsible for qualitative submissions.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods. Available at: http://eprints.ncrm.ac.uk/2273/.

Charmaz, K. (1990). ‘Discovering’ chronic illness: Using grounded theory. Social Science and Medicine, 30, 1161–1172.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London: Sage Publications.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Co.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research, 11(3), Article 8.

Morse, J. M. (1994). Designing funded qualitative research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 220–235). Thousand Oaks, CA: Sage Publications.

Morse, J. M. (1995). The significance of saturation. Qualitative Health Research, 5, 147–149.

Morse, J. M. (2000). Determining sample size. Qualitative Health Research, 10, 3–5.

Strauss, A. L., & Corbin, J. M. (1994). Grounded theory methodology. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Thousand Oaks, CA: Sage Publications.

Strauss, A. L., & Corbin, J. M. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage Publications.

Zucker, K. J. (2002). From the Editor’s desk: Receiving the torch in the era of sexology’s renaissance. Archives of Sexual Behavior, 31, 1–6.


Author information

Authors and Affiliations

Department of Social and Behavioral Sciences, University of California at San Francisco, 3333 California St., LHTS #455, San Francisco, CA, 94118, USA

Shari L. Dworkin


Corresponding author

Correspondence to Shari L. Dworkin .


Dworkin, S. L. Sample Size Policy for Qualitative Studies Using In-Depth Interviews. Arch Sex Behav 41, 1319–1320 (2012). https://doi.org/10.1007/s10508-012-0016-6

Published: 12 September 2012

Issue Date: December 2012

DOI: https://doi.org/10.1007/s10508-012-0016-6


How many participants do I need for qualitative research?


David Renwick


For those new to the qualitative research space, there’s one question that’s usually pretty tough to figure out, and that’s the question of how many participants to include in a study. Regardless of whether it’s research as part of the discovery phase for a new product, or perhaps an in-depth canvas of the users of an existing service, researchers can often find it difficult to agree on the numbers. So is there an easy answer? Let’s find out.

Here, we’ll look into the right number of participants for qualitative research studies. If you want to know about participants for quantitative research, read Nielsen Norman Group’s article.

Getting the numbers right

So you need to run a series of user interviews or usability tests and aren’t sure exactly how many people you should reach out to. It can be a tricky situation – especially for those without much experience. Do you test a small selection of 1 or 2 people to make the recruitment process easier? Or, do you go big and test with a series of 10 people over the course of a month? The answer lies somewhere in between.

It’s often a good idea (for qualitative research methods like interviews and usability tests) to start with 5 participants and then scale up by a further 5 based on how complicated the subject matter is. You may also find it helpful to add additional participants if you’re new to user research or you’re working in a new area.

What you’re actually looking for here is what’s known as saturation.

Understanding saturation

Whether it’s qualitative research as part of a master’s thesis or as research for a new online dating app, saturation is the best metric you can use to identify when you’ve hit the right number of participants.

In a nutshell, saturation is when you’ve reached the point where adding further participants doesn’t give you any further insights. It’s true that you may still pick up on the occasional interesting detail, but all of your big revelations and learnings have come and gone. A good measure is to sit down after each session with a participant and analyze the number of new insights you’ve noted down.
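
If you keep structured notes, one lightweight way to run that post-session check is to tag each session with the insights (or codes) it produced and count how many are genuinely new. The snippet below is only an illustration of that bookkeeping, with made-up participants and codes; it isn't a tool described in this article:

```python
# Illustrative bookkeeping only: map each session to the insight "codes" you noted.
sessions = {
    "P1": {"pricing confusion", "trust in reviews", "slow checkout"},
    "P2": {"pricing confusion", "search filters ignored"},
    "P3": {"slow checkout", "trust in reviews"},
    "P4": {"pricing confusion"},
    "P5": {"slow checkout"},
}

seen: set[str] = set()
for participant, insights in sessions.items():
    new = insights - seen          # insights not raised in any earlier session
    seen |= insights
    print(f"{participant}: {len(new)} new insight(s) {sorted(new)}")

# Several consecutive sessions adding zero new insights is a practical
# signal that you are approaching saturation for this topic and sample.
```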

Interestingly, in a paper titled How Many Interviews Are Enough? , authors Greg Guest, Arwen Bunce and Laura Johnson noted that saturation usually occurs with around 12 participants in homogeneous groups (meaning people in the same role at an organization, for example). However, carrying out ethnographic research on a larger domain with a diverse set of participants will almost certainly require a larger sample.

Ensuring you’ve hit the right number of participants

How do you know when you’ve reached saturation point? You have to keep conducting interviews or usability tests until you’re no longer uncovering new insights or concepts.

While this may seem to run counter to the idea of just gathering as much data from as many people as possible, there’s a strong case for focusing on a smaller group of participants. In The logic of small samples in interview-based qualitative research, authors Mira Crouch and Heather McKenzie note that using fewer than 20 participants during a qualitative research study will result in better data. Why? With a smaller group, it’s easier for you (the researcher) to build strong, close relationships with your participants, which in turn leads to more natural conversations and better data.

There’s also a school of thought that you should interview 5 or so people per persona. For example, if you’re working in a company that has well-defined personas, you might want to use those as a basis for your study, and then interview 5 people based on each persona. This may be worth considering, and is particularly important, when you have a product with very distinct user groups (e.g., students and staff, or teachers and parents).

How your domain affects sample size

The scope of the topic you’re researching will change the amount of information you’ll need to gather before you’ve hit the saturation point. Your topic is also commonly referred to as the domain.

If you’re working in quite a confined domain, for example, a single screen of a mobile app or a very specific scenario, you’ll likely find interviews with 5 participants to be perfectly fine. Moving into more complicated domains, like the entire checkout process for an online shopping app, will push up your sample size.

As Mitchel Seaman notes: “Exploring a big issue like young peoples’ opinions about healthcare coverage, a broad emotional issue like postmarital sexuality, or a poorly-understood domain for your team like mobile device use in another country can drastically increase the number of interviews you’ll want to conduct.”

In-person or remote

Does the location of your participants change the number you need for qualitative user research? Well, not really – but there are other factors to consider.

  • Budget: If you choose to conduct remote interviews/usability tests, you’ll likely find you’ve got lower costs as you won’t need to travel to your participants or have them travel to you. This also affects…
  • Participant access: Remote qualitative research can be a lifesaver when it comes to participant access. No longer are you confined to the people you have physical access to — instead you can reach out to anyone you’d like.
  • Quality: On the other hand, remote research does have its downsides. For one, you’ll likely find you’re not able to build the same kinds of relationships over the internet or phone as those in person, which in turn means you never quite get the same level of insights.

Is there value in outsourcing recruitment?

Recruitment is understandably an intensive logistical exercise with many moving parts. If you’ve ever had to recruit people for a study before, you’ll understand the need for long lead times (to ensure you have enough participants for the project) and the countless long email chains as you discuss suitable times.

Outsourcing your participant recruitment is just one way to lighten the logistical load during your research. Instead of having to go out and look for participants, you have them essentially delivered to you in the right number and with the right attributes.

We’ve got one such service at Optimal Workshop, which means it’s the perfect accompaniment if you’re also using our platform of UX tools. Read more about that here .

So that’s really most of what there is to know about participant recruitment in a qualitative research context. As we said at the start, while it can appear quite tricky to figure out exactly how many people you need to recruit, it’s actually not all that difficult in reality.

Overall, the number of participants you need for your qualitative research can depend on your project among other factors. It’s important to keep saturation in mind, as well as the locale of participants. You also need to get the most you can out of what’s available to you. Remember: Some research is better than none!



Published on August 8, 2019


David Renwick

David is Optimal Workshop's Content Strategist and Editor of CRUX. You can usually find him alongside one of the office dogs 🐕 (Bella, Bowie, Frida, Tana or Steezy). Connect with him on LinkedIn.




Published: 05 October 2018

Interviews and focus groups in qualitative research: an update for the digital age

  • P. Gill
  • J. Baillie

British Dental Journal volume 225, pages 668–672 (2018)


Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.

Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.

Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.


Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four paper series continue to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al . investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include, exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involves the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium as it enables accurate transcription; the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is the inability of the researcher and participant to see each other. This can be resolved by using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than offering help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also helps to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1. The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open ended questions that elicit in-depth information, and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend the questions if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how they are conducted, a number of subtle nuances are associated with the online medium. 32 Some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are: how many participants are required; should participants within each group know each other (or not) and how many focus groups are needed within a single study? These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation required 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers), but a group can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers; a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and use, storage and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can be expected to be listened to and their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Avoid, therefore, participating, leading, expressing personal opinions or correcting participants' knowledge 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinions are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses accordingly, and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as the discussion may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect is accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may therefore need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via online medium is often more challenging than via face-to-face and may therefore require additional time, skills, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. For example, the schedule will usually be informed by the review of the literature and study aims, and will merely provide a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder, should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. The contributions of qualitative research to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. However, digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but also potentially offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

Gussy M, Dickson-Swift V, Adams J . A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11 : 174–179.


Burnard P, Gill P, Stewart K, Treasure E, Chadwick B . Analysing and presenting qualitative data. Br Dent J 2008; 204 : 429–432.

Gill P, Stewart K, Treasure E, Chadwick B . Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204 : 291–295.

Gill P, Stewart K, Treasure E, Chadwick B . Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204 : 371–374.

Stewart K, Gill P, Chadwick B, Treasure E . Qualitative research in dentistry. Br Dent J 2008; 204 : 235–239.

Masood M, Thaliath E, Bower E, Newton J . An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39 : 193–203.

Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28 : 62–68.

Macfarlane S, Bucknall T . Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . 7th edition. pp. 71–86. Oxford: Wiley Blackwell; 2015.


Lee R, Fielding N, Blank G . Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 3–16. London: Sage Publications; 2016.

Creswell J . Qualitative inquiry and research design: Choosing among five designs . Thousand Oaks, CA: Sage, 1998.

Guest G, Namey E, Mitchell M . Qualitative research: Defining and designing In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual For Applied Research . pp. 1–40. London: Sage Publications, 2013.


Pope C, Mays N . Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311 : 42–45.

Giddings L, Grant B . A Trojan Horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30 : 52–60.

Hammersley M, Atkinson P . Ethnography: Principles in Practice . London: Routledge, 1995.

Oltmann S . Qualitative interviews: A methodological discussion of the interviewer and respondent contexts Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. 2016; 17 : Art. 15.

Patton M . Qualitative Research and Evaluation Methods . Thousand Oaks, CA: Sage, 2002.

Wang M, Vinall-Collier K, Csikar J, Douglas G . A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66 : 45–51.

Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P . Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30 : 113–118.

Gallagher J, Clarke W, Wilson N . Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12 : 89–98.

Tod A . Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing . Oxford: Blackwell Publishing, 2006.

Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kipatrick N . A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214 : 10.1038/sj.bdj.2012.1178.

Farmer J, Peressini S, Lawrence H . Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; 10.1111/idh.12276.

McElhinney E, Cheater F, Kidd L . Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70 : 1267–1275.

Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

Baillie J, Gill P, Courtenay P . Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; 10.1111/jan.13400.

Kvale S . Interviews . Thousand Oaks (CA): Sage, 1996.

Spradley J . The Ethnographic Interview . New York: Holt, Rinehart and Winston, 1979.

Goodman C, Evans C . Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . pp. 401–412. Oxford: Wiley Blackwell, 2015.

Shaha M, Wenzell J, Hill E . Planning and conducting focus group research with nurses. Nurse Res 2011; 18 : 77–87.

Wang G, Gao X, Edward C . Public perception of dental implants: a qualitative study. J Dent 2015; 43 : 798–805.

Bailey E . Contemporary views of dental practitioners' on patient safety. Br Dent J 2015; 219 : 535–540.

Abrams K, Gaiser T . Online Focus Groups. In Field N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 435–450. London: Sage Publications, 2016.

Poynter R . The Handbook of Online and Social Media Research . West Sussex: John Wiley & Sons, 2010.

Kevern J, Webb C . Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21 : 323–333.

Kitzinger J, Barbour R . Introduction: The Challenge and Promise of Focus Groups. In Barbour R S K J (editor) Developing Focus Group Research . pp. 1–20. London: Sage Publications, 1999.

Krueger R, Casey M . Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, California: SAGE; 2009.


Author information

Authors and Affiliations

Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University,

Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University,


Corresponding author

Correspondence to P. Gill .



Gill, P., Baillie, J. Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 225, 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815

Accepted: 02 July 2018

Published: 05 October 2018

Issue Date: 12 October 2018

DOI: https://doi.org/10.1038/sj.bdj.2018.815





Nielsen Norman Group

Why 5 participants are okay in a qualitative study, but not in a quantitative one.


July 11, 2021


In our quantitative-usability classes (Measuring UX and ROI and Statistics for UX) we often recommend a sizeable number of participants for quantitative studies — usually more than 30. We’ve said again and again that metrics collected in qualitative usability testing are often misleading and do not generalize to the general population. (There could be exceptions, but you always need to check by calculating confidence intervals and statistical significance.) And, almost inevitably, the retort comes back — Didn’t Jakob Nielsen recommend 5 users for usability studies? If you need more users for statistical reasons, then it certainly means that the results obtained with 5 users aren’t valid, doesn’t it?

This question comes up so frequently that we need to address the misunderstanding.

In This Article:

  • Quantitative usability studies: more than 5 participants
  • Qualitative usability studies: assumptions behind the 5-user guideline
  • Questioning the assumptions behind the 5-user guideline

Quantitative usability studies are usually summative in nature: their goal is to measure the usability of a system (site, application, or some other product), arriving at one or more numbers. These studies attempt to get a sense of how good an interface is for its users by looking at a variety of metrics: how many users from the general population can complete one or more top tasks, how long it takes them, how many errors they make, and how satisfied they are with their experience. They usually involve collecting values for each participant, aggregating those values in summary statistics such as averages or success rates, calculating confidence intervals for those aggregates, and reporting likely ranges for the true score for the whole population. The results of such a study may indicate that the success rate for a top task for the whole population is somewhere between 75% and 90%, with a 95% confidence level and that the task time is between 2.3 and 2.6 minutes. These ranges (in effect, confidence intervals) should be fairly narrow to convey any interesting information (knowing that a success rate is between 5% and 95% is not very helpful, is it?), and they usually are narrow only if you include a large number of participants (40 or more).  Hence, the recommendation to calculate confidence intervals for all metrics collected and not to rely on summary statistics when studies contain just a few users.
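To make the contrast concrete, here is a minimal Python sketch of the kind of interval calculation a quantitative study relies on, using the adjusted-Wald approximation for a success rate. The participant counts are hypothetical, not from any real study.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a binomial success rate,
    using the adjusted-Wald method (add z^2/2 successes and z^2/2 failures)."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical quantitative study: 33 of 40 participants completed the top task.
low, high = adjusted_wald_ci(33, 40)
print(f"Population success rate is likely between {low:.0%} and {high:.0%}")
```

With only a handful of participants the same calculation produces a very wide interval, which is exactly why small-sample metrics are not trustworthy.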

In contrast, qualitative user studies are mostly formative: their goal is to figure out what doesn’t work in a design, fix it, and then move on with a new, better version. The new version will usually also get tested, improved on, and so forth. While it is possible to have qualitative studies with summative goals (let’s see all that’s wrong with our current website!), much of the time they simply aim to refine an existing design iteration. Qualitative studies (even when they are summative) do not try to predict how many users will complete a task, nor do they attempt to figure out how many people will run into any specific usability issue. They are meant to identify usability problems.

In comes Jakob Nielsen’s article that recommends qualitative testing with 5 users. There are three main assumptions behind that recommendation:

  • That you are trying to identify issues in a design. By definition, an issue is some usability problem that the user experiences while using the design.
  • That any issue that somebody encounters is a valid one worth fixing. To make an analogy for this assumption: if one person falls into a pothole, you know you need to fix it. You don’t need 100 people to fall into it to decide it needs fixing.
  • That the probability of someone encountering an issue is 31%.

Based on these assumptions, Jakob Nielsen and Tom Landauer built a mathematical model that shows that, by doing a qualitative test with 5 participants, you will identify 85% of the issues in an interface. And Jakob Nielsen has repeatedly argued (and justly so) that a good investment is to start with 5 people, find your 85% of the issues, fix them, then test again with another 5 people, and so on. It’s not worth trying to find all the issues in one test because you’ll spend too much time and money, and then you’ll be sure to introduce other issues in the redesign.

Note that the “metrics” collected in the quantitative and qualitative studies are very different: in quantitative studies you’re interested in how your general population will fare on measures such as task success, errors, satisfaction, and task time. In qualitative studies, you’re simply counting usability issues. And, while there is statistical uncertainty about any number obtained from a quantitative study (how will the average obtained from my study compare with the average of the general population?), there is no such uncertainty in a qualitative study — any error identified is a legitimate problem that needs to be fixed.

I gave you a list of assumptions on which the 5-user guideline is based. However, you may not agree with (some of) them. I don’t think there’s much to argue about the first assumption, but you may bring some valid objections to the second and third ones.

Does any error that someone encounters need to be fixed? One may argue that if 1,000 out of 1,000 people fall into a pothole, you do need to repair it, but not if only one person out of 1,000 falls into it. With a qualitative usability study, you have no guarantee (based on the study alone) that an identified issue is likely to be encountered by more users than the ones who happened to come to your study. So, in that sense, the results cannot be generalized to the whole population.

Yes, if you wanted, you could run a quantitative study to predict how many people in the general population are likely to encounter a particular error. And then, yes, you could prioritize errors based on how likely they are and fix the ones with the highest priority. While that approach is certainly sound, it’s probably also going to be very wasteful — you will need to test your design with a pretty large number of users to identify its main problems, then fix them, and introduce other ones that will need to be identified and prioritized.

Instead, the qualitative approach assumes that designers will use some other means to prioritize among different issues — maybe some of them are too expensive to fix, or others are related to functionality that only a few of your users are likely to use. Qualitative user testing simply gives you a list of problems. It is the researcher’s job to prioritize among the different issues and move on.

Is the chance of encountering a problem in an interface 31%? The 31% number was based on an average across several projects run in the early 90s. It is possible that, since then, the chance of encountering an issue has changed. It’s also possible that, as you’re doing more design iterations and fixing more and more errors, the usability of your product is substantially better and, in fact, it’s more difficult to encounter new issues.

The good news is that the chance of encountering an error in an interface is only a parameter in Nielsen and Landauer’s model. So, if you know that your interface is pretty good, you can simply insert your desired probability in that model. The number of users will be given by this equation:

N = log(0.15) / log(1 - L)

where L is your estimated probability of encountering an error in the interface, expressed as a decimal (e.g., 31% is entered as 0.31).

For example, if L is 20%, you would need 9 users to find 85% of the problems in the interface. If L is 10%, then you’d need 18 users. The more usable your interface is, the more users you need to include in the test to identify 85% of the usability problems.
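As a rough illustration (not part of the original article), the curve behind these numbers can be computed directly. The Python sketch below simply evaluates 1 − (1 − L)^n and the N formula above for a few assumed values of L.

```python
import math

def share_found(L, n):
    """Share of problems uncovered by n test users, assuming each user
    independently encounters any given problem with probability L."""
    return 1 - (1 - L) ** n

def users_for_target(L, target=0.85):
    """Unrounded number of users needed to reach the target discovery rate,
    i.e. N = log(1 - target) / log(1 - L)."""
    return math.log(1 - target) / math.log(1 - L)

for L in (0.31, 0.20, 0.10):
    print(f"L = {L:.0%}: 5 users find {share_found(L, 5):.0%} of problems; "
          f"about {users_for_target(L):.1f} users needed for 85%")
```

Rounding the raw N up or down to a practical number of participants is a judgment call; the point is that lower values of L push the required sample size up quickly.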

However, your real goal is not to find a particular percentage of problems, but to maximize the business value of your user-research program . It turns out that peak ROI is fairly insensitive to variations in model parameters. If you are testing a terrible design for the very first time, your expenses will be low (it will be very easy to identify usability problems) and your gains will be high (the product will be hugely improved). Conversely, if you are researching a difficult problem, your expenses will be higher and your gains will be lower. However, the point that maximizes the ratio between gains and expenses (i.e., ROI) will still usually be around 5 test users, even though your study profitability will be higher for the easy study and lower for the harder study.

In general, it’s a good idea to start with 5 users, fix the errors that you find, and then slowly increase the number of users on further iterations if you think that you’ve made great progress. But, in practice, you can easily get a sense of how much insight you’ve gained with 5 users. If you feel it wasn’t much, by all means include a few additional users. Conversely, you can test with fewer than 5 users under other circumstances, such as when you can proceed to testing the next iteration very quickly. But if you have plenty of issues that you need to work on, first fix those, then move on.

There is no contradiction between the 5-user guideline for qualitative user testing and the idea that you cannot trust metrics obtained from small studies , because you do not collect metrics in a qualitative study. Quantitative and qualitative user studies have different goals :

  • Quantitative studies aim to find metrics that predict the behavior of the whole population; such numbers will be imprecise — and thus useless — if they are based on a small sample size.
  • Qualitative studies aim for insights: to identify usability issues in an interface. Researchers must use judgment rather than numbers to prioritize these issues. (And, to hammer home the point: the 5-user guideline only applies to qualitative studies, not to quantitative ones.)

If your interface has already gone through many rounds of testing, you may need to include more people even in a qualitative test, as the chance of encountering a problem may be smaller than the original assumptions of the model. Still, it’s good practice to start with 5 users and then increase the number if there are too few important findings.


Qualitative Data Coding

By Saul Mcleod, PhD, and Olivia Guy-Evans, MSc (Simply Psychology)

Coding is the process of analyzing qualitative data (usually text) by assigning labels (codes) to chunks of data that capture their essence or meaning. It allows you to condense, organize and interpret your data.

A code is a word or brief phrase that captures the essence of why you think a particular bit of data may be useful. A good analogy is that a code describes data like a hashtag describes a tweet.


Coding is an iterative process, with researchers refining and revising their codes as their understanding of the data evolves.

The ultimate goal is to develop a coherent and meaningful coding scheme that captures the richness and complexity of the participants’ experiences and helps answer the research questions.

Step 1: Familiarize yourself with the data

  • Read through your data (interview transcripts, field notes, documents, etc.) several times. This process is called immersion.
  • Think and reflect on what may be important in the data before making any firm decisions about ideas or potential patterns.

Step 2: Decide on your coding approach

  • Will you use predefined deductive codes (based on theory or prior research), or let codes emerge from the data (inductive coding)?
  • Will a piece of data have one code or multiple?
  • Will you code everything or selectively? Broader research questions may warrant coding more comprehensively.

If you decide not to code everything, it’s crucial to:

  • Have clear criteria for what you will and won’t code
  • Be transparent about your selection process in research reports
  • Remain open to revisiting uncoded data later in analysis

Step 3: Do a first round of coding

  • Go through the data and assign initial codes to chunks that stand out
  • Create a code name (a word or short phrase) that captures the essence of each chunk
  • Keep a codebook – a list of your codes with descriptions or definitions
  • Be open to adding, revising or combining codes as you go

Descriptive codes

  • In vivo coding / Semantic coding : This method uses words or short phrases directly from the participant’s own language as codes. It deals with the surface-level content, labeling what participants directly say or describe. It identifies keywords, phrases, or sentences that capture the literal content. Participant : “I was just so overwhelmed with everything.” Code : “overwhelmed”
  • Process coding : Uses gerunds (“-ing” words) to connote observable or conceptual action in the data. Participant : “I started by brainstorming ideas, then I narrowed them down.” Codes : “brainstorming ideas,” “narrowing down”
  • Open coding : A form of initial coding where the researcher remains open to any possible theoretical directions indicated by the data. Participant : “I found the class really challenging, but I learned a lot.” Codes : “challenging class,” “learning experience”
  • Descriptive coding : Summarizes the primary topic of a passage in a word or short phrase. Participant : “I usually study in the library because it’s quiet.” Code : “study environment”
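If you keep your codes in software or your own scripts rather than on paper, a codebook can be represented very simply. The Python sketch below is a hypothetical illustration of that bookkeeping, reusing the example quotes and code names above; it is not a prescribed tool.

```python
# Minimal sketch of a first-round codebook and coded segments.
# Quotes and code names are the illustrative examples given above.
codebook = {
    "overwhelmed": "Participant expresses feeling unable to cope",
    "study environment": "Where and under what conditions the participant studies",
}

coded_segments = [
    {"participant": "P01",
     "quote": "I was just so overwhelmed with everything.",
     "codes": ["overwhelmed"]},
    {"participant": "P02",
     "quote": "I usually study in the library because it's quiet.",
     "codes": ["study environment"]},
]

# Simple consistency check: every applied code must be defined in the codebook.
for segment in coded_segments:
    for code in segment["codes"]:
        assert code in codebook, f"Undefined code: {code}"
```

Keeping code definitions and coded segments together like this makes it easier to add, revise, or merge codes as the analysis evolves.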

Step 4: Review and refine codes

  • Look over your initial codes and see if any can be combined, split up, or revised
  • Ensure your code names clearly convey the meaning of the data
  • Check if your codes are applied consistently across the dataset
  • Get a second opinion from a peer or advisor if possible

Interpretive codes

Interpretive codes go beyond simple description and reflect the researcher’s understanding of the underlying meanings, experiences, or processes captured in the data.

These codes require the researcher to interpret the participants’ words and actions in light of the research questions and theoretical framework.

For example, latent coding is a type of interpretive coding that goes beyond the surface meaning of the data. It digs for underlying emotions, motivations, or unspoken ideas the participant might not explicitly state.

Latent coding looks for subtext, interprets the “why” behind what’s said, and considers the context (e.g., cultural influences or unconscious biases).

  • Example: A participant might say, “Whenever I see a spider, I feel like I’m going to pass out. It takes me back to a bad experience as a kid.” A latent code here could be “Feelings of Panic Triggered by Spiders” because it goes beyond the surface fear and explores the emotional response and potential cause.

It’s useful to ask yourself the following questions:

  • What are the assumptions made by the participants? 
  • What emotions or feelings are expressed or implied in the data?
  • How do participants relate to or interact with others in the data?
  • How do the participants’ experiences or perspectives change over time?
  • What is surprising, unexpected, or contradictory in the data?
  • What is not being said or shown in the data? What are the silences or absences?

Theoretical codes

Theoretical codes are the most abstract and conceptual type of codes. They are used to link the data to existing theories or to develop new theoretical insights.

Theoretical codes often emerge later in the analysis process, as researchers begin to identify patterns and connections across the descriptive and interpretive codes.

  • Structural coding : Applies a content-based phrase to a segment of data that relates to a specific research question. Research question : What motivates students to succeed? Participant : “I want to make my parents proud and be the first in my family to graduate college.” Interpretive Code : “family motivation” Theoretical code : “Social identity theory”
  • Value coding : This method codes data according to the participants’ values, attitudes, and beliefs, representing their perspectives or worldviews. Participant : “I believe everyone deserves access to quality healthcare.” Interpretive Code : “healthcare access” (value) Theoretical code : “Distributive justice”

Pattern codes

Pattern coding is often used in the later stages of data analysis, after the researcher has thoroughly familiarized themselves with the data and identified initial descriptive and interpretive codes.

By identifying patterns and relationships across the data, pattern codes help to develop a more coherent and meaningful understanding of the phenomenon and can contribute to theory development or refinement.

For Example

Let’s say a researcher is studying the experiences of new mothers returning to work after maternity leave. They conduct interviews with several participants and initially use descriptive and interpretive codes to analyze the data. Some of these codes might include:

  • “Guilt about leaving baby”
  • “Struggle to balance work and family”
  • “Support from colleagues”
  • “Flexible work arrangements”
  • “Breastfeeding challenges”

As the researcher reviews the coded data, they may notice that several of these codes relate to the broader theme of “work-family conflict.”

They might create a pattern code called “Navigating work-family conflict” that pulls together the various experiences and challenges described by the participants.
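Continuing this hypothetical maternity-leave example, a pattern code is essentially a grouping over earlier codes. A minimal Python sketch of that roll-up might look like this (the code names are the ones listed above; nothing here comes from a real dataset):

```python
# Rolling initial codes up into a pattern code.
initial_codes = [
    "Guilt about leaving baby",
    "Struggle to balance work and family",
    "Support from colleagues",
    "Flexible work arrangements",
    "Breastfeeding challenges",
]

pattern_codes = {
    "Navigating work-family conflict": [
        "Guilt about leaving baby",
        "Struggle to balance work and family",
        "Flexible work arrangements",
    ],
}

# Codes not yet assigned to a pattern remain open for later rounds of analysis.
unassigned = [code for code in initial_codes
              if not any(code in members for members in pattern_codes.values())]
print(unassigned)  # ['Support from colleagues', 'Breastfeeding challenges']
```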


Omniconvert CRO Guide, Chapter 3.1

Qualitative Research: Definition, Methodology, Limitation & Examples

Qualitative research is a method focused on understanding human behavior and experiences through non-numerical data. Examples of qualitative research include:

  • One-on-one interviews
  • Focus groups
  • Ethnographic research
  • Case studies
  • Record keeping
  • Qualitative observations

In this article, we’ll provide tips and tricks on how to use qualitative research to better understand your audience through real-world examples and improve your ROI. We’ll also learn the difference between qualitative and quantitative data.


Marketers often seek to understand their customers deeply. Qualitative research methods such as face-to-face interviews, focus groups, and qualitative observations can provide valuable insights into your products, your market, and your customers’ opinions and motivations. Understanding these nuances can significantly enhance marketing strategies and overall customer satisfaction.

What is Qualitative Research

Qualitative research is a market research method that focuses on obtaining data through open-ended and conversational communication. This method focuses on the “why” rather than the “what” people think about you. Thus, qualitative research seeks to uncover the underlying motivations, attitudes, and beliefs that drive people’s actions. 

Let’s say you have an online shop catering to a general audience. You do a demographic analysis and you find out that most of your customers are male. Naturally, you will want to find out why women are not buying from you. And that’s what qualitative research will help you find out.

In the case of your online shop, qualitative research would involve reaching out to female non-customers through methods such as in-depth interviews or focus groups. These interactions provide a platform for women to express their thoughts, feelings, and concerns regarding your products or brand. Through qualitative analysis, you can uncover valuable insights into factors such as product preferences, user experience, brand perception, and barriers to purchase.

Types of Qualitative Research Methods

Qualitative research methods are designed in a manner that helps reveal the behavior and perception of a target audience regarding a particular topic.

The most frequently used qualitative analysis methods are one-on-one interviews, focus groups, ethnographic research, case study research, record keeping, and qualitative observation.

1. One-on-one interviews

Conducting one-on-one interviews is one of the most common qualitative research methods. One of the advantages of this method is that it provides a great opportunity to gather precise data about what people think and their motivations.

Spending time talking to customers not only helps marketers understand who their clients are, but also helps with customer care: clients love hearing from brands. This strengthens the relationship between a brand and its clients and paves the way for customer testimonials.

  • A company might conduct interviews to understand why a product failed to meet sales expectations.
  • A researcher might use interviews to gather personal stories about experiences with healthcare.

These interviews can be performed face-to-face or over the phone and usually last from half an hour to over two hours.

When a one-on-one interview is conducted face-to-face, it also gives the marketer the opportunity to read the respondent’s body language and check it against their verbal responses.

2. Focus groups

Focus groups gather a small number of people to discuss and provide feedback on a particular subject. The ideal size of a focus group is usually between five and eight participants. The size of focus groups should reflect the participants’ familiarity with the topic. For less important topics or when participants have little experience, a group of 10 can be effective. For more critical topics or when participants are more knowledgeable, a smaller group of five to six is preferable for deeper discussions.

The main goal of a focus group is to find answers to the “why”, “what”, and “how” questions. This method is highly effective in exploring people’s feelings and ideas in a social setting, where group dynamics can bring out insights that might not emerge in one-on-one situations.

  • A focus group could be used to test reactions to a new product concept.
  • Marketers might use focus groups to see how different demographic groups react to an advertising campaign.

One advantage of focus groups is that the marketer doesn’t necessarily have to interact with the group in person. Nowadays, focus groups can also be run online, with prompts delivered as qualitative surveys on various devices.

Focus groups are an expensive option compared to the other qualitative research methods, which is why they are typically used to explain complex processes.

3. Ethnographic research

Ethnographic research is the most in-depth observational method that studies individuals in their naturally occurring environment.

This method aims at understanding the cultures, challenges, motivations, and settings in which people naturally operate.

  • A study of workplace culture within a tech startup.
  • Observational research in a remote village to understand local traditions.

Ethnographic research requires the marketer to adapt to the target audiences’ environments (a different organization, a different city, or even a remote location), which is why geographical constraints can be an issue while collecting data.

This type of research can last from a few days to a few years. It’s challenging and time-consuming and solely depends on the expertise of the marketer to be able to analyze, observe, and infer the data.

4. Case study research

The case study method has grown into a valuable qualitative research method. This type of research method is usually used in education or social sciences. It involves a comprehensive examination of a single instance or event, providing detailed insights into complex issues in real-life contexts.  

  • Analyzing a single school’s innovative teaching method.
  • A detailed study of a patient’s medical treatment over several years.

Case study research may seem difficult to carry out, but it’s actually one of the simplest ways of conducting research, as it involves a deep dive into a single case, a thorough understanding of the data collection methods, and careful inference from the data.

5. Record keeping

Record keeping is similar to going to the library: you go over books or any other reference material to collect relevant data. This method uses already existing reliable documents and similar sources of information as a data source.

  • Historical research using old newspapers and letters.
  • A study on policy changes over the years by examining government records.

This method is useful for constructing a historical context around a research topic or verifying other findings with documented evidence.

6. Qualitative observation

Qualitative observation is a method that uses subjective methodologies to gather systematic information or data. This method draws on the five major senses and their functioning: sight, smell, touch, taste, and hearing.

  • Sight : Observing the way customers visually interact with product displays in a store to understand their browsing behaviors and preferences.
  • Smell : Noting reactions of consumers to different scents in a fragrance shop to study the impact of olfactory elements on product preference.
  • Touch : Watching how individuals interact with different materials in a clothing store to assess the importance of texture in fabric selection.
  • Taste : Evaluating reactions of participants in a taste test to identify flavor profiles that appeal to different demographic groups.
  • Hearing : Documenting responses to changes in background music within a retail environment to determine its effect on shopping behavior and mood.

Below, we also provide real-life examples of qualitative research that demonstrate practical applications across various contexts:

Qualitative Research Real World Examples

Let’s explore some examples of how qualitative research can be applied in different contexts.

1. Online grocery shop with a predominantly male audience

Method used: one-on-one interviews.

Let’s go back to one of the previous examples. You have an online grocery shop. By nature, it addresses a general audience, but after you do a demographic analysis you find out that most of your customers are male.

One good method to determine why women are not buying from you is to hold one-on-one interviews with potential customers in the category.

Interviewing a sample of potential female customers should reveal why they don’t find your store appealing. The reasons could range from not stocking enough products for women to, for example, the store’s emphasis on heavy-duty tools and automotive products. These insights can guide adjustments in inventory and marketing strategies.

2. Software company launching a new product

Method used: focus groups.

Focus groups are great for establishing product-market fit.

Let’s assume you are a software company that wants to launch a new product and you hold a focus group with 12 people. Although getting their feedback regarding users’ experience with the product is a good thing, this sample is too small to define how the entire market will react to your product.

So what you can do instead is hold multiple focus groups in 20 different geographic regions. Each region would host a group of 12 for each market segment; you can even segment your audience based on age. This would be a better way to establish credibility in the feedback you receive.

3. Alan Peshkin’s “God’s Choice: The Total World of a Fundamentalist Christian School”

Method used: ethnographic research.

Moving from a fictional example to a real-life one, let’s analyze Alan Peshkin’s 1986 book “God’s Choice: The Total World of a Fundamentalist Christian School”.

Peshkin studied the culture of Bethany Baptist Academy by interviewing the students, parents, teachers, and members of the community alike, and spending eighteen months observing them to provide a comprehensive and in-depth analysis of Christian schooling as an alternative to public education.

The study highlights the school’s unified purpose, rigorous academic environment, and strong community support while also pointing out its lack of cultural diversity and openness to differing viewpoints. These insights are crucial for understanding how such educational settings operate and what they offer to students.

Even after discovering all this, Peshkin still presented the school in a positive light and stated that public schools have much to learn from such schools.

Peshkin’s in-depth research represents a qualitative study that uses observations and unstructured interviews, without any assumptions or hypotheses. He utilizes descriptive or non-quantifiable data on Bethany Baptist Academy specifically, without attempting to generalize the findings to other Christian schools.

4. Understanding buyers’ trends

Method used: record keeping.

Another way marketers can use qualitative research is to understand buyers’ trends. To do this, marketers need to look at historical data for both their company and their industry and identify where buyers are purchasing items in higher volumes.

For example, electronics distributors know that the holiday season is a peak market for sales while life insurance agents find that spring and summer wedding months are good seasons for targeting new clients.

5. Determining products/services missing from the market

Conducting your own research isn’t always necessary. If there are significant breakthroughs in your industry, you can use industry data and adapt it to your marketing needs.

The influx of hacking and hijacking of cloud-based information has made Internet security a topic of many industry reports lately. A software company could use these reports to better understand the problems its clients are facing.

As a result, the company can provide solutions prospects already know they need.


Qualitative Research Approaches

Once the marketer has decided that their research questions will provide data that is qualitative in nature, the next step is to choose the appropriate qualitative approach.

The approach chosen will take into account the purpose of the research, the role of the researcher, the data collected, the method of data analysis , and how the results will be presented. The most common approaches include:

  • Narrative: Focuses on individual life stories to understand personal experiences and journeys. It examines how people structure their stories and the themes within them to explore human existence. For example, a narrative study might look at cancer survivors to understand their resilience and coping strategies.
  • Phenomenology: Attempts to understand or explain life experiences or phenomena. It aims to reveal the depth of human consciousness and perception, such as by studying the daily lives of those with chronic illnesses.
  • Grounded theory: Investigates a process, action, or interaction with the goal of developing a theory “grounded” in observations and empirical data.
  • Ethnography: Describes and interprets an ethnic, cultural, or social group.
  • Case study: Examines episodic events in a definable framework, develops in-depth analyses of single or multiple cases, and generally explains “how.” An example might be studying a community health program to evaluate its success and impact.

How to Analyze Qualitative Data

Analyzing qualitative data involves interpreting non-numerical data to uncover patterns, themes, and deeper insights. This process is typically more subjective and requires a systematic approach to ensure reliability and validity. 

1. Data Collection

Ensure that your data collection methods (e.g., interviews, focus groups, observations) are well-documented and comprehensive. This step is crucial because the quality and depth of the data collected will significantly influence the analysis.

2. Data Preparation

Once collected, the data needs to be organized. Transcribe audio and video recordings, and gather all notes and documents. Ensure that all data is anonymized to protect participant confidentiality where necessary.

3. Familiarization

Immerse yourself in the data by reading through the materials multiple times. This helps you get a general sense of the information and begin identifying patterns or recurring themes.

4. Coding

Develop a coding system to tag data with labels that summarize and account for each piece of information. Codes can be words, phrases, or acronyms that represent how these segments relate to your research questions.

  • Descriptive Coding : Summarize the primary topic of the data.
  • In Vivo Coding : Use language and terms used by the participants themselves.
  • Process Coding : Use gerunds (“-ing” words) to label the processes at play.
  • Emotion Coding : Identify and record the emotions conveyed or experienced.

5. Thematic Development

Group codes into themes that represent larger patterns in the data. These themes should relate directly to the research questions and form a coherent narrative about the findings.
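As an optional illustration (assuming you track codes electronically), grouping codes under themes and checking how many participants support each theme takes only a few lines of Python. The theme names, codes, and participants below are hypothetical, loosely inspired by the female non-customer example earlier in this chapter.

```python
# Sketch of mapping themes to their constituent codes and tallying support.
# All names are invented for illustration, not real study data.
themes = {
    "Product range": {"not enough products for women", "narrow selection"},
    "Brand perception": {"store feels aimed at men", "advertising tone"},
}

participant_codes = {
    "P01": {"not enough products for women", "advertising tone"},
    "P02": {"store feels aimed at men"},
    "P03": {"narrow selection", "not enough products for women"},
}

for theme, codes in themes.items():
    supporters = [p for p, coded in participant_codes.items() if coded & codes]
    # Counts help organize the narrative; prioritization still rests on judgment.
    print(f"{theme}: supported by {len(supporters)} of {len(participant_codes)} participants")
```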

6. Interpreting the Data

Interpret the data by constructing a logical narrative. This involves piecing together the themes to explain larger insights about the data. Link the results back to your research objectives and existing literature to bolster your interpretations.

7. Validation

Check the reliability and validity of your findings by reviewing if the interpretations are supported by the data. This may involve revisiting the data multiple times or discussing the findings with colleagues or participants for validation.

8. Reporting

Finally, present the findings in a clear and organized manner. Use direct quotes and detailed descriptions to illustrate the themes and insights. The report should communicate the narrative you’ve built from your data, clearly linking your findings to your research questions.

Limitations of qualitative research

The disadvantages of qualitative research are quite distinctive: the data collector’s techniques and their own observations can alter the information in subtle ways. With that in mind, these are qualitative research’s main limitations:

1. It’s a time-consuming process

The main drawback of qualitative studies is that the process is time-consuming. Another problem is that the interpretations are limited: personal experience and knowledge influence observations and conclusions.

Thus, qualitative research might take several weeks or months. Also, since this process delves into personal interaction for data collection, discussions often tend to deviate from the main issue to be studied.

2. You can’t verify the results of qualitative research

Because qualitative research is open-ended, participants have more control over the content of the data collected. So the marketer is not able to verify the results objectively against the scenarios stated by the respondents. For example, in a focus group discussing a new product, participants might express their feelings about the design and functionality. However, these opinions are influenced by individual tastes and experiences, making it difficult to ascertain a universally applicable conclusion from these discussions.

3. It’s a labor-intensive approach

Qualitative research requires a labor-intensive analysis process, such as categorization and recording. In addition, qualitative research requires well-experienced marketers to obtain the needed data from a group of respondents.

4. It’s difficult to investigate causality

Qualitative research requires thoughtful planning to ensure the obtained results are accurate. There is no way to analyze qualitative data mathematically; this type of research is based more on opinion and judgment than on measurable results. Because all qualitative studies are unique, they are difficult to replicate.

5. Qualitative research is not statistically representative

Because qualitative research is a perspective-based method of research, the responses given are not measured.

Comparisons can be made, and this can lead toward duplication, but for the most part, quantitative data is required for circumstances that need statistical representation, and that is not part of the qualitative research process.

While doing a qualitative study, it’s important to cross-reference the data obtained with quantitative data. By continuously surveying prospects and customers, marketers can build a stronger database of useful information.

Quantitative vs. Qualitative Research


Quantitative and qualitative research are two distinct methodologies used in the field of market research, each offering unique insights and approaches to understanding consumer behavior and preferences.

As we already defined, qualitative analysis seeks to explore the deeper meanings, perceptions, and motivations behind human behavior through non-numerical data. On the other hand, quantitative research focuses on collecting and analyzing numerical data to identify patterns, trends, and statistical relationships.  

Let’s explore their key differences: 

Nature of Data:

  • Quantitative research : Involves numerical data that can be measured and analyzed statistically.
  • Qualitative research : Focuses on non-numerical data, such as words, images, and observations, to capture subjective experiences and meanings.

Research Questions:

  • Quantitative research : Typically addresses questions related to “how many,” “how much,” or “to what extent,” aiming to quantify relationships and patterns.
  • Qualitative research: Explores questions related to “why” and “how,” aiming to understand the underlying motivations, beliefs, and perceptions of individuals.

Data Collection Methods:

  • Quantitative research : Relies on structured surveys, experiments, or observations with predefined variables and measures.
  • Qualitative research : Utilizes open-ended interviews, focus groups, participant observations, and textual analysis to gather rich, contextually nuanced data.

Analysis Techniques:

  • Quantitative research: Involves statistical analysis to identify correlations, associations, or differences between variables.
  • Qualitative research: Employs thematic analysis, coding, and interpretation to uncover patterns, themes, and insights within qualitative data.

  • Open access
  • Published: 18 May 2024

Identifying primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore: a qualitative study

  • Mauricette Moling Lee 1 , 2 ,
  • Wern Ee Tang 3 ,
  • Helen Elizabeth Smith 4 &
  • Lorainne Tudor Car 1 , 5  

BMC Primary Care, volume 25, Article number: 172 (2024)


The growth of medical knowledge and patient care complexity calls for improved clinician access to evidence-based resources. This study aimed to explore the primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore.

A convenience sample of ten doctors and ten nurses was recruited. We conducted semi-structured face-to-face in-depth interviews. The interviews were recorded, transcribed verbatim, and analysed using thematic content analysis.

Of the 20 participants, eight doctors and ten nurses worked at government-funded polyclinics and two doctors worked in private practice. Most clinicians sought clinical information daily at the point-of-care. The information clinicians searched for most often in practice concerned less common conditions. Clinicians preferred evidence-based resources such as clinical practice guidelines and UpToDate®. Clinical practice guidelines were mostly used when they were updated or based on memory. Clinicians also commonly sought answers from their peers. Furthermore, clinicians frequently used smartphones to access the Google search engine and the UpToDate® app. The barriers to accessing clinical information included lack of time, internet surfing separation of work computers, limited search functions in the organisation’s server, and limited access to medical literature databases. The facilitators of accessing clinical information included convenience, easy access, and trustworthiness of information sources.

Most primary care clinicians in our study sought clinical information at the point-of-care daily and reported increasing use of smartphones for information-seeking. Future research focusing on interventions to improve access to credible clinical information for primary care clinicians at the point-of-care is recommended.

Trial registration

This study has been reviewed by NHG Domain Specific Review Board (NHG DSRB) (the central ethics committee) for ethics approval. NHG DSRB Reference Number: 2018/01355 (31/07/2019).


Primary care clinicians provide the bulk of care to patients in primary care settings. In Singapore, there are 23 polyclinics and about 1,800 General Practitioner (GP) clinics with private GPs providing primary care for about 80% of the population [ 1 ]. The primary care clinicians provide primary care services at community polyclinics and private medical clinics around Singapore [ 1 ]. The polyclinics are formed by three healthcare groups – National Healthcare Group, National University Health System, and SingHealth [ 1 ]. These polyclinics served various populations in Singapore's central, northern, north-eastern, western, and eastern parts [ 1 ]. Every day, clinicians make many clinical decisions, ranging from diagnosis and prognosis to treatment and patient management [ 2 , 3 ]. However, to provide consistent high-quality patient care, such clinical judgments must be informed by existing trustworthy medical evidence [ 4 , 5 , 6 ]. To meet their information needs, clinicians seek relevant information from various sources of information [ 3 ]. Searching for and using the information to meet information needs has been described as information-seeking behaviour [ 7 , 8 , 9 ].

Previous research showed that clinicians often raise questions about patient care in their practice [ 10 ]. Half of those questions are left unanswered. Identifying what information primary care clinicians need, how they search for required information and how they adopt it into practice is essential in ensuring safe and high-quality patient care [ 11 , 12 ]. While there are reports of information-seeking behaviour in primary care from other countries [ 2 , 8 , 13 , 14 ], similar reports in Singapore are limited.

Clinicians may consult several sources to support their decisions, including clinical practice guidelines (CPGs), journal articles, peers, and more [ 3 ]. However, there is a wide variation in the adoption of evidence-based practices across healthcare disciplines, which could lead to poorer primary care outcomes [ 8 , 12 , 15 , 16 , 17 , 18 , 19 ]. To mitigate this, a commonly employed approach is the development of CPGs, clinical pathways, or care guides [ 20 ]. They offer a structured, reliable, and consistent approach to healthcare evidence dissemination and reduce unnecessary clinical practice variation [ 21 ]. However, CPGs are costly to develop and update, context-specific, and unevenly adopted across various healthcare systems [ 22 ]. CPG's uptake is affected by diverse factors such as presentation formats, time pressures, reputability, and ownership [ 14 , 23 ]. Conversely, other sources of clinical practice-related information may not be as valid, credible, or current as CPGs.

Increasingly, healthcare professionals worldwide use their smartphones as an important channel for clinical information [ 24 , 25 , 26 , 27 ], using them to access websites, mobile apps or communicate with peers [ 28 ]. The use of electronic resources improves clinicians' knowledge and behaviour as well as patients' outcomes [ 29 ]. However, evidence on how smartphones are used at the point-of-care, particularly for evidence-seeking, is limited. Singapore, with a total population of 5.92 million as of the end of June 2023 [ 30 ], is one of the countries with the highest smartphone usage among its residents, with approximately 5.72 million (97%) users in 2023 [ 31 ]. Correspondingly, smartphones may be an important information-seeking channel among primary care clinicians. However, the increasing cyber threats worldwide may lead to internet surfing separation as a common security measure.

Institutional policies limiting access to computers at the point-of-care deter clinicians from seeking information and disrupt their workflow [ 32 ]. Due to patient data privacy breaches, the Singapore Ministry of Health introduced internet surfing separation as a security measure in July 2018 in all public healthcare institutions in Singapore [ 33 ]. Internet surfing separation stands for the restrictions on internet access and browsing which were enforced in Singapore public healthcare institutions in 2018 due to patient data privacy breaches [ 33 ]. This has limited the internet access of primary care clinicians at the workplace. Since its introduction, the Internet has not been accessible from any of the clinic's desktop computers and has been available through a few work laptops with limited availability to the polyclinic staff. At the time that this research was conducted, primary care clinicians in the public healthcare sector in Singapore did not have access to the internet from their work computers. Clinicians rely on evidence-based information to make informed decisions about patient care [ 4 , 5 , 6 ]. When access to online resources is restricted, clinicians may struggle to receive current and correct information, thus jeopardising patient safety and the quality of care offered [ 11 , 12 ]. Therefore, we sought to understand how primary care clinicians were addressing their clinical information needs when their work computers were not available to access evidence-based resources online. This study aimed to explore the primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore.

A qualitative study consisting of semi-structured face-to-face in-depth interviews was used to explore the primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore. The interviews were conducted between August and November 2019 at two polyclinics and two private clinics in Singapore.

The study was approved by the institutional ethics committee (NHG DSRB Reference Number: 2018/01355). All participants read the study information sheet before providing written consent. This study followed the Consolidated Criteria for Reporting Qualitative Research guidelines [ 34 ] [see Additional file 1].

Participants and recruitment

We included primary care doctors and registered nurses from the polyclinics and private primary care practices aged ≥ 21 years who were fluent in English. We employed convenience sampling in this study. Prospective participants were recruited from various polyclinics through personal contacts and advertisements. Five potential participants were contacted but did not respond to the invitation, two potential participants declined participation in this study and one potential participant resigned before the commencement of the study and hence did not participate in the study.

Data collection

The interviews were conducted by a female researcher (MML) in designated private meeting rooms or consultation rooms at various polyclinics or the respective consultation rooms of the private practice. MML was provided with sufficient details, resources, and training on qualitative research before the study commencement. Before the start of the interview, the researcher introduced herself, stated the aim of the interview, explained confidentiality, and obtained informed consent and permission to use a digital voice recorder. The interviewees could pause the interviews due to professional responsibilities at any time. MML conducted the interviews using an interview guide based on a review of the relevant literature and team discussions [ 10 ] [see Additional file 2]. The interview topics included the type of questions during clinical encounters, commonly employed sources of clinical information, frequency and timing of information-seeking, satisfaction with existing information sources, use of CPGs, barriers to information-seeking, and reliability of obtained information. All interview sessions lasted not more than 60 minutes with a mean interview time of 25 minutes and were digitally recorded and transcribed. Field notes were taken during the interviews for further analysis. Data saturation, defined as no new themes arising after three consecutive interviews [ 35 ], was achieved after 20 interviews, therefore we stopped recruitment at 20 participants. Participants were compensated with a SGD25 voucher and a meal upon completion of the interview.
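For readers who want to operationalize the working definition of saturation used here (no new themes arising after three consecutive interviews), a minimal Python sketch is shown below. The themes per interview are invented purely for illustration and do not come from this study.

```python
def reached_saturation(themes_per_interview, window=3):
    """Return (saturated?, interview index) under the working definition above:
    saturation is reached once `window` consecutive interviews add no new themes."""
    seen = set()
    run_without_new = 0
    for i, themes in enumerate(themes_per_interview, start=1):
        new_themes = set(themes) - seen
        seen |= set(themes)
        run_without_new = 0 if new_themes else run_without_new + 1
        if run_without_new >= window:
            return True, i
    return False, len(themes_per_interview)

# Hypothetical themes coded after each of six interviews.
interviews = [{"access", "time"}, {"time", "trust"}, {"trust"},
              {"access"}, {"time"}, {"trust", "access"}]
print(reached_saturation(interviews))  # (True, 5): interviews 3-5 added nothing new
```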

Data analysis

The qualitative data were analysed using Burnard’s method, a structured approach for thematic content analysis established in 1991 [ 36 ]. Burnard's method includes fourteen stages for categorising and coding interview transcripts [ 36 ] [see Additional file 4]. Types of questions were analysed using Ely’s classification [ 37 ]. Burnard's method enhances understanding of the information-seeking behaviour patterns found by Ely's approach by doing a comprehensive evaluation. Ely et al. (2000) developed an approach for categorising clinician queries about patient care [ 37 ]. Clinical questions in primary care were divided into several main categories. For example, the three most common categories of questions based on Ely’s approach were "What is the drug of choice for condition x?", "What is the cause of symptom x?" and "What test is indicated in situation x?" [ 37 ]. Ely et al. (2000) framework was used by the study team to gain a better understanding of clinicians' information needs and to identify the types of questions they had about patient care. It was used mainly to facilitate the study team’s discussion. The study team did not adopt the categories. The analysis was done independently and in parallel by two researchers (MML and LTC). First, the researchers familiarised themselves with the transcripts by reading them multiple times. Second, the initial codes were proposed. Third, the themes were derived from the codes. Fourth, the researchers discussed and combined their themes for comparison. Finally, they reached a consensus on the themes and how to define them. Apart from the initial stages of being acquainted with the transcripts and recommended initial codes, to streamline our codes, related codes were consolidated into more comprehensive headings. This process allows us to organise them more effectively under pertinent subthemes. For example, various information sources that were mentioned by the participants such as evidence-based resources, non-evidence-based resources, and colleagues have all been merged into a subtheme titled "popular information sources" [see Additional file 3]. This process was done iteratively through several rounds. The final list of themes and subthemes was created by removing repeated or similar subthemes. Two other study team members independently created a list of headings without using the first study team member's list. Three lists were discussed and improved to increase validity and reduce researcher bias. Finally, we employed abstraction by developing a basic description of the phenomenon under investigation to establish the final subthemes and themes. Tables 1 and 2 illustrate how these stages were conducted.

Table 1 illustrates that the previous "subtheme" for "rare condition" was "most searched information in clinical practice," but it has been revised to "the type of information needs" to include numerous codes such as pharmacology and others following additional discussion with study team members. A third reviewer HES acted as an arbiter. The coding of transcripts was performed using a word processor. A predetermined classification system was not employed since there was insufficient research to inform the clinicians' perceptions of information-seeking behaviour in Singapore. In particular, the dynamic identification of themes from data was facilitated using an inductive approach. Burnard's method was applied inductively to establish categories and abstraction through open coding illustrated in Tables 1 and 2 . No single method of analysis is appropriate for every type of interview data [ 36 ]. Burnard’s method focuses on a systematic approach to thematic content analysis, which can improve qualitative research objectivity and transparency [ 36 ]. As descriptive studies can investigate perceived barriers to and facilitators of adopting new behaviours [ 38 ], a more descriptive set of themes was appropriate for the study's objectives, and it is consistent with Burnard's method [ 36 ].

A total of 20 clinicians were recruited. Eight doctors and 10 nurses were working in the polyclinics. All nurses and three of the doctors who participated in this study were female. The demographics of the clinicians are presented in Table 3 (Demographics of clinicians, N = 20).

Thematic analysis

Three distinct themes were derived from the analysis of the interview data, 1) the choice of information sources, 2) accessing information sources, and 3) the role of evidence in information-seeking [see Additional file 3]. This is represented in Fig.  1 . Themes and subthemes derived from the interviews.

Fig. 1 Themes and subthemes derived from the interviews

1) The choice of information sources

This theme encompasses the different sources clinicians in our study used to seek and gather information. Clinicians' preferred information sources are described under five subthemes: popular information sources, CPGs as an information source, the internet as an information source, peers as an information source, and accessing online information using smartphones.

Popular information sources

Clinicians mentioned that their first-choice point-of-care evidence-based online sources were UpToDate®, an evidence-based resource that helps clinicians make decisions and informs their practice [ 39 ], CPGs, and the Monthly Index of Medical Specialties, followed by PubMed (Medline) and continuing medical education sources. A non-evidence-based information source, the Google search engine, was also commonly mentioned. Lastly, clinicians often mentioned consulting their colleagues:

“I will Google, look for images and compare…I tell them that I’m looking because I am not sure, and I want to just confirm…sometimes even show them the photo on my phone, to ensure…what they saw, the rash…might have already disappeared is…what I suspect it is.” Doctor02.
“I commonly I would search…this app that I have on my phone is called UpToDate®, right…because it’s the most easiest…easily accessible source of information…I’ll just type the whole lot into…the Lexicomp component of the UpToDate® and then from there it tells me whether the drugs have interactions, what kind of interactions.” Doctor07.

CPGs as an information source

Clinicians mentioned that CPGs did not apply to all patients. Doctors described CPGs as evidence-based resources, designed to be safe and most relevant to practice as a baseline reference. Doctors considered CPGs lengthy at times and noted the need to apply clinical discretion when using them. Doctors also mentioned that CPGs sometimes focused on cost-effectiveness rather than the quality of care:

“I think they are useful in summarising the latest evidence and what…is recommended, especially if they are local clinical practice guidelines, then it’s tailored to our own population…And keeping in mind perhaps the cost sensitivities, cost effectiveness” Doctor02.

Nurses said that they saw CPGs as a standard of practice for clinicians and an easy resource to refer to. However, some nurses said that they found CPGs difficult to access and outdated:

“…but it’s not so…easy to access…because you have to…enter certain keywords, and sometimes it’s not that keyword that’s going to churn out all the information you see…like, try a few times…want to make sure that…I’m doing things correctly…following the guidelines…just quickly…log into the intranet and…search for the information.” Nurse01.

If nurses had difficulty accessing CPGs, they said that they tended to seek a doctor’s opinion:

“It’s very informative. It’s quite clear, easy to refer to…in certain special cases…not stated in the book, we will still have to seek…doctor’s opinion” Nurse07.

Internet as an information source

Clinicians mentioned that the internet provided access to clinical information for practice. However, clinicians mentioned that it was important to ensure that the information was well-grounded and dependable:

“…some…information might not be…so trustworthy…takes…a little additional filtering process before…I can say this is a reliable source or not…some of the websites…more opinion-based…very high…chance of bias…the reference from that writing…written at the bottom where I can do…cross-checking…I think the credibility…for this…article written is slightly higher.” Doctor01.
“If only you have an internet, you can always show it to the patient also. For example, when I search for some information, I can even help in patient education…for now, I feel it is a bit harder…And then I have to rely on my phone to use the UpToDate®.” Doctor03.

Peers as an information source

Clinicians mentioned approaching available peers for a second opinion on their clinical questions. They also mentioned that they tended to approach experts:

“…it’s really a case-to-case basis and it depends if the colleagues around…Also it depends on the proximity of the colleague. If the colleague knows a lot but…busy in another room on another level then I might approach next door colleagues instead.” Doctor06.
“I think most of time, if we are going to get our information immediately, we’ll call one of our colleagues here…discuss the case…we’ll come to a consensus, what will be the best for our kind of patient…contribute to the informed decision immediately.” Doctor01.

Accessing online information using smartphones

Clinicians mentioned that their smartphones were convenient for accessing information for practice. For instance, accessing the UpToDate® app and Google search engine using smartphones:

“…commonly I would search…this app that I have on my phone is called UpToDate®…because it’s the most easiest…easily accessible source of information…I’ll just type the whole lot into…the Lexicomp component of…UpToDate® and then from there it tells me whether the drugs have interactions.” Doctor07.
“I will go on the internet…if I needed information about…certain medical conditions…Just definitions, just to have an idea of, you know… Correct, pure Google.” Nurse01.

2) Accessing information sources

This theme encompasses different aspects of information-seeking and access by clinicians in our study. Factors influencing clinicians’ use of information sources are described under five subthemes: the type of information needs, the timing and frequency of information needs, the timing and frequency of using CPGs, information-seeking facilitators, and information-seeking barriers.

The type of information needs

Clinicians mentioned that they commonly sought information on less common presentations such as unusual skin rashes and rare diseases, as well as on paediatrics, women’s health, and medications, and at times on all clinical areas:

“Drug information…maybe dosing and everything…when we are prescribing for paediatric…we also see female patients who are pregnant…Lactating, and all… contraindicated” Doctor03.
“Other ones that I would search for would be if the patient comes in with very…unusual presentations.” Doctor07.

The timing and frequency of information needs

Clinicians explained that they commonly sought clinical information daily or several times a week, either at the point-of-care or at home:

“I will look at least weekly once…It’s of my own interest…Not during working times, most of the time…When we are travelling, in MRT…Sometimes at home also.” Nurse10.
“Not so many cases…It’s quite rare, actually…Because most of our cases are quite common…we still can deal with…Yes…Maybe once a few weeks…Once a month…When I have concerns or any doubts…After patient left…yes. Maybe, sometimes…And after the doctors consult.” Nurse05.

The timing and frequency of using CPGs

Clinicians said that they commonly used CPGs daily or when there was a change or update to them:

“…day to day, because all these guidelines I’m familiar with, it’s in my memory…internally we do have guidelines for certain acute conditions.” Doctor02.

Information-seeking facilitators

Clinicians discussed convenience, ease of access, the trustworthiness of information, having colleagues who are specialists, and being keen to keep up to date as facilitators of seeking clinical information:

“I find…clinical practice guidelines quite useful…since it’s on our terminal. I do open that up to look at it…it does give us quite a convenient and no fuss way to be able to access them on our terminal while we are seeking information whether during or even after consults.” Doctor06.
“work instructions…Policies and protocols…Intranet…So I just want to make sure that…I’m doing things correctly, that I’m, you know, following the guidelines. So I’ll just quickly enter, you know, log into the intranet and just search for the information…The information that’s on the intranet has, you know, been validated by an expert, you know…So that’s why I rely heavily on it.” Nurse01.

Information-seeking barriers

Clinicians mentioned that internet surfing separation (the separation of work computers from the internet), the lack of time, limited access to medical literature databases, and the limited search function of the organisation’s server were barriers to seeking clinical information:

“The information I know is there…But it’s not so easy to search for…Not user-friendly, not very exhaustive…Sometimes you just have to…trial-and-error…different keywords.” Nurse01.

Additionally, clinicians frequently mentioned using smartphones to access clinical information. However, doctors said that they were worried that using smartphones during a clinical consultation might make them seem unprofessional to patients:

“I need to explain to the patient that…I am using my phone because I don’t have internet access or may appear rude to the patient; I am surfing my phone in the middle of the consult.” Doctor02.

Doctors reported that they were also concerned about their privacy when they showed their smartphones to their patients:

“…sometimes…you don’t want to show your phone to them(patients) also…Because sometimes you may have other notifications.” Doctor05.

3) The role of evidence in information-seeking

This theme explores the role of evidence in clinicians' information-seeking in our study. The value of scientific research for clinicians seeking information is described under two subthemes: the importance of trustworthy information sources and employing evidence-based information sources.

The importance of trustworthy information sources

Clinicians agreed that peer-reviewed clinical information was reliable. Additionally, doctors expressed trust in clinical information if there were frequent updates of the content:

“…they(UpToDate®) do put…the date of which they have updated the articles…it’s from multiple sources…citations and…management…seems quite sound.” Doctor06.
“The information that’s on the intranet has…been validated by an expert.” Nurse01.

Employing evidence-based information sources

Clinicians mentioned that emphasising the importance of evidence in patient care and building an evidence-based culture in the workplace helped to encourage the use of evidence-based information sources in practice:

“I don’t have any concrete kind of suggestions now but…perhaps find some ways to sustain interest…to remind us that we’re doing this for best of patients.” Doctor06.
“If I have discussions with my peers regarding cases then I will, like, refer back to the…to…the CPG and things like that…I think the conference…or the…forums they are also a very good source of information.” Nurse03.

To our knowledge, this is the first study conducted in Singapore to investigate primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice. Clinicians mostly searched for information on conditions such as unusual skin rashes, rare diseases, paediatrics, and women's health. Most clinicians searched for clinical information at the point-of-care daily for a variety of reasons, including personal interest, clarification of doubts, and self-improvement. Sources of information included CPGs, online evidence-based resources, the internet, peers, and smartphones. Although CPGs were clinicians' preferred sources of information, they did not refer to them regularly, relying instead on memory or consulting them when the guidelines were updated. We also found that using smartphones to seek clinical information was commonly reported among clinicians. The barriers to primary care clinicians’ information-seeking were the lack of time, internet surfing separation of work computers, the limited search function of their organisation’s server, and limited access to medical literature databases. The facilitators were convenience, ease of access, and the trustworthiness of the information sources.

Like other studies [ 3 , 8 , 20 , 40 , 41 ], we found that the choice of information sources was affected by the trustworthiness and availability of resources. CPGs were preferred among clinicians as they were written by experts or specialists in their field. However, some clinicians felt that CPGs were too lengthy to be used at the point-of-care, outdated, and difficult to locate on their organisation's server. Additionally, clinicians referred to CPGs mainly from memory or when they were updated. This highlights the importance of providing an alternative evidence-based clinical resource that is succinct and easy to refer to at the point-of-care [ 42 ]. Using medical apps that provide point-of-care summaries may mitigate the challenges of using CPGs for clinical information. Correspondingly, clinicians in the polyclinics commonly referred to the UpToDate® app provided by their organisation as a point-of-care resource they could use on their smartphones. Evidence-based point-of-care resources are commonly presented as key-point summaries, follow a formal categorisation of medical conditions, and provide references [ 43 ]. Limited research has shown that integrating UpToDate® searches into daily clinical practice was beneficial [ 42 ]. Additionally, the American Accreditation Commission International's @TRUST programme is one framework designed to encourage trustworthy online content; it is a resource both for individuals looking for health information online and for organisations seeking to deliver trustworthy content [ 44 ]. However, continual efforts are required to encourage its use and to ensure that individuals have access to accurate and reliable health information online. Therefore, future studies should investigate the quality of existing medical apps in providing point-of-care summaries and the effects of their use in the primary care setting.

We also found that clinicians were seeking clinical information on their smartphones. This is not surprising, as Singapore’s public healthcare institutions enforce internet surfing separation on work computers. Furthermore, with the high penetration of smartphones in Singapore [ 45 ], these devices became the next best alternative for clinicians seeking online clinical information. Clinicians in the polyclinics frequently cited using the UpToDate® app and the Google search engine on their smartphones. Similar to another study [ 46 ], we found that doctors often used Google images on their smartphones to identify less common rashes. Additionally, our study found that clinicians used Google images to educate patients. However, clinicians in the polyclinics reported privacy and professionalism concerns as barriers to using smartphones during clinical consultations. These findings are consistent with a systematic review assessing the challenges and opportunities of mobile device use by healthcare professionals [ 47 ]. Despite the internet surfing separation in public healthcare institutions in Singapore and the availability of various information sources, the barriers to clinicians' information-seeking that we found were similar to those reported in other studies [ 3 , 20 , 48 ]. Future research may focus on addressing specific barriers to the use of mobile devices by primary care clinicians at the point-of-care.

Finally, smartphones may be an important information-seeking channel for healthcare professionals, and hospitals or governments may need to establish policies or legislation to protect healthcare professionals who use smartphones in clinical practice. Compliance with rules governing smartphone use at work could be examined as part of healthcare professionals' performance evaluation. Guidelines on smartphone use among healthcare professionals can be tailored to specific situations, such as obtaining patients' permission before sharing medically sensitive information via text. Such guidelines could be grounded in best-practice evidence and framed as actionable statements. Additionally, this study suggests that clinicians have largely been left to navigate information access on their own, which may not be the most effective approach. Developing a more robust culture of evidence-based medicine within the organisation is essential and ought to be explicitly promoted; structured training for clinicians on effective information-seeking strategies and resources could also be beneficial.

Our study has several strengths and limitations. A key strength is that we employed an in-depth interview approach with an open-ended style of questioning. The interactive nature of our interviews provided richer context and room for free responses from the interviewees. We were then able to critically scrutinise the conversations and provide insights that were helpful in the final analysis of themes.

There are several limitations. Firstly, we did not explore the influence of gender and age on the participants’ information-seeking behaviour, which has been demonstrated in other research in this area [ 14 ]. Secondly, the study was limited by environmental factors in the workplace, such as internet and information access. Finally, there may be social desirability bias, whereby participants may have given responses that were more socially acceptable than their actual views on the issues explored during the interviews.

We found that clinicians frequently sought answers to clinical queries arising from patient care. The choice of information sources was influenced by the trustworthiness and availability of the resources. Clinicians in the polyclinics commonly reported using their smartphones in practice; the UpToDate® app and the Google search engine were commonly cited as preferred clinical information sources because of their convenience and accessibility. While similar findings have been reported in other contexts, some elements of our findings are distinctive to the Singapore setting. For example, the implementation of internet surfing separation in public healthcare institutions raises concerns regarding clinicians' use of smartphones, as well as their privacy and professionalism. This may warrant examination of the need for regulation of, and training on, smartphone use among clinicians, as well as further investigation from the patient's perspective. Future studies should explore ways to improve access to evidence-based clinical information sources other than CPGs to address the information needs of primary care clinicians. Studies examining the trustworthiness and effectiveness of app-based point-of-care information summaries, and exploring the impact of mobile device use for information-seeking at the point-of-care, would also be useful. Furthermore, large language model (LLM)-based artificial intelligence (AI) systems, such as ChatGPT, are increasingly being developed and used across disciplines, including healthcare. Some, such as AMIE (Articulate Medical Intelligence Explorer) and Med-PaLM 2 (a medical adaptation of the Pathways Language Model), have been developed specifically for healthcare [ 49 , 50 , 51 ]. More research into the use of AI among clinicians is needed to ensure trust, reliability, and ethical conduct.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available because all data obtained during the course of this study are strictly confidential and will be kept by the study team for at least 6 years after the end of the study, then disposed of in accordance with the Personal Data Protection Act in Singapore. Data are, however, available from Associate Professor Tang Wern Ee (co-author) upon reasonable request and with the permission of the ethics committee of the National Healthcare Group Domain Specific Review Board (the central ethics committee).

Abbreviations

GP: General Practitioner

CPGs: Clinical practice guidelines

Ministry of Health. Primary healthcare services Singapore 2022 [updated 31/05/2022]. Available from: https://www.moh.gov.sg/home/our-healthcare-system/healthcare-services-and-facilities/primary-healthcare-services .

González-González AI, Dawes M, Sánchez-Mateos J, Riesgo-Fuertes R, Escortell-Mayor E, Sanz-Cuesta T, et al. Information needs and information-seeking behavior of primary care physicians. The Annals of Family Medicine. 2007;5(4):345–52.

Daei A, Soleymani MR, Ashrafi-rizi H, Zargham-Boroujeni A, Kelishadi R. Clinical information seeking behavior of physicians: A systematic review. Int J Med Informatics. 2020;139:104144.

Amiel JM, Andriole DA, Biskobing DM, Brown DR, Cutrer WB, Emery MT, et al. Revisiting the core entrustable professional activities for entering residency. Acad Med. 2021;96(7S):S14–21.

College of Family Physicians Singapore. Fellowship Programme (FCFPS). Singapore: 2022 [updated 2022]. Available from: https://www.cfps.org.sg/programmes/fellowship-programme-fcfps/ .

American Library Association. Information Literacy Competency Standards for Nursing. United States of America: 2013. Available from: https://www.ala.org/acrl/standards/nursing .

Braun L, Wiesman F, van den Herik H, Hasman A. Avoiding literature overload in the medical domain. Stud Health Technol Inform. 2006;124:497–502.

Clarke MA, Belden JL, Koopman RJ, Steege LM, Moore JL, Canfield SM, et al. Information needs and information-seeking behaviour analysis of primary care physicians and nurses: a literature review. Health Info Libr J. 2013;30(3):178–90.

Ely JW, Burch RJ, Vinson DC. The information needs of family physicians: case-specific clinical questions. J Fam Pract. 1992;35(3):265–9.

Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: a systematic review. JAMA Intern Med. 2014;174(5):710–8.

Al-Dousari E. Information Needs and Information Seeking Behaviour of Doctors in Kuwait Government Hospitals: An Exploratory Study: Loughborough University; 2009.

Young JM, Ward JE. Evidence-based medicine in general practice: beliefs and barriers among Australian GPs. J Eval Clin Pract. 2001;7(2):201–10.

Ellsworth MA, Homan JM, Cimino JJ, Peters SG, Pickering BW, Herasevich V. Point-of-care knowledge-based resource needs of clinicians: a survey from a large academic medical center. Appl Clin Inform. 2015;6(2):305–17.

Le JV, Pedersen LB, Riisgaard H, Lykkegaard J, Nexoe J, Lemmergaard J, et al. Variation in general practitioners’ information-seeking behaviour - a cross-sectional study on the influence of gender, age and practice form. Scand J Prim Health Care. 2016;34(4):327–35.

Bruin-Huisman L, Abu-Hanna A, van Weert H, Beers E. Potentially inappropriate prescribing to older patients in primary care in the Netherlands: a retrospective longitudinal study. Age Ageing. 2017;46(4):614–9.

Cahir C, Bennett K, Teljeur C, Fahey T. Potentially inappropriate prescribing and adverse health outcomes in community dwelling older patients. Br J Clin Pharmacol. 2014;77(1):201–10.

Davies K. The information-seeking behaviour of doctors: a review of the evidence. Health Info Libr J. 2007;24(2):78–94.

Gill P, Dowell AC, Neal RD, Smith N, Heywood P, Wilson AE. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ. 1996;312(7034):819–21.

Salisbury C, Bosanquet N, Wilkinson E, Bosanquet A, Hasler J. The implementation of evidence-based medicine in general practice prescribing. Br J Gen Pract. 1998;48(437):1849–52.

Aakre CA, Maggio LA, Fiol GD, Cook DA. Barriers and facilitators to clinical information seeking: a systematic review. J Am Med Inform Assoc. 2019;26(10):1129–40.

Scott SD, Grimshaw J, Klassen TP, Nettel-Aguirre A, Johnson DW. Understanding implementation processes of clinical pathways and clinical practice guidelines in pediatric contexts: a study protocol. Implement Sci. 2011;6(1):133.

O’Brien JA, Jacobs LM Jr, Pierce D. Clinical practice guidelines and the cost of care: a growing alliance. Int J Technol Assess Health Care. 2000;16(04):1077–91.

Langley C, Faulkner A, Watkins C, Gray S, Harvey I. Use of guidelines in primary care–practitioners’ perspectives. Fam Pract. 1998;15(2):105–11.

Al-Ghamdi S. Popularity and impact of using smart devices in medicine: experiences in Saudi Arabia. BMC Public Health. 2018;18(1):531.

Ozdalga E, Ozdalga A, Ahuja N. The smartphone in medicine: a review of current and potential use among physicians and students. J Med Internet Res. 2012;14(5):e128.

Hedhli A, Nsir S, Ouahchi Y, Mjid M, Toujani S, Dhahri B. Contribution of mobile applications to learning and medical practice. Tunis Med. 2021;99(12):1134–40.

Liu Y, Ren W, Qiu Y, Liu J, Yin P, Ren J. The Use of Mobile Phone and Medical Apps among General Practitioners in Hangzhou City, Eastern China. JMIR mHealth uHealth. 2016;4(2):e64.

Ventola CL. Mobile devices and apps for health care professionals: uses and benefits. P T. 2014;39(5):356–64.

Gagnon MP, Pluye P, Desmartis M, Car J, Pagliari C, Labrecque M, et al. A systematic review of interventions promoting clinical information retrieval technology (CIRT) adoption by healthcare professionals. Int J Med Informatics. 2010;79(10):669–80.

National Population and Talent Division. Population in Brief 2023: Key Trends. 2023 [updated 29 Sep 2023]. Available from: https://www.population.gov.sg/media-centre/articles/population-in-brief-2023-key-trends/#:~:text=Overall%2C%20Singapore's%20total%20population%20stood,5.0%25%20increase%20from%20June%202022 .

Statista Research Department. Number of smartphone users in Singapore from 2019 to 2028. 2023 [updated 12 Sep 2023]. Available from: https://www.statista.com/statistics/494598/smartphone-users-in-singapore/#:~:text=In%202022%2C%20the%20number%20of,over%206.16%20million%20by%202028 .

Maggio LA, Aakre CA, Del Fiol G, Shellum J, Cook DA. Impact of electronic knowledge resources on clinical and learning outcomes: systematic review and meta-analysis. J Med Internet Res. 2019;21(7):e13315.

Ministry of Health. Temporary internet surfacing separation implemented at all public healthcare clusters. 2018 [updated 07/11/2022]. Available from: https://www.moh.gov.sg/news-highlights/details/temporary-internet-surfacing-separation-implemented-at-all-public-healthcare-clusters .

Booth A, Hannes K, Harden A, Noyes J, Harris J, Tong A. COREQ (Consolidated Criteria for Reporting Qualitative Studies). In: Guidelines for Reporting Health Research: A User's Manual. 2014. p. 214–26.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893–907.

Burnard P. A method of analysing interview transcripts in qualitative research. Nurse Educ Today. 1991;11(6):461–6.

Ely JW, Osheroff JA, Gorman PN, Ebell MH, Chambliss ML, Pifer EA, et al. A taxonomy of generic clinical questions: classification study. BMJ. 2000;321(7258):429–32.

Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 2: Context, research questions and designs. Eur J Gen Pract. 2017;23(1):274–9.

Wolters Kluwer. UpToDate: Industry-leading clinical decision support 2023 Available from: https://www.wolterskluwer.com/en/solutions/uptodate .

Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of information seeking behavior in physicians. Int J Med Informatics. 2003;71(1):9–15.

Correa VC, Lugo-Agudelo LH, Aguirre-Acevedo DC, Contreras JAP, Borrero AMP, Patiño-Lugo DF, et al. Individual, health system, and contextual barriers and facilitators for the implementation of clinical practice guidelines: a systematic metareview. Health Res Policy Syst. 2020;18(1):74.

Low S, Lim T. Utility of the electronic information resource UpToDate for clinical decision-making at bedside rounds. Singapore Med J. 2012;53(2):116–20.

Campbell JM, Umapathysivam K, Xue Y, Lockwood C. Evidence-Based Practice Point-of-Care Resources: A Quantitative Evaluation of Quality, Rigor, and Content. Worldviews Evid Based Nurs. 2015;12(6):313–27.

American Accreditation Commission International. @TRUST Certificate 2024 [updated 2024]. Available from: https://aacihealthcare.com/certificates/c173-2022-trust-usa/ .

Statista Research Department. Smartphone market in Singapore-Statistics and facts 2022 [updated 30/08/2022]. Available from: https://www.statista.com/topics/5842/smartphones-in-singapore/#dossierKeyfigures .

Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS ONE. 2013;8(11):e80318.

Gagnon M-P, Ngangue P, Payne-Gagnon J, Desmartis M. m-Health adoption by healthcare professionals: a systematic review. J Am Med Inform Assoc. 2015;23(1):212–20.

Brassil E, Gunn B, Shenoy AM, Blanchard R. Unanswered clinical questions: a survey of specialists and primary care providers. J Med Libr Assoc. 2017;105(1):4–11.

Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930–40.

Tu T, Palepu A, Schaekermann M, Saab K, Freyberg J, Tanno R, et al. Towards conversational diagnostic AI. arXiv preprint arXiv:2401.05654. 2024.

Li J, Dada A, Puladi B, Kleesiek J, Egger J. ChatGPT in healthcare: A taxonomy and systematic review. Comput Methods Programs Biomed. 2024;245:108013.

Acknowledgements

Not applicable.

This study is funded by a Seedcorn Grant from the Centre for Primary Health Care Research and Innovation, a joint initiative of the Lee Kong Chian School of Medicine and the National Healthcare Group Polyclinics.

Author information

Authors and affiliations.

Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Novena Campus Clinical Sciences Building 11 Mandalay Road, Singapore, 308232, Singapore

Mauricette Moling Lee & Lorainne Tudor Car

Singapore Institute of Technology, 10 Dover Drive, Singapore, 138683, Singapore

Mauricette Moling Lee

Clinical Research Unit, National Health Group Polyclinics (HQ), 3 Fusionopolis Link, Nexus @ One-North, Singapore, 138543, Singapore

Wern Ee Tang

Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Novena Campus Clinical Sciences, Building 11 Mandalay Road, Singapore, 308232, Singapore

Helen Elizabeth Smith

Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK

Lorainne Tudor Car

Contributions

Lorainne Tudor Car conceived the idea for this study. Tang Wern Ee contributed to the design of the work and the acquisition of the data. Mauricette Lee collected the data, analysed it and wrote the manuscript with support from Tang Wern Ee, Helen Smith, and Lorainne Tudor Car. Lorainne Tudor Car and Tang Wern Ee supervised the project.

Corresponding author

Correspondence to Lorainne Tudor Car .

Ethics declarations

Ethics approval and consent to participate.

This study was approved by the National Healthcare Group Domain Specific Review Board (the central ethics committee). National Healthcare Group Domain Specific Review Board Reference Number: 2018/01355. Informed consent was obtained from all participants. All methods were carried out in accordance with relevant guidelines and regulations, in accordance with the Declaration of Helsinki.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Supplementary Material 3.

Supplementary Material 4.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Lee, M.M., Tang, W.E., Smith, H.E. et al. Identifying primary care clinicians’ preferences for, barriers to, and facilitators of information-seeking in clinical practice in Singapore: a qualitative study. BMC Prim. Care 25 , 172 (2024). https://doi.org/10.1186/s12875-024-02429-x

Received : 22 December 2022

Accepted : 12 May 2024

Published : 18 May 2024

DOI : https://doi.org/10.1186/s12875-024-02429-x

  • Evidence-based medicine
  • Information-seeking behaviour


A qualitative analysis of post-hoc interviews with multilevel participants of a randomized controlled trial of a community-based intervention

Affiliations.

  • 1 Keck School of Medicine, University of Southern California, Los Angeles, California, United States of America.
  • 2 Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, California, United States of America.
  • 3 Los Angeles County Department of Public Health, Division of Chronic Disease and Injury Prevention, Los Angeles, California, United States of America.
  • 4 Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, California, United States of America.
  • 5 Department of Epidemiology, Los Angeles (UCLA) Fielding School of Public Health, University of California, Los Angeles, California, United States of America.
  • 6 Department of Family Medicine, David Geffen School of Medicine at UCLA, Los Angeles, California, United States of America.
  • 7 UCLA Clinical and Translational Science Institute, Population Health Program, Los Angeles, California, United States of America.
  • 8 Department of Health Policy and Management, UCLA Fielding School of Public Health, Los Angeles, California, United States of America.
  • 9 Associate Dean for Clinical Affairs, University of Southern California Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences, Los Angeles, California, United States of America.
  • PMID: 38723012
  • PMCID: PMC11081230
  • DOI: 10.1371/journal.pone.0303075

Introduction: Community-based health interventions often demonstrate efficacy in clinical trial settings but fail to be implemented in the real world. We sought to identify the key operational and contextual elements of the Los Angeles Barbershop Blood Pressure Study (LABBPS), an objectively successful community-based health intervention primed for real-world implementation. LABBPS was a cluster randomized controlled trial that paired the barbers of Black-owned barbershops with clinical pharmacists to manage uncontrolled hypertension in Black male patrons, demonstrating a substantial 21.6 mmHg reduction in systolic blood pressure. Despite this success, the LABBPS intervention has not expanded beyond the original clinical trial setting. The aim of this study was to determine the facilitating and limiting factors to expansion of the LABBPS intervention.

Methods: We undertook a qualitative assessment of semi-structured interviews with study participants performed after trial completion. Interviews included a total of 31 participants including 20 (6%) of the 319 LABBPS program participants ("patrons"), 10 (19%) barbers, and one (50%) clinical pharmacist. The semi-structured interviews were focused on perceptions of the medical system, study intervention, and influence of social factors on health.

Results: Several common themes emerged from thematic analysis of interview responses, including: the importance of care provided in a convenient and safe environment, individual responsibility for health and health-related behaviors, and engagement of trusted community members. In particular, patrons reported that receiving the intervention from their barber in a familiar environment positively influenced the formation of relationships with clinical pharmacists around shared efforts to improve medication adherence and healthy habits. All interviewee groups identified the trust dyad, comprising the familiar environment and a respected community member, as instrumental in increasing health-related behaviors to a degree not usually achieved by traditional healthcare providers.

Discussion: In conclusion, participants of an objectively successful community-based intervention trial consistently identified key features that could facilitate wider implementation and efficacy: social trust relationships, soliciting insights of trust bearers, and consistent engagement in a familiar community setting. These findings can help to inform the design and operations of future community-based studies and programs aiming to achieve a broad and sustainable impact.

Copyright: © 2024 Kohrman et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Publication types

  • Randomized Controlled Trial
  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Black or African American
  • Blood Pressure
  • Hypertension* / drug therapy
  • Hypertension* / therapy
  • Interviews as Topic
  • Los Angeles
  • Middle Aged
  • Pharmacists / psychology
  • Qualitative Research

  • Open access
  • Published: 13 May 2024

“ We might not have been in hospital, but we were frontline workers in the community ”: a qualitative study exploring unmet need and local community-based responses for marginalised groups in Greater Manchester during the COVID-19 pandemic

  • Stephanie Gillibrand 1 ,
  • Ruth Watkinson 2 ,
  • Melissa Surgey 2 ,
  • Basma Issa 3 &
  • Caroline Sanders 2 , 4  

BMC Health Services Research volume  24 , Article number:  621 ( 2024 ) Cite this article

160 Accesses

2 Altmetric

The response to the COVID-19 pandemic saw a significant increase in demand for the voluntary, community, faith and social enterprise (VCFSE) sector to provide support to local communities. In Greater Manchester (GM), the VCFSE sector and informal networks provided health and wellbeing support in multiple ways, culminating in its crucial supportive role in the provision of the COVID-19 vaccination rollout across the GM city region. However, the support provided by the VCFSE sector during the pandemic remains under-recognised. The aims of the study were to: understand the views and experiences of marginalised communities in GM during the COVID-19 pandemic; explore how community engagement initiatives played a role during the pandemic and vaccine rollout; assess what can be learnt from the work of key stakeholders (community members, VCFSEs, health-system stakeholders) for future health research and service delivery.

The co-designed study utilised a participatory approach throughout and was co-produced with a Community Research Advisory Group (CRAG). Focus groups and semi-structured interviews were conducted remotely between September-November 2021, with 35 participants from local marginalised communities, health and care system stakeholders and VCFSE representatives. Thematic framework analysis was used to analyse the data.

Local communities in GM were not supported sufficiently by mainstream services during the course of the COVID-19 pandemic, resulting in increased pressure on the VCFSE sector to respond to local communities’ needs. Community-based approaches were deemed crucial to the success of the vaccination drive and to providing support to local communities more generally during the pandemic, as such approaches were uniquely positioned to reach members of diverse communities and boost uptake of the vaccine. Despite this, the support delivered by the VCFSE sector remains under-recognised and under-valued by the health system and decision-makers.

Conclusions

A number of challenges associated with collaborative working were experienced by the VCFSE sector and the health system in delivering the vaccination programme in partnership. There is a need to create a broader, more inclusive health system which allows and promotes inter-sectoral working. Flexibility and adaptability in ongoing and future service delivery should be championed to enable greater cross-sector working.

Peer Review reports

The response to the COVID-19 pandemic saw a significant increase in demand for the voluntary, community, faith and social enterprise (VCFSE) sector to provide support to local communities [ 1 , 2 ]. The role of communities was seen as crucial to supporting the pandemic response, to better mobilise public health pandemic responses and supportive health services [ 3 ]. VCFSE organisations nationally had to quickly mobilise to adapt their service offer to meet increased demand, new gaps in service provision and deliver services in different ways to address the challenges faced by local communities. These included loss of income and financial hardship, closure of schools and childcare, increased social isolation, digital exclusion, and increased mental health issues [ 4 ]. However, previous research has concluded that support provided by the voluntary sector during the pandemic has been under-recognised [ 5 ]. Some authors have explored the role that VCFSEs played at the national level, in supporting communities during the pandemic [ 4 , 5 , 6 ]. Yet, whilst it is well-known that tens of thousands of UK volunteers supported local vaccine delivery [ 7 ], no existing academic literature has explored the role of VCFSEs in supporting the vaccination rollout.

We focus on Greater Manchester (GM), where increased support from VCFSE organisations, including smaller, community-based networks, responded to increased demand from local communities and the NHS to provide key health and wellbeing-related services, including food and care packages for clinically vulnerable households, food bank services, support for people experiencing homelessness, mental health and domestic violence services and support to local community organisations [ 8 ]. This support culminated in the sector’s supportive role in the delivery of the COVID-19 vaccination rollout, in response to the need for mass immunisation across the region.

Over the last decade, the English health and care system has been evolving to integrate health and social care. A key focus is building closer working relationships between the NHS, local authorities and other providers (including the VCFSE sector) to deliver joined-up care for communities [ 9 , 10 ]. To aid integration, a new model for organising health and care on different geographical footprints has been developed: Integrated Care Systems (ICSs), place-based partnerships and neighbourhood models. These collaborative partnerships bring together existing health and care organisations to coordinate health and care planning and delivery in a more integrated way and include councils, NHS provider trusts, Primary Care Networks, GP federations and health and care commissioners [ 11 ]. These new geographically-based partnerships have an emphasis on collaborative working beyond traditional health and care partners. This includes acknowledging the role that VCFSE organisations can have in supporting wider population wellbeing, particularly as part of multi-disciplinary neighbourhood teams embedded in local communities [ 12 ]. National guidance on the development of ICSs and place-based partnerships strongly encourages health and care leaders to include VCFSE organisations in partnership arrangements and embed them into service delivery [ 12 ]. In GM, the partnership working approach pre-dates the formal mandating of ICSs, with a combined authority which brings together the ten local authorities, an association of Clinical Commissioning Groups (CCGs) which represented health commissioners, and a VCFSE umbrella group which also operates as a joint venture to represent the sector’s interests at a GM level Footnote 1 . However, reorganisation to the ICS system may present new local challenges for the VCFSE sector to find a meaningful ‘seat at the table’. Notwithstanding this, the COVID-19 pandemic coincided with the development of ICSs and place-based partnerships, arguably constituting one of the earliest and most intense tests of partnership working across health and care organisations within the current policy landscape.

Here, we present findings from a co-designed qualitative research project, drawing on insights from 35 participants, including members of diverse communities in GM, VCFSE participants, and key decision-making health and care system stakeholders. The aims of the study were to: understand the views and experiences of marginalised communities in GM during the COVID-19 pandemic; explore how community engagement initiatives played a role during the pandemic and vaccine rollout; assess what can be learnt from the work of key stakeholders (including community members, VCFSEs, health and care system stakeholders) for future health research and service delivery. The rationale for the study developed from a related piece of work assessing inequalities in the COVID-19 vaccine uptake in GM [ 13 ]. At that time, there was little research on the experiences of under-served communities during the pandemic. As such, the public and stakeholder engagement for the related project identified a need for a qualitative workstream to explore more fully the drivers behind and context surrounding the vaccination programme in GM, centring also local communities’ experiences during the pandemic (explored in a related paper [ 14 ]).

In this paper, we examine the role the VCFSE sector played in supporting unmet needs for marginalised groups in GM during the COVID-19 pandemic and as part of the rapid rollout of the COVID-19 vaccination programme. We consider the opportunities and barriers that may influence the full integration of the VCFSE sector into health and care services in the future. This paper provides additional evidence around the role of local community-led support in the context of identified unmet needs from marginalised local communities. Whilst focused on GM, it provides an exemplar of the role of VCFSEs and community networks during the pandemic, with relevant learning for other regions and international settings with place-based partnerships.

Study design

The study utilised a participatory approach throughout and was co-designed and co-produced with a diverse Community Research Advisory Group (CRAG). The CRAG were members of local community groups who were disproportionately impacted by the COVID-19 pandemic, including one member who is a co-author on this paper. This included members of three VCFSE organisations working with specific ethnic minority communities including Caribbean and African, South Asian and Syrian communities.

CRAG members acted as champions for the research, supporting design of appropriate information and fostering connections for recruitment via their existing community networks. The strong partnerships built through our approach were crucial to enabling a sense of trust and legitimacy for the research amongst underserved communities invited to participate.

Interviews and focus groups took place between September-November 2021 and sought to explore: the context surrounding the rollout of the vaccination programme; key aspects of support delivered as part of the vaccination programme; the use of localised approaches to support vaccine delivery including engagement initiatives, as well as broader community-level responses to the COVID-19 pandemic; perceptions around barriers to vaccine uptake Footnote 2 ; experiences of local communities (including healthcare) during the pandemic Footnote 3 . During the data collection period, national pandemic restrictions were largely lifted with no restrictions on social distancing or limits to gatherings, and all public venues reopened. A self-isolation period of 10 days after a positive COVID-19 test remained a legal requirement, but self-isolation after contact with a positive case was not required if fully vaccinated [ 15 ]. By July 2021, every UK adult had been offered their first dose of the COVID-19 vaccine, with every adult offered both doses by mid-September 2021 [ 16 ]. By early September 2021, more than 92 million doses had been administered in the UK [ 15 ].

Interviews and focus groups were conducted by one member of the research team (SG) and were conducted remotely due to the pandemic, via Zoom and telephone calls. The limitations of undertaking remote qualitative research interviews are acknowledged in academic literature, including potential restrictions to expressing compassion and assessing the participant’s environment [ 17 , 18 ]. However, given the remaining prevalence of COVID-19 at the time of interview, it was judged that the ensuing risk posed by COVID-19 to both researchers and participants outweighed the potential drawbacks. Nevertheless, participants were offered face-to-face options if they were unable to participate remotely to maximise inclusion (although no participants chose to participate face-to-face).

Interviews and focus groups were audio-recorded with an encrypted recorder and transcribed by a professional transcription service. Informed written consent to participate was taken prior to the interviews and focus groups. The average length of the interviews was 34 min and the average length of the focus groups was 99 min. Two focus groups were co-facilitated by a CRAG member, a member of the local community who works for a mental health charity that supports local South Asian communities, who also provided translation support. With respect to the authors' positionality, co-authors SG, RW, MS and CS are university researchers in academic roles and had prior links to the CRAG members via a wider community forum (co-ordinated by the NIHR-funded Applied Research Collaboration for Greater Manchester). The wider group met regularly to discuss and share learning regarding community experiences, community action and related research during the pandemic. BI is a member of the CRAG and a member of a local Syrian community.

Sampling & recruitment

The sampling strategy for community participants centred on groups that had been disproportionately affected by the COVID-19 pandemic in England, including ethnic minority groups, young adults, and those with long-term physical and mental health conditions. VCFSE participants included community and religious leaders, members of local community VCFSE organisations and smaller, informal community networks and groups from local communities. Health and care system stakeholders included local council workers and those organising the vaccination response (e.g. in CCGs and GP Federations). Characteristics of the sample are provided in Table 1. Overall, the study achieved a diverse sample of participants on the basis of gender and ethnicity.

A combination of purposive and snowball sampling was used to recruit via pre-established links and connections to community networks and stakeholders, to ensure the inclusion of specific seldom-heard groups. For example, members of African and Caribbean communities were recruited via a charity which supports the health of these groups, and members of South Asian communities were recruited via a mental health charity.

Quotes are described by respondent type (community member, VCFSE participant, health and care system stakeholder) and participant identifier number to maintain anonymity whilst providing important contextual detail.

Data analysis

We analysed the data using an adapted framework approach [ 19 ]. We adopted a framework approach to analysis as this is viewed as a helpful method when working within large multidisciplinary teams or when not all members of the team have experience of qualitative data analysis, as was the case within our team. This structured thematic approach is also considered valuable when handling large volumes of data [ 20 , 21 ] and was found to be a helpful way to present, discuss and refine the themes within the research team and CRAG meetings. We created an initial list of themes from coding four transcripts and from discussions with CRAG members: personal or family experiences/stories; work/education experiences; racism and racialised experiences; trust and mistrust; fear and anxiety; value of community/community approaches; access to services including healthcare; operational and logistical factors around vaccine rollout; communication and (mis)information. We used this set of themes and subthemes to code the remaining transcripts, adding further inductively generated codes as the analysis progressed and discussing them regularly within the team.

We shared transcript coding amongst the study team, with one team member responsible for collating coded transcripts into a charting framework of themes/subthemes with illustrative transcript extracts. The themes were refined throughout the analysis period (November 2021-March 2022) with the research team and CRAG and were sense-checked with CRAG members and the wider study team, to synthesise a final iteration of the themes and sub-themes (see supplementary material). We present findings related to five overarching themes: (1) unmet needs of local communities during the pandemic: inaccessible care and distrust; (2) community-led approaches: social support and leadership to support services; (3) community led support to COVID-19 vaccination delivery; (4) operational and logistical barriers to community-based pandemic responses: challenges faced by the voluntary and community sector; (5) learning from the pandemic response in GM: trust building and harnessing community assets. Themes are discussed in more detail below.

Ethical approval

This study was approved by University of Manchester Ethics Committee (Proportionate University Research Ethics Committee) 24/06/21. Ref 2021-11646-19665.

Unmet needs of local communities during the pandemic: inaccessible care and distrust

The COVID-19 pandemic brought an unprecedented shift in the way NHS services could function due to social distancing and lockdown measures. Pressures included unprecedented demand on hospital capacity and infection control measures (within hospitals and across the NHS) which reduced workforce capacity. There were also staff shortages due to high levels of COVID-19 infection amongst NHS staff, and shortages in non-acute capacity due to staff re-deployment [ 22 , 23 ]. In an effort to reduce pressure on the NHS, the policy mantra “Protect the NHS” was coined as a keynote slogan from the early stages of the pandemic [ 24 ].

It is within this context that many community participants raised (spontaneously) that there was a general inability to access health services during the pandemic, including GP and specialist services.

when I tried to contact my doctor’s surgery I was on the call for over an hour, number 20, number 15. Then by the time I’m under ten I get cut off. And it happened continuously. I just couldn’t get through and I just gave up really…now it’s like a phone consultation before you can even go and see someone, and even for that you’re waiting two, three weeks. (1029, VCFSE participant)

This resulted in frustration amongst some community participants, who questioned the logic of “protecting the NHS”, seemingly at the expense of their health-related needs. This led to sentiments that other health needs were de-prioritised by decision-makers during the pandemic. It was felt that this logic was counter-productive and fell short of the principles of protecting the most vulnerable.

We were like it just didn’t matter, it could have been much more serious than just a cough or a cold, [ ] but the help was just not there (1028, community participant).

what about people who actually need to see a doctor so the very vulnerable ones that we’re supposed to be protecting. Yes, we’re protecting the NHS, I understand that, I said, but we’ve also got to protect all those vulnerable people that are out there that are actually isolated (1011, community participant).

Community participants described their fear of accessing healthcare services because of the potential risk of catching the virus in these settings, and fear of receiving insufficient care due to well-publicised pressures in NHS settings. Some VCFSE participants noted that the widely publicised pressures faced by the NHS and the heightened media and political attention around COVID-19 cases in health settings led to fear and anxiety (Footnote 4).

I didn’t go to the hospital because I was scared shitless whether I was going to come out alive from hospital (1023, community participant).

…the number of people who didn’t access services when they should have done… They were either terrified they were going to go into hospital and catch COVID straightaway and die, or they were terrified that they were taking [the hospital space] away from someone else (2003, VCFSE participant).

Overall, this led to a strong sense that mainstream services were not supporting the needs of local communities. This was felt especially by those requiring specialist services (e.g. mental health or secondary care services), by those facing intersecting inequalities, such as existing health issues and language or digital/IT barriers, and by newly settled refugees and immigrants.

Community-led approaches: social support and leadership to support services

As a consequence of this unmet need, VCFSE and community participants identified that local communities themselves increased activities to provide community support. Participants felt strongly that this increased support provided by the VCFSE sector and community networks remains under-recognised and under-valued by the health system and wider public.

BAME organisations were going around door to door, giving hand sanitisers, giving masks to everybody [ ]. And it was the BAME community that was the most active during COVID delivering medication, delivering food to houses, doing the shopping. [ ] Nobody gave credit to that. Nobody talks about the good work that the BAME community has done. (1020, community participant)

A number of community and VCFSE sector participants highlighted the work done at the community level, by either themselves or other networks to support local communities. This included providing support packages, running errands for vulnerable community members, cooking and food shopping services, a helpline and communication networks for local communities, and online wellbeing and support groups.

We might not have been in hospital, but we were frontline workers in the community. (1028, community participant)

Support was provided by formal VCFSE organisations and by smaller, sometimes informal, community networks and channels, in which support mechanisms included mental health support and wellbeing-focused communications to combat loneliness and boost wellbeing. This was often focused on outreach and the provision of community-based support to the most marginalised and vulnerable groups that had been disproportionately impacted during the pandemic, e.g. recently settled refugees and asylum seekers, and older individuals.

We have an Iranian group in Salford…And one of them spotted this young woman in the queue and she thought she looked Iranian, you know….anyway she started a conversation, and this person had been an asylum seeker at the beginning of the pandemic and had been in a detention centre during the pandemic. And then, finally got their leave to remain and then were just basically dumped in Salford. [ ] just having that friendly face and someone was trying to start that conversation, she was able to be linked into this group of women who support other refugees and asylum seekers from the Middle East. (2014, VCFSE participant)

Community-led support to COVID-19 vaccination delivery

The VCFSE sector and community networks also played a crucial part in supporting COVID-19 vaccine delivery. Community, VCFSE and system-sector participants recognised the unique role that the VCFSE sector had played in reaching diverse communities and sections of communities not reached by the mainstream vaccination programme. For example, VCFSE groups aided vaccine delivery by helping run vaccine ‘pop-up’ sites in community spaces including mosques and other religious sites, children’s centres, and local specialist charities (e.g. refugee and sex worker charities).

The use of community ‘champions’ and community ‘connectors’ to convey messaging around the vaccination drive was deemed especially vital in this regard. Trusted members of communities (e.g. community leaders) who had crucial pre-existing communication channels were able to effectively interact with different parts of communities to advocate for the vaccine and address misinformation. Situated within communities themselves, these ‘champions’ held established trust within communities, allowing conversations surrounding the vaccine to be held on the basis of shared experiences, honesty, openness, compassion and understanding.

So, as with any ethnic minority community, unless you’re part of it, it’s almost impossible to completely dig out all its norms and its very, very fine distinctions…[ ] what is acceptable, what is not acceptable[ ]? Unless you’re part of it, or you’ve really immersed yourself in the culture for decades, it’s almost impossible to get it (2015, VCFSE participant).

One of the strongest approaches that you can take to increase uptake in any community, whether it be pregnant women or a faith group or a geographical area or a cultural group, is that if you’ve got a representative from that community leading on and advocating for the vaccine, you’re going to have the best impact (2011, health and care system stakeholder participant).

unless Imams or significant people in the community were coming out for them and saying, it’s absolutely fine, it’s safe, and culturally it’s the right thing to do, there was a bit of uncertainty there (2010, health and care system stakeholder participant).

Health and care system stakeholders also emphasised the importance of “community ownership” of vaccination approaches, and of system responsiveness to identified needs and priorities at the community level. Health and care system stakeholders recognised that they were able to use community links to gain better on-the-ground knowledge, provided in real time, to supplement locally held data and inform targeted efforts to boost uptake. This included council-led initiatives such as door-knocking, with council staff, local health improvement practitioners, and VCFSE representatives working together to provide information about vaccine clinics and register people for vaccine appointments.

if messages went out and they didn’t land right they [the VCFSE sector] could be the first people [that] would hear about that and they could feed that back to us. [ ]….we were able to regularly go to them and say, look from a geographical perspective we can see these key areas…[ ] the people aren’t coming for vaccinations, [ ] what more can you tell us. Or, we can say, from these ethnicities in this area we’re not getting the numbers, what more can you tell us. And when we’ve fed them that intelligence then they could then use that to go and gain further insight for us, so they were a kind of, key mechanism (2010, health and care system participant).

Operational and logistical barriers to community-based pandemic responses: challenges faced by the voluntary and community sector

VCFSE sector and health and care system stakeholder participants reported significant logistical barriers to partnership working to support communities during the pandemic. Barriers included red tape and bureaucracy, which delayed responses to communities’ health and wellbeing needs.

whilst we were buying masks and hand sanitisers and going door to door, [ ] the council were still getting their paperwork in order, their policies in order, it was meeting after meeting. It took them seven to eight weeks for them to say [ ] we’ve got masks, would you like to help dish them out. (1029, VCFSE participant)

VCFSE and health and care system participants also raised challenges in enabling the VCFSE sector to support the vaccination programme. These challenges caused frustration amongst both VCFSE and health and care system participants, who recognised the value of these community-based approaches.

The time that trickles through to the council and the time that the council turn around and say all right, we’ll actually let you do it was weeks later, and the community is turning round to us and saying to us well, what’s going on? We don’t like being messed around like this… (2008, VCFSE participant).

Participants highlighted that the numerous health-related bodies, each with different roles, comprise a complex system for VCFSE partners to navigate, with difficulties compounded by organisational and cultural clashes. Frustration was felt by both VCFSE and health and care system stakeholder participants (from local councils) in this respect. One VCFSE participant discussing the vaccine rollout noted:

We hit dead end after dead end within the council and there was literally very little response….You’ve got so many departments within this massive organisation called the council…[ ].it’s very difficult to navigate all that and deal with all that bureaucracy… (2008, VCFSE participant).

Broader institutional and organisational barriers to VCFSE support were identified, where cultural clashes between differing values and ways of working emerged, including an ethos of risk aversion and a system-level commitment to prioritising value-for-money during the vaccination rollout. More practical issues around information governance and training were also raised as barriers to collaborative working.

I don’t think that they understand the power of community and the way community works. I don’t think that at a governmental level they understand what it means to penetrate into a community and actually understand what needs to be done to help a community…[ ] If they did and they had better links and ties into understanding that and helping that then we likely wouldn’t have had so many hurdles to get through (2008, VCFSE participant).

….in terms of public money, this is a public programme, we need to get value for the public pound. So we’re saying to [VCFSE organisation], how much is it going to cost? And [VCFSE organisation] are like, well, we don’t really know, until we deliver it. And we’re like, well, we can’t really approve it, until we know what it’s going to cost…. (2006, health and care system stakeholder participant)

Overall, these issues amounted to difficulties in power-sharing between public sector organisations and VCFSEs at a time of rapid response to a public health crisis and of political, institutional and other external pressures. This was echoed amongst VCFSE and health and care system stakeholder participants, with frustration felt on both sides.

the public sector [ ] need to get better at letting go of some of the control. So even still, after I said, so many times, [VCFSE organisation] are delivering this, [VCFSE organisation] are doing everything, [ ] I still got the comms team going, are we doing a leaflet? No, [VCFSE organisation] are doing it, this is a [VCFSE organisation] programme, this isn’t a Council programme. (2006, local authority participant)

it is difficult sometimes working with organisations, I find myself very much stuck in the middle sometimes [ ] I engage with [community groups] and ask them how best we do it and then we put things in place that they’ve asked for, and then they’ve told us it’s not working why have you done it like that. [ ] I think it’s acknowledgement to do it right, it takes time, and it takes effort, it takes resource. (2010, local authority participant)

Health and care system stakeholders also highlighted the importance of accessible, localised vaccination hubs to reach different parts of diverse local communities, e.g. sites in local mosques and sites near local supermarkets to reach different demographics. Examples included mobile vaccination sites to reduce accessibility barriers, alongside dialogue-based initiatives to answer questions and respond to concerns from local communities about the vaccine, with a view to building trust without explicit pressure to receive the vaccine. Describing their efforts to engage with a member of the local community over the vaccine, two local health and care system stakeholders detailed the following example of how localised, communication-based approaches were successful:

She came to the clinic and there were a lot of tears. It was very emotional. She’d been through a very difficult journey and had got pregnant by IVF, so it was a big decision for her, a big risk that she thought she was taking. Whether she took the vaccine or not, it felt like a risk to her, [ ] we were able to sit down and talk to her. We had some peers there. So we had other pregnant women there who’d had the vaccine, that were able to give her some confidence. We had the specialist multicultural midwife there, [ ] And we literally just sat and drank coffee with her and let her talk and she ended up agreeing to have the vaccine [ ] (2011, system-level stakeholder).

…And the feedback from that lady was amazing. A couple of weeks ago I contacted her to make sure she was going to come down for her booster and she was just so grateful. [ ] she’d had backlash from her family and people within her community for taking up the vaccine and they still thought it was a massive risk. But she had no doubts that she’d done absolutely the right thing… (2012, system-level stakeholder).

Learning from the pandemic response in GM: trust building and harnessing community assets

Drawing together these accounts from health and care system stakeholders, community and VCFSE participants, several learning points were identified.

In terms of vaccine delivery, some health and care system stakeholder participants reflected on the need for more joined-up ways of working, across existing services and amongst VCFSE partners, to ensure efficiency and maximise uptake by embedding the vaccination programmes into other health services. For example, offering vaccination through health visiting or health checks, or offering COVID-19 vaccine boosters and flu vaccinations in single visits at care homes. These settings could also provide opportunities for dialogue with local communities where there is pushback against vaccination. Another health and care system stakeholder identified the need for greater joined-up delivery of services, utilising the VCFSE sector to deliver multiple services simultaneously, including the vaccine, to improve vaccine uptake and access to other healthcare services:

the sex worker clinic is a good example of that. [ ] People were coming in for another reason, to get their health check and to get their support from the advisors there at that voluntary organisation, [ ]…if there’s a multiple purpose at the site, for people to attend, you can start to engage them in the conversation and then take the opportunity and vaccinate them. So I’m really interested in looking at that a little bit more, about how that can help to increase uptake. (2011, health and care system stakeholder participant)

A VCFSE participant suggested using educational settings such as schools as a channel to disseminate public health and vaccine-related information, as trusted settings with wide reach into many different communities.

A number of health and care system stakeholders, VCFSE and community participants noted that long-term, continuous, meaningful engagement is crucial to build longer-term trust between institutions and communities, and to improve the efficacy of public health measures. It was felt that more concentrated efforts were required from the NHS and other statutory organisations to reach the most marginalised and minoritised communities, for example through door-knocking and welfare calls. Participants highlighted that this was required not solely at times of public health crises, but as part of continued engagement efforts, in order to adequately engage with the most marginalised groups and effectively build long-term trust. This may be done most effectively by building on existing links to marginalised communities, for example using education liaison staff to understand traveller communities’ perspectives on the vaccine.

proactive engagement with communities both locally and nationally to say, [the health system] are looking at this, what’s people’s thoughts, views, you know, is there any issues with this, what more can we do, what do you need to know to make an informed decision. This is what we were thinking of, how would this land…I think we could learn by, [ ] doing that insight work, spending more time working with communities at a kind of, national, regional, and local level (2010, health and care system stakeholder participant).

[the health system] could have engaged better with communities, I think bringing them in at the beginning. So, having them sat around the table, representatives from different groups, understanding how to engage with them from the very beginning…I think they could have used the data very very early on to inform who were engaging. We didn’t quite get it right at the beginning, we didn’t link the public health data teams with the comms and engagement teams (2013, health and care system stakeholder participant).

The tone of communications was also seen to be important. One health and care system stakeholder participant noted that the strategy of pushing communications and public health messaging aimed at behavioural change did not achieve the desired effect, because such messaging did not engage effectively with communities to alleviate or address key concerns about the vaccine. These approaches were deemed less successful than starting from a place of understanding and openness to generate constructive dialogue which could foster trust and respect.

There was also more specific learning identified in terms of collaboration between public sector institutions, VCFSEs and community links, with this seen as vital to build strong, long-term relationships between sectors based on trust and mutual respect. This should also involve working to share knowledge between sectors in real-time.

Health and care system stakeholder and VCFSE participants both suggested that a failure to further develop the partnerships fostered during the pandemic would be a lost opportunity that could create distrust and additional barriers between communities, VCFSEs and public organisations, perhaps further marginalising seldom-heard groups.

we need to find ways which we have ongoing engagement, and I think it needs to be more informal. People don’t want to be just constantly asked and asked and asked (2010, health and care system stakeholder participant).

a network of just sharing information and insight, rather than just engaging when you’ve got something specific to engage about. (2010, health and care system stakeholder participant)

We were then thinking to ourselves, well, maybe we shouldn’t be doing this. If it’s going to cause us damage, if the council can’t work with us properly maybe we just shouldn’t do it. We’ve got to weigh up. We don’t want to lose our trust within the community (2008, VCFSE participant).

In terms of dynamics and working arrangements between sectors, participants thought it important to allow community organisations and VCFSEs to lead on their areas of speciality, e.g. community organisations leading on outreach and communications within and to communities. This relates to the identified need to pursue adaptable and flexible approaches to vaccine delivery. Moreover, there is a need to allow more joined-up decision-making between the health system and VCFSEs to ensure better use of local intelligence and improved planning.

Discussion & policy implications

Unmet need and the role of communities during the pandemic

Our findings clearly demonstrate that local communities were not supported sufficiently by mainstream services during the COVID-19 pandemic. This in turn led to frustration, fear and loss of faith in the healthcare system as a whole, evidenced also in responses to the COVID-19 vaccination programme in which distrust results from wider experiences of historical marginalisation and structural inequalities [ 14 ]. In the absence of mainstream service support, our findings demonstrate how VCFSE organisations and community networks mobilised to support local communities to fulfil unmet health, social care, and wellbeing needs. This supports emerging evidence from across England which finds that the VCFSE sector played a key role in supporting communities during the pandemic [ 6 , 8 , 25 ].

The importance of community-based, localised approaches, community-led and community-owned initiatives, ‘community champions’ and ‘community connectors’ was also highlighted as crucial to the success of the COVID-19 vaccination drive. Participants noted that community-led approaches were uniquely positioned to reach some communities when mainstream approaches were unsuccessful. This is echoed in existing literature, where the role of localised community responses was deemed important in reaching marginalised groups as part of the wider pandemic response [ 26 ].

Operational and logistical barriers

Operational and logistical barriers created dissonance between communities and the system. These barriers included difficulties with decision-making and power-sharing between VCFSE and commissioning or clinical organisations, organisational culture clashes, red tape and bureaucracy, and complex systems and power structures to navigate. This builds on existing evidence of barriers to partnership working during the pandemic, including cultural clashes and bureaucracy/red tape [ 5 , 27 ]. The VCFSE sector also suffered from the closure of services and reduced funding and resources, due to increased demand for services and the need to adapt service provision [ 8 ].

These factors hindered collaborative working and created risk for VCFSEs, including straining relationships with local communities as a result of delays in implementing services. Participants noted that, in most VCFSE-health system partnerships, power is generally held by the health system partner, while reputational risk and additional resource-based costs lie with VCFSE partners. Supporting capacity building and workforce resources within the voluntary sector would help to address this imbalance [ 28 ].

Inadequate processes for establishing collaborative working deepen distrust between the health system and the VCFSE sector, which in turn makes collaborative working more difficult. Trust is an important factor in how the system interacts with VCFSEs, with a lack of trust leading to further bottlenecks in VCFSE activities [ 29 ]. Alongside this is the need for greater health system appreciation of the VCFSE sector, with VCFSE partners reporting that they faced greater scrutiny and more arduous administrative processes than private sector partners [ 2 , 29 ].

Learning from the pandemic: service prioritisation

All sectors of the health and care system face pressures from resource shortages and from internal and external targets [ 30 , 31 ]. This is often linked to drives to increase the value-for-money of services, but key questions remain as to how to assimilate the goals of achieving health equity within value-for-money objectives [ 32 ]. In practice, prioritising value-for-money may be at odds with reducing health inequities. For example, during the rollout of the vaccination programme, additional resources and innovative approaches were required to reach marginalised communities [ 33 , 34 ]. This is supported by emerging evidence from England and internationally that efforts to drive vaccination uptake and reduce inequities in uptake amongst marginalised populations require significant resources and a breadth of approaches [ 34 ]. Our findings suggest that changes in vaccine uptake were smaller and slower to be realised in these populations, resulting in a “slow burn” in terms of demonstrating quantifiable outcomes. Given the NHS principles of equity [ 10 , 35 ], reaching these groups should remain a public health priority, and failure to prioritise them may incur greater long-term financial costs resulting from greater health service needs. Our findings suggest that challenging entrenched attitudes and frameworks for how success is measured, and adapting structures to better incentivise targeted interventions for marginalised or high-risk groups, are essential to addressing unmet needs amongst marginalised communities.

The changing commissioning landscape

The development of ICSs and place-based partnerships has changed how health and care services are commissioned. National guidance encourages health and care leaders to include VCFSE organisations in partnership arrangements and embed them into service delivery [ 12 ], with ‘alliance models’ between ICSs and the VCFSE sector [ 36 ] established in certain regions (see, for example, [ 37 ]). However, this rests on “a partnership of the willing” [ 37 ] between ICS partners and VCFSE sector players, and concrete guidance for achieving collaborative working in practice is lacking. As the findings in this paper indicate, evolving decision-making processes may add to the resource burdens facing VCFSE organisations. Traditional health and care partners such as the NHS and local authorities should consider how their ways of working may need to change to foster full VCFSE inclusion on an equal footing; otherwise, only those VCFSE stakeholders with sufficient capacity and resources may be able to be meaningfully involved.

Creating a VCFSE-accessible health and care system

In terms of fostering relationships between different sectors, participants acknowledged that pre-pandemic efforts to engage communities, community networks and VCFSEs were insufficient, with more meaningful, well-resourced engagement required going forward. Participants also identified the importance of avoiding tokenistic involvement of the VCFSE sector, which can be counter-productive to developing meaningful long-term partnerships. More equal relationships between statutory and VCFSE sectors are needed to foster improved collaborative working [ 5 , 38 ], and this has already been identified at the GM level [ 28 ]. Central to this are actioned principles of co-design, including power-sharing, community ownership and trust. For co-design strategies to be successful, the role of the VCFSE sector and its ownership of approaches must be recognised and championed within those strategies and in the enactment of co-designed activities.

Relatedly, greater trust of the VCFSE sector to deliver services effectively and efficiently is needed from health and social care decision-makers, to ensure that funding compliance measures and processes are proportionate and not overly burdensome and to avoid funding bottlenecks which in turn impact service delivery [ 2 ]. Currently, at the national level, VCFSE applicants typically only become aware of funding through existing networks, leaving less-connected organisations to find out ‘by chance’, thereby limiting reach amongst other organisations [ 2 ]. This may be especially true for smaller or ad hoc VCFSE networks and groups. Our findings suggest that bottlenecks to applying for funding should be removed, and more streamlined processes for accessing funding championed [ 2 ].

Our findings also suggest that health systems should engage with the full breadth of the VCFSE sector, creating space for the involvement of smaller scale and less formal organisations as partners. Sharing of best practice and advice for adapting to local contexts should be promoted, alongside evaluation of collaborative models.

Finally, the pandemic period saw unprecedented state-sponsored investment into the VCFSE sector [ 29 ]. Within the GM context, this funding enabled VCFSEs to develop organisational capacity and systems, develop new partnerships, and better respond to the (unmet) needs of local communities [ 39 ]. Currently there are no clear plans to maintain this investment, but sustained inter-sector partnership working will require continued investment in the VCFSE sector.

Strengths & limitations

There are two main limitations to this study. Firstly, whilst the study achieved diversity in its sample, we could not achieve representation across all marginalised communities and therefore could not cover the experiences of all marginalised communities in depth. As such, whilst the analysis provides valuable insights, these insights may not be transferable and do not reflect all communities in GM. Secondly, whilst other studies have focused on multiple city-regions or areas, our study is limited to the city region of GM. However, this focus provides an in-depth analysis of one region, and, as we discuss in the framing of the paper, we contend that the analysis presented here serves as an exemplar to explore further at the national and international level. It should also be noted that co-design approaches are inevitably time- and resource-intensive, and this was challenging in the context of this study, as local stakeholders wanted timely insights to inform the vaccination programme. However, a key strength of our participatory approach was that it enabled a direct connection with the experiences of communities relevant to the research, shaping the research questions as well as the design and conduct of the study.

Overall, the contribution of the VCFSE sector during the pandemic is clear, with significant support provided for community health and wellbeing and for vaccination delivery. Nevertheless, there remains much to learn from the pandemic period, with the potential to harness capacity to tackle inequalities and build trust through shared learning and greater collaborative working. Maintaining an environment in which VCFSE partners are under-recognised, under-valued, and seemingly face further bureaucratic barriers will only exacerbate barriers to collaborative working. There are also significant questions around systemic issues and sustainability, which must be addressed to overcome existing barriers to collaborative working between sectors. For instance, our findings identify the importance of flexibility and adaptability in ongoing and future service delivery. Where this is not pursued, service delivery may suffer and roadblocks to collaboration may be created, dividing organisations that are ultimately trying to effect change towards similar goals (i.e. improved population health). ICS–VCFSE Alliances and community connectors may be a mechanism to promote this, but clear, actionable guidance will be required to translate rhetoric into real-world progress.

Data availability

Data for this research will not be made publicly available as individual privacy could be compromised. Please contact Stephanie Gillibrand ([email protected]) for further information.

10GM is an umbrella group which seeks to represent the VCSE sector in GM. More information is available here: https://10gm.org.uk/ .

These themes are explored in a related paper by Gillibrand et al. [ 14 ].

Topic guides are provided as supplementary material.

Distrust was also raised in relation to fear and anxiety in NHS settings, and this is discussed in detail in a related paper from this study by Gillibrand et al. [ 14 ].

Abbreviations

CCG: Clinical Commissioning Groups

CRAG: Community Research Advisory Group

GM: Greater Manchester

ICS: Integrated Care Systems

VCFSE: Voluntary, Community and Social Enterprise

Craston M, Mackay S, Cameron D, Writer-Davies R, Spielman D. Impact Evaluation of the Coronavirus Community Support Fund. 2021.

NatCen Social Research. Evaluation of VCSE COVID-19 Emergency Funding Package. Department for Digital, Culture, Media & Sport (DCMS); 2022. 27 April 2022.

Marston C, Renedo A, Miles S. Community participation is crucial in a pandemic. Lancet. 2020;395(10238):1676–8.


Frost S, Rippon S, Gamsu M, Southby K, Bharadwa M, Chapman J. Space to Connect Keeping in Touch sessions: A summary write up (unpublished). Leeds: Leeds Beckett University; 2021.

Pilkington G, Southby K, Gamsu M, Bagnall AM, Bharadwa M, Chapman J, Freeman C. Through different eyes: How different stakeholders have understood the contribution of the voluntary sector to connecting and supporting people in the pandemic.; 2021.

Dayson CaW A. Capacity through crisis: The role and contribution of the VCSE Sector in Sheffield during the COVID-19 pandemic; 2021.

Timmins B. The COVID-19 vaccination programme: trials, tribulations and successes. The Kings Fund; 2022.

Howarth M, Martin P, Hepburn P, Sheriff G, Witkam R. A Realist evaluation of the state of the Greater Manchester Voluntary, Community and Social Enterprise Sector 2021. GMCVO/University of Salford; 2021.

NHS England. Five Year Forward View. Leeds. 2014 October 2014.

NHS England. The NHS Long Term Plan. NHS England. 2019 January 2019.

Surgey M. With great power: Taking responsibility for integrated care. 2022.

NHS England. Integrating care: next steps to building strong and effective integrated care systems across England. Leeds: NHS England; 2020.


Watkinson RE. Ethnic inequalities in COVID-19 vaccine uptake and comparison to seasonal influenza vaccine uptake in Greater Manchester, UK: a cohort study. PLoS Med. 2022;19(3).

Gillibrand S, Kapadia D, Watkinson R, Issa B, Kwaku-Odoi C, Sanders C. Marginalisation and distrust in the context of the COVID-19 vaccination programme: experiences of communities in a northern UK city region. BMC Public Health. 2024;24(1):853.


Cabinet Office. COVID-19 response: autumn and Winter Plan 2021. Guidance: GOV.UK; 2021.

Department of Health and Social Care. Every adult in UK offered COVID-19 vaccine [press release]. GOV.UK, 19 July 2021.

Irani E. The Use of Videoconferencing for qualitative interviewing: opportunities, challenges, and considerations. Clin Nurs Res. 2019;28(1):3–8.


Seitz S. Pixilated partnerships, overcoming obstacles in qualitative interviews via Skype: a research note. Qualitative Res. 2016;16(2):229–35.


Gale NK. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):1–8.

Castleberry A, Nolen A. Thematic analysis of qualitative research data: is it as easy as it sounds? Curr Pharm Teach Learn. 2018;10(6):807–15.

Braun V, Clarke V. Thematic analysis. In: APA handbook of research methods in psychology, Vol 2: Research designs: Quantitative, qualitative, neuropsychological, and biological. 2012. pp. 57–71.

Burn S, Propper C, Stoye G, Warner M, Aylin P, Bottle A. What happened to English NHS hospital activity during the COVID-19 pandemic? Briefing Note. IFS; 13 May 2021.

NHS. COVID-19: Deploying our people safely. 2020 [updated 30th April 2020]. https://www.england.nhs.uk/coronavirus/documents/COVID-19-deploying-our-people-safely/ .

Department of Health and Social Care. New TV advert urges public to stay at home to protect the NHS and save lives [press release]. Department of Health and Social Care, 21 January 2021.

McCabe A, Wilson M, Macmillan R. Stronger than anyone thought: communities responding to COVID-19. Local Trust, Sheffield Hallam University, TSRC; 2020.

McCabe A, Afridi A, Langdale E. Community responses to COVID-19: connecting communities? How relationships have mattered in community responses to COVID-19 Local Trust. TSRC, Sheffield Hallam University; 2022. January 2022.

Carpenter J. Exploring lessons from Covid-19 for the role of the voluntary sector in Integrated Care Systems. July 2021. Oxford Brookes University; 2021.

Greater Manchester Combined Authority. GM VCSE Accord Agreement. 2021. https://www.greatermanchester-ca.gov.uk/media/5207/gm-vcse-accord-2021-2026-final-signed-october-2021-for-publication.pdf .

Department for Digital, Culture, Media & Sport. Financial support for voluntary, community and social enterprise (VCSE) organisations to respond to coronavirus (COVID-19). Department for Digital, Culture, Media & Sport and Office for Civil Society. 2020 [updated 20th May 2020]. https://www.gov.uk/guidance/financial-support-for-voluntary-community-and-social-enterprise-vcse-organisations-to-respond-to-coronavirus-COVID-19 .

Smee C. Improving value for money in the United Kingdom National Health Service: performance measurement and improvement in a centralised system. Measuring Up: Improving Health Systems Performance in OECD Countries; 2002.

McCann L, Granter E, Hassard J, Hyde P. You can’t do both—something will give: limitations of the targets culture in managing UK health care workforces. Hum Resour Manag. 2015;54(5):773–91.

Smith P. Measuring value for money in healthcare: concepts and tools. London: Centre for Health Economics, University of York. The Health Foundation; 2009 September 2009.

Ekezie W, Awwad S, Krauchenberg A, Karara N, Dembiński Ł, Grossman Z, et al. Access to Vaccination among Disadvantaged, isolated and difficult-to-Reach communities in the WHO European Region: a systematic review. Vaccines. 2022;10(7):1038.

British Academy. Vaccine equity in Multicultural Urban settings: a comparative analysis of local government and community action, contextualised political economies and moral frameworks in Marseille and London. London: The British Academy; 2022.

NHS England. Core20PLUS5 (adults) – an approach to reducing healthcare inequalities. 2023. https://www.england.nhs.uk/about/equality/equality-hub/national-healthcare-inequalities-improvement-programme/core20plus5/ .

NHS England. Building strong integrated care systems everywhere 2021. Available from here: https://www.england.nhs.uk/wp-content/uploads/2021/06/B0664-ics-clinical-and-care-professional-leadership.pdf .

Anfilogoff T, Marovitch J. Who Creates Health in Herts and West Essex? Presentation to NHS Confederation Seminar: Who Creates Health? 8 November 2022. 2022.

Bergen JWS. Pandemic pressures: how Greater Manchester equalities organisations have responded to the needs of older people during the covid-19 crisis. GMCVO; 2021.

Graham M. Learning from Covid-19 pandemic grant programmes lessons for funders and support agencies. May 2022. GMCVO; 2022.


Acknowledgements

The research team would like to thank ARC-GM PCIE team (Sue Wood, Aneela McAvoy, & Joanna Ferguson) and the Caribbean and African Health Network for their support in this study. We would also like to thank the Advisory Group members: Nasrine Akhtar, Basma Issa and Charles Kwaku-Odoi for their dedicated time, commitment, and valuable inputs into this research project and to partners who contributed to the early inception of this work, including members of the ARC-GM PCIE Panel & Forum & Nick Filer. We would also like to extend our thanks to the study participants for their participation in this research.

The project was funded by an internal University of Manchester grant and supported by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration for Greater Manchester. Melissa Surgey’s doctoral fellowship is funded by the Applied Research Collaboration for Greater Manchester. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.

Author information

Authors and Affiliations

Centre for Primary Care and Health Services Research, University of Manchester, Greater Manchester, England, UK

Stephanie Gillibrand

NIHR Applied Research Collaboration for Greater Manchester, Greater Manchester, England, UK

Ruth Watkinson, Melissa Surgey & Caroline Sanders

Independent (public contributor), Greater Manchester, England, UK

Greater Manchester Patient Safety Research Centre, University of Manchester, Greater Manchester, England, UK

Caroline Sanders


Contributions

SG: lead writer/editor, design of the work. RW: design of the work, drafting of article, review and revise suggestions. MS: draft of the article, review and revise suggestions. BI: design of the work, review and revise suggestions. CS: design of the work, draft of the article, review and revise suggestions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Stephanie Gillibrand .

Ethics declarations

Ethics approval and consent to participate.

This study was approved by University of Manchester Ethics Committee (Proportionate UREC) 24/06/21. Ref 2021-11646-19665. Informed consent to participate in the research was taken from all research participants ahead of their participation in the study. Consent to participate in the study was taken from each participant by a member of the research team. All experiments were performed in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Supplementary Material 5

Supplementary Material 6

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Gillibrand, S., Watkinson, R., Surgey, M. et al. “ We might not have been in hospital, but we were frontline workers in the community ”: a qualitative study exploring unmet need and local community-based responses for marginalised groups in Greater Manchester during the COVID-19 pandemic. BMC Health Serv Res 24 , 621 (2024). https://doi.org/10.1186/s12913-024-10921-4


Received : 10 November 2023

Accepted : 28 March 2024

Published : 13 May 2024

DOI : https://doi.org/10.1186/s12913-024-10921-4


Keywords: Marginalised groups

BMC Health Services Research

ISSN: 1472-6963


Zooming into Focus Groups: Strategies for Qualitative Research in the Era of Social Distancing

Lekshmi Santhosh

1 Department of Medicine, University of California-San Francisco, San Francisco, California

Juan C. Rojas

2 Department of Medicine, University of Chicago, Chicago, Illinois

Patrick G. Lyons

3 Department of Medicine, School of Medicine, Washington University, St. Louis, Missouri; and

4 Healthcare Innovation Laboratory, BJC HealthCare, St. Louis, Missouri


Qualitative research methods are important and have become increasingly prominent in medical education and research. The reason is simple: many pressing questions in these fields require qualitative approaches to elicit nuanced insights and additional meaning beyond standard quantitative measurements in surveys or observations. Among the most common qualitative data collection methods are structured or semistructured in-person interviews and focus groups, in which participants describe their experiences relevant to the research question at hand. In the era of physical and social distancing because of the novel coronavirus disease (COVID-19) pandemic, little guidance exists for strategies for conducting focus groups or semistructured interviews. Here we describe our experience with, and recommendations for, conducting remote focus groups and/or interviews in the era of social distancing. Specifically, we discuss best practice recommendations for researchers using video teleconferencing programs to continue qualitative research during the COVID-19 pandemic.

Qualitative research focuses on exploring individuals’ perspectives related to specific research questions, issues, or activities ( 1 ). Frequently, structured interviews or focus groups are the tools employed for data collection in qualitative research. In-person interviews are ideal, although phone and digital alternatives may be considered ( 2 , 3 ). However, little guidance exists on strategies for conducting focus groups or semistructured interviews in the era of physical and social distancing with the coronavirus disease (COVID-19) pandemic. In this article, we describe some strategies for conducting focus groups or structured interviews with the use of video conferencing platforms ( Figure 1 ). Video conferencing may provide researchers and research participants with a convenient and safe alternative to in-person qualitative research, albeit with some important limitations and considerations.

Figure 1. Key strategies to ensure successful remote focus groups and interviews. IRB = institutional review board.

Throughout 2019, we collaborated on a series of stakeholder focus groups to explore clinician experiences with patient handoffs between the intensive care unit and the wards. These focus groups, conducted in-person at our respective academic medical centers, helped us delineate key strengths and “pain points” of our handoff processes and identify facilitators and barriers to the user-centered design and implementation of a new process ( 4 ). We had scheduled subsequent in-person focus groups for this iterative design and testing process to take place in Spring 2020. However, we were forced to recalibrate our plans based on the rapidly changing COVID-19 situation and the situations of our intended participants (internal medicine residents). This article provides some practical guidance and reflections based on our experiences conducting semistructured focus groups using a videoconference platform with internal medicine residents at three academic medical centers. We outline our recommendations by describing the process of these remote focus groups, from planning and recruitment to the execution and technical troubleshooting of the videoconference.

Setting the Stage

More than ever, healthcare professionals are overtaxed because of increased clinical responsibilities; new or altered clinical environments and workflows; and increased burdens of administrative, educational, and investigatory work conducted by phone, e-mail, and video conference ( 5 – 7 ). Because of the school and childcare facility closures, many healthcare professionals may be engaged in nonclinical work while simultaneously caring for their children or supervising remote learning ( 8 ). With this in mind, we recommend that researchers carefully consider the timing of planned focus groups or interviews to maximize participation and minimize the strain on potential participants. Whenever possible, researchers should seek input on optimal timing and duration from potential participants.

Video conferencing may help researchers recruit participants by eliminating transportation and transit-time barriers, and it allows focus groups or interviews to be scheduled at nontraditional times to accommodate participants’ schedules.

Overall, we recommend that focus groups are conducted over video rather than audio if unable to be done in-person. Audio-only experiences are inherently more challenging than remote video sessions; it is difficult to tell when participants are speaking but muted, to identify an individual speaker among many participants, and to interpret tone and body language. In addition, audio-only encounters often limit crosstalk, which can enhance the depth of responses. We acknowledge that video is less private than audio, but it may be more private than in-person (e.g., a participant may decline to enroll in an in-person interview or group around a sensitive topic if they do not wish to be seen physically entering or exiting a known research room). Consent must specify whether audio alone is being recorded, or whether video and audio are both recorded.

Most importantly, before recruitment and consent, researchers should identify which video teleconferencing platform (e.g., Zoom, Google Meet, or Microsoft Teams) is best suited for the project ( Table 1 ); because these platforms share many of the same capabilities (e.g., screencasting/sharing and audio recording), this decision may be based on institutional adoption or availability.

Table 1. Overview of several common videoconferencing platforms. Definition of abbreviation: HIPAA = Health Insurance Portability and Accountability Act.

Recruitment and Consent

Although some local institutional review board (IRB) procedures may have changed in response to COVID-19, qualitative research projects with human participants still require IRB review for determination of exempt status or formal approval. Researchers should obtain IRB approval to record the audio from the focus group or structured interview if a recording is desired.

Recruitment is likely to be predominantly virtual, in the form of e-mail “blasts” describing the study and providing the information needed for informed consent. After completing recruitment and selecting a video conferencing platform for the proposed research, we recommend providing attendees a password-protected electronic invitation to ensure the privacy of the session. In addition, it is helpful when this invitation includes an attached electronic calendar “event,” which can allow potential participants to quickly cross reference their electronic calendars, which are increasingly full of virtual meetings. Gray and colleagues found that participants wanted to synchronize these invitations with their electronic calendars and preferred the interview be limited to 1 hour at most, to avoid fatigue and schedule disruption ( 9 ). Zoom and other similar platforms offer a straightforward option for participants to add the session to their personal electronic calendars automatically. We recommend this method of invitation to increase convenience for participants who are increasingly accustomed to daily schedules of virtual meetings.

As with in-person focus groups, there is likely to be a “U-shaped” relationship between the number of participants and the volume and depth of insights gained within a session; too few participants may prevent dialogue and limit progress toward thematic saturation or uncovering new insights, whereas too many participants will preclude opportunities for deeper follow-up and will limit the amount of time that any single participant may contribute. Most commonly available videoconference platforms permit audience sizes of 50 or more, which far exceeds the number of participants a typical focus group would contain.

Presession Technical Preparation

It is crucial that researchers familiarize themselves with the interface and options of their chosen videoconference platform, both to maximize the effectiveness of their session facilitation and to improve their ability to solve common technical difficulties that may arise. This preparation should take place on the computing device that the researcher intends to use for research sessions to ensure that video, audio volume, and internet speed are adequate to host a successful video conference meeting. We recommend recording a practice session to become familiar with recording logistics and file storage locations, and to ensure the device’s microphone records clearly enough for participants’ hearing and transcription. Beyond the opportunity to troubleshoot the virtual platform, this practice session may also serve the second purpose of familiarizing the facilitator with the discussion questions.

Of note, researchers should evaluate the adequacy of their devices’ storage capabilities, given the large file sizes required to record audio and video. Many universities provide network storage solutions to members of their academic community, which may help facilitate storing large files. Importantly, if the research participants are patients, any recorded data (i.e., audio, video, and transcripts) are considered protected health information. These data require additional privacy considerations, especially around storage and electronic transfer. Because commercial video chat platforms may host or store files on their servers, the research team should ensure, ahead of time, that any commercial video chat platform used for research meets both the Health Insurance Portability and Accountability Act and institutional standards for secure data storage.

After successful completion of the trial run as a host, we recommend contacting the study participants before the session to ensure that any technical questions or concerns are addressed.

Introducing the Session

Initializing a virtual meeting is, in many ways, similar to initializing an in-person meeting. As with physical meetings, attendees may “trickle in” late because of preceding scheduled events or technical difficulties. We recommend allowing 1–5 minutes at the session’s beginning to account for late arrivals and to address technical issues if any are apparent. Once individuals are in the meeting, the facilitator can “lock” the session so uninvited attendees do not “Zoom-bomb.” In addition, researchers can further protect their meeting by using a Waiting Room, if available. Videoconference waiting rooms are virtual staging areas that prevent attendees from joining a meeting until permitted, either individually or in a group, to enter. The facilitator should introduce the focus group or structured interviews just as they would an in-person session, including assurances regarding confidentiality, an overview of the session’s objectives, and an explicit statement of the session’s ground rules. The facilitator should obtain permission to record the focus group or structured interview and provide attendees the opportunity to leave the meeting if they do not consent to the recording. Finally, we recommend that researchers consider using a visual cue on a shared slide to remind them to initiate recording before beginning the session’s questions. Ideally, having two individuals record the meeting ensures redundancy, so that if one individual has recording issues, a copy is preserved.

Depending on the size of the focus group or structured interview, the facilitator may wish to describe, at the meeting’s beginning, how attendee opinions will be solicited. For example, focus group participants can “unmute” themselves to speak or use the “raise hand” function on the meeting service. We recommend discouraging use of the “chat” function, because chat box contents are not captured in the recording unless they are explicitly read aloud. If attendees do type in the chat box during the session, we recommend that the facilitator read the contents aloud so that these insights are captured in the recording and transcript. Last, consider asking attendees to share their video feeds so that participants and leaders can view facial expressions and identify visual cues when individuals are about to speak (or are speaking but are inadvertently muted). However, we recognize that this recommendation could limit participation by attendees without video-capable devices and/or place undue stress or burden on attendees who may be simultaneously parenting or multitasking. Above all, researchers should encourage attendees to make the choices that will maximize their comfort with the session and, thus, maximize their contributions to the discussion.

During the Session

In general, remote qualitative inquiry sessions should follow a structure similar to that of face-to-face sessions. The facilitator should use effective moderation techniques online just as they would in person. We have found it helpful, when available, to have an additional research team member serve as a scribe and timekeeper. This teammate can also serve as a backup host if the primary host encounters unresolvable technical issues. Facilitators guiding semistructured interviews should ask follow-up probing questions and avoid sharing their own opinions, asking closed-ended or leading questions, and committing other missteps that contribute to bias.

Within these general guidelines, however, the research team should be cognizant of the ways in which remote interactions differ from a live discussion. For instance, participants may be either more (e.g., because of additional perceived anonymity) or less (e.g., because of multitasking) likely to interact on videoconference, which may require proactive facilitation (e.g., directing questions or probes to individual participants). Similarly, a proactive facilitator may wish to be particularly attentive to openings for probing or follow-up questions, as some data suggest that online qualitative inquiry provides less opportunity for probing and follow-up ( 10 ). Furthermore, microphone technology is likely to preclude the degree of crosstalk seen in many face-to-face focus groups, which could limit the depth and quality of the dialogue elicited. This lack of crosstalk may inhibit the development of social norms, which are often a key factor distinguishing focus groups from individual structured interviews. It is not known whether facilitator behaviors or factors such as focus group size can modify these limitations, although certain characteristics of focus group questions (e.g., open-ended phrasing) appear to yield richer discussion and data ( 10 ). Finally, if an audio-only focus group is the only option, we suggest using a visual model (e.g., a map or list of participants) to remind the facilitator who is present, so that notes can be recorded under each participant’s name.
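To make the audio-only suggestion concrete, the short sketch below generates a plain-text note-taking template with a section for each expected participant under each discussion question. It is a hypothetical illustration; the participant labels, question stems, and file name are placeholders rather than study materials.

```python
# Illustrative sketch (not from the article): build a simple note-taking template for an
# audio-only session, with one heading per expected participant, so the scribe can file
# observations under each speaker. All labels below are hypothetical placeholders.
participants = ["Participant 01", "Participant 02", "Participant 03"]
questions = ["Q1: Opening question", "Q2: Follow-up topic"]  # abbreviated discussion guide

with open("audio_only_note_template.txt", "w", encoding="utf-8") as f:
    for question in questions:
        f.write(f"{question}\n")
        for person in participants:
            f.write(f"  {person}:\n    -\n")
        f.write("\n")
```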

Researchers should consider the need to maintain the privacy and, potentially, the anonymity of all participants, as outlined in the project’s IRB protocol. This consideration should also extend to any protected health information if the participants are patients. If strict anonymity is required, avoid stating participants’ names during the recording. If deidentification during transcription or review is appropriate, using participants’ names may strengthen the connection between the facilitator and the respondent, allowing for greater psychological safety.
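Where deidentification during transcript review is the chosen approach, a simple scripted pass can replace known participant names with study identifiers before analysis or sharing. The sketch below is a minimal illustration under the assumption that the roster of names is known; the names, file names, and identifier scheme are hypothetical, and any automated substitution should be verified manually and conducted according to the IRB-approved protocol.

```python
# Illustrative sketch (not from the article): replace known participant names in a
# transcript with study IDs before analysis or sharing. Names, paths, and the ID scheme
# are hypothetical; simple substitution can miss nicknames or misspellings, so the output
# should be checked by hand against the IRB-approved deidentification procedure.
import re

participants = {"Jordan Smith": "P01", "Alex Lee": "P02"}  # hypothetical roster

def deidentify(text: str, roster: dict[str, str]) -> str:
    for name, study_id in roster.items():
        # Replace the full name, then the first name on its own.
        text = re.sub(re.escape(name), study_id, text, flags=re.IGNORECASE)
        first = name.split()[0]
        text = re.sub(rf"\b{re.escape(first)}\b", study_id, text, flags=re.IGNORECASE)
    return text

with open("focus_group_transcript.txt", encoding="utf-8") as f:
    raw = f.read()

with open("focus_group_transcript_deidentified.txt", "w", encoding="utf-8") as f:
    f.write(deidentify(raw, participants))
```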

After the Session

Concluding a virtual interview or focus group is similar to concluding an in-person session of the same type. The researchers should thank participants for their time, particularly given the stressors of the pandemic. In addition, we recommend discussing the criteria for possible follow-up discussions. After ending the recording, ensure that the file is saved to a secure location. Transcribe the audio recording using professional transcription software, and then analyze the data with the qualitative framework outlined during the study design stage.
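As one illustrative option, a first-pass transcript can also be generated locally with an open-source speech-to-text model, which keeps recordings containing protected health information off third-party servers. This is a sketch under that assumption, not the workflow prescribed above: it presumes the openai-whisper Python package (with its ffmpeg dependency) and a hypothetical audio file name, and the draft transcript would still require word-for-word review against the audio.

```python
# Illustrative sketch (not the article's prescribed workflow): a first-pass local
# transcription with the open-source `openai-whisper` package (pip install -U openai-whisper;
# requires ffmpeg on the system). Local processing can help when recordings contain
# protected health information, but the draft transcript must be reviewed against the audio.
import whisper

model = whisper.load_model("base")                # larger models trade speed for accuracy
result = model.transcribe("focus_group_audio.mp3")  # hypothetical recording file

with open("focus_group_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```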

Because qualitative analysis of remote interviews and focus groups is typically conducted on transcribed audio, the decision to use a video platform often has little impact on data analysis. However, in some situations, the recorded video may prove advantageous. For example, the inclusion of video might facilitate the differentiation of speakers or the clarification of unclear words during transcription or transcript reviews. Similarly, video might provide context around pauses, hand gestures, or facial expressions. Whether remote sessions have the same Hawthorne-esque effect on participants (i.e., do they behave in a particular way because of their awareness of being observed?) is unknown. For instance, it is possible that participants behave differently when observed on video as compared with an audio-only (e.g., telephone) experience, or as compared with an in-person session. One implication of this possibility could involve the perceived acceptability of multitasking or split attention; not infrequently, video participants elect not to share their individual video feeds.

Common Pitfalls and Strategies for Success

Qualitative interviews and focus groups, regardless of setting, are subject to certain pitfalls along a project’s progression from research question to analysis and dissemination. For instance, suboptimal recruitment practices (e.g., lack of advertisement) may limit enrollment, whereas incomplete or rushed interview scripts may fail to elicit complete or nuanced insights from participants. For remote interviews or focus groups, distance and technology may present additional obstacles (or interact with known risks) that can threaten a project’s success ( Table 2 ). Overall, the virtual qualitative experience offers a tradeoff between broader participant availability and an increased number of potential distractions. Whether these potential threats to qualitative insight are worth the access to participants who might otherwise be unable to attend face-to-face sessions is likely to vary across research questions and teams of investigators. In general, these pitfalls can be avoided or mitigated with careful preplanning, practice sessions, and deliberate attention to areas of risk.

Table 2. Potential remote focus group pitfalls and related strategies for success

Definition of abbreviations: IRB = institutional review board; HIPAA = Health Insurance Portability and Accountability Act.

Conclusions

We hope that these practical tips can help with conducting rigorous qualitative inquiry through remote focus groups or structured interviews in the era of physical and social distancing.

Supplementary Material

Supported by an APCCMPD, CHEST, and ATS Education Research Award (L.S.).

Author Contributions : Conception and design: P.G.L. Drafting of the article: L.S., J.C.R., and P.G.L. Critical revision of the article for important intellectual content: L.S., J.C.R., and P.G.L. Final approval of the article: L.S., J.C.R., and P.G.L. Administrative, technical, or logistic support: P.G.L.

Author disclosures are available with the text of this article at www.atsjournals.org .
