
A case study focuses on a particular unit - a person, a site, a project. It often uses a combination of quantitative and qualitative data.

Case studies can be particularly useful for understanding how different elements fit together and how different elements (implementation, context and other factors) have produced the observed impacts.

There are different types of case studies, which can be used for different purposes in evaluation. The GAO (Government Accountability Office) has described six different types of case study:

1. Illustrative: This is descriptive in character and intended to add realism and in-depth examples to other information about a program or policy. (These are often used to complement quantitative data by providing examples of the overall findings.)

2. Exploratory: This is also descriptive but is aimed at generating hypotheses for later investigation rather than simply providing illustration.

3. Critical instance: This examines a single instance of unique interest, or serves as a critical test of an assertion about a program, problem or strategy.

4. Program implementation: This investigates operations, often at several sites, and often with reference to a set of norms or standards about implementation processes.

5. Program effects: This examines the causal links between the program and observed effects (outputs, outcomes or impacts, depending on the timing of the evaluation) and usually involves multisite, multimethod evaluations.

6. Cumulative: This brings together findings from many case studies to answer evaluative questions.

The following guides are particularly recommended because they distinguish between the research design (case study) and the type of data (qualitative or quantitative), and provide guidance on selecting cases, addressing causal inference, and generalizing from cases.

This guide from the US General Accounting Office outlines good practice in case study evaluation and establishes a set of principles for applying case studies to evaluations.

This paper, authored by Edith D. Balbach for the California Department of Health Services, is designed to help evaluators decide whether to use a case study evaluation approach.

This guide, written by Linda G. Morra and Amy C. Friedlander for the World Bank, provides guidance and advice on the use of case studies.

Resources related to 'Case study':

  • Broadening the range of designs and methods for impact evaluations
  • Case studies in action
  • Case study evaluations - US General Accounting Office
  • Case study evaluations - World Bank
  • Comparative case studies
  • Dealing with paradox – Stories and lessons from the first three years of consortium-building
  • Designing and facilitating creative conversations & learning activities
  • Estudo de caso: a avaliação externa de um programa
  • Evaluation tools
  • Evaluations that make a difference
  • Methods for monitoring and evaluation
  • Reflections on innovation, assessment and social change processes: A SPARC case study, India
  • Toward a listening bank: A review of best practices and the efficacy of beneficiary assessment
  • UNICEF webinar: Comparative case studies
  • Using case studies to do program evaluation





Designing process evaluations using case study to explore the context of complex interventions evaluated in trials

Aileen Grant, Carol Bugge & Mary Wells

Trials volume 21, Article number: 982 (2020)


Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. However, historically, context has not been sufficiently explored and reported resulting in the poor uptake of trial results. Therefore, suitable methodologies are needed to guide the investigation of context. Case study is one appropriate methodology, but there is little guidance about what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design.

In this paper, we define context, the relationship between complex interventions and context, and describe case study design methodology. A well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention; the trial design; the case; the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework. We describe each of these in detail and illustrate them with examples from recently published process evaluations.

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP: ClinicalTrials.gov NCT01425502. OPAL: ISRCTN57746448.


Contribution to the literature

We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.

We depict different case study designs and illustrate there is not one formula and that design needs to be tailored to the context and trial design.

Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.

We argue that case study can illustrate how components have evolved and been redefined through implementation.

Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone in legitimising as well as providing tools, methods and a framework for conducting process evaluations. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 , 12 , 13 , 14 ]; a need for better use, critique and development of theories [ 15 , 16 , 17 ]; and a need for more guidance on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design and what it can offer the study of context in trials.

The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [8]. Case study design can capture the complexity of the case, the relationship between the intervention and the context, and how the intervention worked (or not) [8]. There are a number of textbooks on case study research within the social science fields [8, 9, 20], but within the health arena there are no case study textbooks and a paucity of useful texts on how to design, conduct and report case studies. Few examples exist within the trial design and evaluation literature [3, 21]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address the gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [22], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained, ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [23]. Ovretveit defines context as ‘everything the intervention is not’ [24]. This last definition is used by the MRC framework for process evaluations [25]; however, the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, but without a depth of meaning and understanding, such as government or health board policies, organisational structures, professional and patient attitudes, behaviours and beliefs [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [3, 28]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [3, 29]. Within a trial, there is also the influence of the research context, so the observed outcome could be due to the intervention alone, to elements of the context within which the intervention is being delivered, to elements of the research process, or to a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated, because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important that this is known, understood and reported [3]. Finally, the intervention context is dynamic; the people, organisations and systems change over time [3], which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why an intervention was not successful, it is not enough to describe the components of the intervention; they need to be described in terms of their relationship to their context and resources [3, 28].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in their natural context [8, 9, 20]. In this case, the phenomenon is the implementation of complex interventions in a trial. Case study methodology takes the view that phenomena can be more than the sum of their parts and have to be understood as a whole [30]. It is differentiated from a clinical case study by its analytical focus [20].

The methodology is particularly useful when linked to trials because some of the features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. The case study methodology is not epistemologically specific, like realist evaluation, and can be used with different epistemologies [ 32 ], and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 , 34 , 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation by its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention, the trial design, the case, and the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 , 38 , 39 , 40 , 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention and its components, the context and the outcomes. It can help inform judgements about validity (by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity)). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (where the intervention design is flawed) [42, 43]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential generalisability and optimisation into routine practice [44].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [49]. Not all interventions are underpinned by a formal theory, but all are based on assumptions as to how the intervention is expected to work. These can be depicted as a logic model or theory of change [25]. To capture how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [50]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [51]. However, in some cases, it may be useful to know if some of the components are redundant in certain contexts or if there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions: one intervention was delivered to professionals with high fidelity, and professionals then delivered the other intervention to patients by function rather than form, allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [52].

Case study is well placed to challenge or reinforce the theoretical assumptions, or to redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions; these direct attention to specific aspects of the study for investigation [8], can be based on the underlying assumptions and can be tested during the course of the process evaluation. In case studies, using an epistemic position more aligned with Yin can enable research questions to be designed which seek to expose patterns of unanticipated as well as expected relationships [9]. The OPAL trial was more closely aligned with Yin, where the research team predefined some of their theoretical assumptions based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial, as the prespecified theoretical propositions linked to the mechanisms of action through which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [45]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points; in a two-arm or multiple-arm trial, they should ensure data are collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms, so process data were collected from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months of follow-up data, in line with the primary outcome for the trial. A schematic sketch of how such a schedule might be laid out is given below.
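To make this concrete, the following sketch lays out a process-evaluation sampling and data-collection schedule for a hypothetical stepped wedge trial, with at least one case sampled per cohort and a follow-up contact after the intervention step. The cohort names, step timings, visit offsets and number of cases per cohort are illustrative assumptions only, not the DQIP design.

from dataclasses import dataclass

@dataclass
class Cohort:
    name: str
    step_month: int   # month at which this cohort crosses over to the intervention
    sites: list

def process_evaluation_schedule(cohorts, cases_per_cohort=1, followup_months=12):
    """Sketch a stepped wedge process-evaluation plan: which sites are sampled
    as cases, and when process data are collected relative to the step."""
    plan = []
    for cohort in cohorts:
        sampled = cohort.sites[:cases_per_cohort]  # placeholder; use purposive sampling in practice
        for site in sampled:
            plan.append({
                "case": site,
                "cohort": cohort.name,
                "pre_step_visit": cohort.step_month - 1,       # capture context before crossover
                "implementation_visit": cohort.step_month + 1,  # capture early delivery
                "followup_visit": cohort.step_month + followup_months,
            })
    return plan

cohorts = [
    Cohort("cohort 1", step_month=3, sites=["site 01", "site 02"]),
    Cohort("cohort 2", step_month=6, sites=["site 03", "site 04"]),
    Cohort("cohort 3", step_month=9, sites=["site 05", "site 06"]),
]

for row in process_evaluation_schedule(cohorts):
    print(row)

In practice the visit points would be aligned with the trial's own measurement points rather than fixed offsets, but the structure shows how each cohort contributes at least one case and how follow-up data on context can be planned in from the outset.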

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases, their advantage being the depth and richness that can be achieved over a long period of time. The advantages of multiple case study design are that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [8]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient level trial is likely to be defined as a person/patient (e.g. a woman with urinary incontinence—OPAL trial) whereas in a cluster trial, a case is likely to be a cluster, such as an organisation (e.g. a general practice—DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study, as sampling helps inform the quality of the inferences that can be made from the data [53]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, which occurs when the incoming data produce little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious, as the term seems to be widely used but poorly justified [54]. Empirical evidence suggests that for in-depth interview studies, thematic saturation typically occurs at around 12 interviews, but more would be needed for a heterogeneous sample or for higher degrees of saturation [55, 56]. Both the DQIP and OPAL case studies were large: OPAL was designed to interview each of the 40 individual cases four times, and DQIP was designed to interview the lead DQIP general practitioner (GP) twice (to capture change over time), plus another GP and the practice manager, from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little about sampling, as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study). A sketch of how a saturation estimate can be operationalised follows below.
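As a rough illustration of how the ‘new information’ logic behind such saturation estimates [56] can be operationalised, the sketch below counts the codes first identified in each successive interview and declares saturation when a short run of interviews adds little relative to an initial base. The base size, run length, threshold and code counts are all hypothetical assumptions; the sketch paraphrases the cited approach rather than reproducing it.

def saturation_point(new_codes_per_interview, base_size=4, run_length=2, threshold=0.05):
    """Assess thematic saturation in the spirit of Guest et al. (2020).

    new_codes_per_interview: number of codes first identified in each
    successive interview (hypothetical counts for illustration).
    Saturation is declared when a run of interviews contributes less new
    information than the threshold, relative to the initial base."""
    base = sum(new_codes_per_interview[:base_size])
    if base == 0:
        return None
    for start in range(base_size, len(new_codes_per_interview) - run_length + 1):
        run = sum(new_codes_per_interview[start:start + run_length])
        if run / base < threshold:
            # Saturation reached after the interview that closes this run
            return start + run_length
    return None  # saturation not reached with the data collected so far

# Hypothetical example: many new codes early on, tailing off later
counts = [12, 7, 5, 4, 2, 1, 1, 0, 1, 0, 0, 0]
print(saturation_point(counts))  # prints 8 with these illustrative counts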

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.
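One way a maximum variation sample could be drawn from a sampling frame of candidate cases is sketched below: candidate sites are described by contextual characteristics, and the subset with the greatest spread across those characteristics is selected. The practice names, characteristics and exhaustive-search strategy are hypothetical assumptions for illustration, not the DQIP or OPAL procedures.

from itertools import combinations

# Hypothetical sampling frame: each candidate case (e.g. a general practice)
# described by contextual characteristics thought to influence implementation.
candidates = {
    "practice_A": {"list_size": 3200, "deprivation_decile": 2, "baseline_risk": 0.14},
    "practice_B": {"list_size": 9800, "deprivation_decile": 8, "baseline_risk": 0.05},
    "practice_C": {"list_size": 5100, "deprivation_decile": 5, "baseline_risk": 0.09},
    "practice_D": {"list_size": 12100, "deprivation_decile": 1, "baseline_risk": 0.12},
    "practice_E": {"list_size": 2400, "deprivation_decile": 9, "baseline_risk": 0.03},
}

def normalise(frame):
    """Rescale each characteristic to 0-1 so they contribute equally."""
    keys = next(iter(frame.values())).keys()
    ranges = {k: (min(c[k] for c in frame.values()), max(c[k] for c in frame.values())) for k in keys}
    return {
        name: {k: (v[k] - ranges[k][0]) / (ranges[k][1] - ranges[k][0] or 1) for k in keys}
        for name, v in frame.items()
    }

def maximum_variation_sample(frame, n_cases):
    """Pick the subset of cases whose pairwise differences are largest."""
    scaled = normalise(frame)
    def spread(subset):
        return sum(
            sum(abs(scaled[a][k] - scaled[b][k]) for k in scaled[a])
            for a, b in combinations(subset, 2)
        )
    return max(combinations(frame, n_cases), key=spread)

print(maximum_variation_sample(candidates, 3))

In practice the criteria would come from the sampling frame described in Table 1, and researcher judgement would still be needed alongside any such calculation; the point is simply that maximising contextual diversity can be made explicit and auditable.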

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [57]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalisable by providing a common language [16]. There are a number of mid-range theories which have been designed to be used with process evaluation [34, 35, 58]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the researchers. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes that can be explored, for example, recruitment, delivery, response, maintenance and context [45]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework, whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [59].

Data collection

An important consideration is what data to collect or measure and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, delivery of the intervention in each site and arm, how the intervention was perceived by the professionals delivering the intervention and the patients receiving the intervention. Quantitative methods can measure recruitment, fidelity and dose and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity but also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial and purposively sampled cases for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations are generally conducted in parallel with, or retrospectively to, the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to be parallel and longitudinal to the trial. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [40]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation where the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

Much recent literature has focused on a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [61, 62]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change systems dynamics. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for within and across case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, but for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes whereas OPAL applied a Yin [ 8 ] type approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data is integrated and synthesised [ 31 ].
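As a minimal illustration of keeping within-case and cross-case analysis distinct, the sketch below stores one summary per case, tagged with its trial arm, and then pivots a single theme across cases and arms for comparison. The case identifiers, themes and wording are hypothetical and do not reproduce the DQIP or OPAL case summaries or analytic techniques.

from collections import defaultdict

# Hypothetical within-case summaries: one entry per case, tagged with trial arm
# and keyed themes (or theoretical propositions) from the within-case analysis.
case_summaries = [
    {"case": "woman_01", "arm": "intervention",
     "themes": {"adherence": "declined after month 6", "context": "competing caring demands"}},
    {"case": "woman_02", "arm": "intervention",
     "themes": {"adherence": "sustained", "context": "routine built around work"}},
    {"case": "woman_03", "arm": "control",
     "themes": {"adherence": "sustained", "context": "supportive physiotherapist"}},
]

def cross_case_matrix(summaries, theme):
    """Pull one theme out of every within-case summary, grouped by arm,
    to support comparison across cases and between arms."""
    matrix = defaultdict(dict)
    for s in summaries:
        matrix[s["arm"]][s["case"]] = s["themes"].get(theme, "not reported")
    return dict(matrix)

print(cross_case_matrix(case_summaries, "adherence"))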

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Availability of data and materials

No data and materials were used.

Abbreviations

DQIP: Data-driven Quality Improvement in Primary Care

MRC: Medical Research Council

NSAIDs: Nonsteroidal anti-inflammatory drugs

OPAL: Optimizing Pelvic Floor Muscle Exercises to Achieve Long-term benefits

References

Blencowe NB. Systematic review of intervention design and delivery in pragmatic and explanatory surgical randomized clinical trials. Br J Surg. 2015;102:1037–47.


Dixon-Woods M. The problem of context in quality improvement. In: Foundation TH, editor. Perspectives on context: The Health Foundation; 2014.

Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95.


Grant A, Sullivan F, Dowell J. An ethnographic exploration of influences on prescribing in general practice: why is there variation in prescribing practices? Implement Sci. 2013;8(1):72.

Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355–63.


Ward V, House AF, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy. 2009;14(3):156–64.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Yin R. Case study research and applications: design and methods. Los Angeles: Sage Publications Inc; 2018.


Stake R. The art of case study research. Thousand Oaks, California: Sage Publications Ltd; 1995.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, et al. Process evaluation of complex interventions: Medical Research Council guidance. Br Med J. 2015;350.

Hawe P. Minimal, negligible and negligent interventions. Soc Sci Med. 2015;138:265–8.

Moore GF, Evans RE, Hawkins J, Littlecott H, Melendez-Torres GJ, Bonell C, Murphy S. From complex social interventions to interventions in complex social systems: future directions and unresolved questions for intervention development and evaluation. Evaluation. 2018;25(1):23–45.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, Greaves F, Harper L, Hawe P, Moore L, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

Moore G, Cambon L, Michie S, Arwidson P, Ninot G, Ferron C, Potvin L, Kellou N, Charlesworth J, Alla F, et al. Population health intervention research: the place of theories. Trials. 2019;20(1):285.

Kislov R. Engaging with theory: from theoretically informed to theoretically informative improvement research. BMJ Qual Saf. 2019;28(3):177–9.

Boulton R, Sandall J, Sevdalis N. The cultural politics of ‘Implementation Science’. J Med Human. 2020;41(3):379–94. https://doi.org/10.1007/s10912-020-09607-9 .

Cheng KKF, Metcalfe A. Qualitative methods and process evaluation in clinical trials context: where to head to? Int J Qual Methods. 2018;17(1):1609406918774212.


Richards DA, Bazeley P, Borglin G, Craig P, Emsley R, Frost J, Hill J, Horwood J, Hutchings HA, Jinks C, et al. Integrating quantitative and qualitative data and findings when undertaking randomised controlled trials. BMJ Open. 2019;9(11):e032081.

Thomas G. How to do your case study. 2nd ed. London: Sage Publications Ltd; 2016.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: case study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing. BMJ Open. 2017;7(3).

Pfadenhauer L, Rohwer A, Burns J, Booth A, Lysdahl KB, Hofmann B, Gerhardus A, Mozygemba K, Tummers M, Wahlster P, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the Context and Implementation of Complex Interventions (CICI) framework: Integrate-HTA; 2016.

Bate P, Robert G, Fulop N, Ovretveit J, Dixon-Woods M. Perspectives on context. London: The Health Foundation; 2014.

Ovretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20.

Medical Research Council: Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2015.

May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11(1):141.

Bate P. Context is everything. In: Perpesctives on Context. The Health Foundation 2014.

Horton TJ, Illingworth JH, Warburton WHP. Overcoming challenges in codifying and replicating complex health care interventions. Health Aff. 2018;37(2):191–7.

O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Bunn H, Graham I, Drake E. A decision aid for women considering hormone therapy after menopause: decision support framework and evaluation. Patient Educ Couns. 1998;33:267–79.

Creswell J, Poth C. Qualitative inquiry and research design. 4th ed. Thousand Oaks, California: Sage Publications; 2018.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Takahashi ARW, Araujo L. Case study research: opening up research opportunities. RAUSP Manage J. 2020;55(1):100–11.

Tight M. Understanding case study research, small-scale research with meaning. London: Sage Publications; 2017.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43:535.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice. A consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Pawson R, Tilley N. Realist evaluation. London: Sage; 1997.

Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing - a trial of education, informatics & financial incentives. N Engl J Med. 2016;374:1053–64.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing. Implement Sci. 2017;12(1):4.

Dreischulte T, Grant A, Hapca A, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: quantitative examination of variation between practices in recruitment, implementation and effectiveness. BMJ Open. 2018;8(1):e017133.

Grant A, Dean S, Hay-Smith J, Hagen S, McClurg D, Taylor A, Kovandzic M, Bugge C. Effectiveness and cost-effectiveness randomised controlled trial of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL (Optimising Pelvic Floor Exercises to Achieve Long-term benefits) trial mixed methods longitudinal qualitative case study and process evaluation. BMJ Open. 2019;9(2):e024152.

Hagen S, McClurg D, Bugge C, Hay-Smith J, Dean SG, Elders A, Glazener C, Abdel-fattah M, Agur WI, Booth J, et al. Effectiveness and cost-effectiveness of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL randomised trial. BMJ Open. 2019;9(2):e024153.

Steckler A, Linnan L. Process evaluation for public health interventions and research; 2002.

Durlak JA. Why programme implementation is so important. J Prev Intervent Commun. 1998;17(2):5–18.

Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. Br Med J. 2006;333(7563):346–9.


Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.

Yin R. Case study research: design and methods. London: Sage Publications; 2003.

Bugge C, Hay-Smith J, Grant A, Taylor A, Hagen S, McClurg D, Dean S. A 24 month longitudinal qualitative study of women’s experience of electromyography biofeedback pelvic floor muscle training (PFMT) and PFMT alone for urinary incontinence: adherence, outcome and context. ICS Gothenburg 2019. https://www.ics.org/2019/abstract/473 . Accessed 10 Sept 2020.

Hagen S, Elders A, Stratton S, Sergenson N, Bugge C, Dean S, Hay-Smith J, Kilonzo M, Dimitrova M, Abdel-Fattah M, Agur W, Booth J, Glazener C, Guerrero K, McDonald A, Norrie J, Williams LR, McClurg D. Effectiveness of pelvic floor muscle training with and without electromyographic biofeedback for urinary incontinence in women: multicentre randomised controlled trial. BMJ. 2020;371:m3719. https://doi.org/10.1136/bmj.m3719 .

Cook TD. Emergent principles for the design, implementation, and analysis of cluster-based experiments in social science. Ann Am Acad Pol Soc Sci. 2005;599(1):176–98.

Hoffmann T, Glasziou P, Boutron I, Milne R, Perera R, Moher D. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348.

Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? Br Med J. 2004;328(7455):1561–3.

Grant A, Dreischulte T, Treweek S, Guthrie B. Study protocol of a mixed-methods evaluation of a cluster randomised trial to improve the safety of NSAID and antiplatelet prescribing: Data-driven Quality Improvement in Primary Care. Trials. 2012;13:154.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12(2):219–45.

Thorne S. The great saturation debate: what the “S word” means and doesn’t mean in qualitative research reporting. Can J Nurs Res. 2020;52(1):3–5.

Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076.


Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.

Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;4:297-304.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

Cresswell JW, Plano Clark VL. Designing and conducting mixed methods research. Thousand Oaks: Sage Publications Ltd; 2007.

Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

Craig P, Ruggiero E, Frohlich KL, Mykhalovskiy E, White M. Taking account of context in population health intervention research: guidance for producers, users and funders of research: National Institute for Health Research; 2018. https://www.ncbi.nlm.nih.gov/books/NBK498645/pdf/Bookshelf_NBK498645.pdf .


Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

Funding

No funding was received for this work.

Author information

Authors and affiliations

School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB, UK

Aileen Grant

Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA, UK

Carol Bugge

Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP, UK

Mary Wells

Contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

Corresponding author

Correspondence to Aileen Grant .

Ethics declarations

Ethics approval and consent to participate

Ethics approval and consent to participate is not appropriate as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Grant, A., Bugge, C. & Wells, M. Designing process evaluations using case study to explore the context of complex interventions evaluated in trials. Trials 21 , 982 (2020). https://doi.org/10.1186/s13063-020-04880-4

Download citation

Received : 09 April 2020

Accepted : 06 November 2020

Published : 27 November 2020

DOI : https://doi.org/10.1186/s13063-020-04880-4


Keywords

  • Process evaluation
  • Case study design



Qualitative Research: Case study evaluation

  • Justin Keen, research fellow, health economics research group a,
  • Tim Packwood a
  • Brunel University, Uxbridge, Middlesex UB8 3PH
  • a Correspondence to: Dr Keen.

Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy.

This is the last in a series of seven articles describing non-quantitative techniques and showing their value in health research


Introduction

The medical approach to understanding disease has traditionally drawn heavily on qualitative data, and in particular on case studies to illustrate important or interesting phenomena. The tradition continues today, not least in regular case reports in this and other medical journals. Moreover, much of the everyday work of doctors and other health professionals still involves decisions that are qualitative rather than quantitative in nature.

This paper discusses the use of qualitative research methods, not in clinical care but in case study evaluations of health service interventions. It is useful for doctors to understand the principles guiding the design and conduct of these evaluations, because they are frequently used by both researchers and inspectorial agencies (such as the Audit Commission in the United Kingdom and the Office of Technology Assessment in the United States) to investigate the work of doctors and other health professionals.

We briefly discuss the circumstances in which case study research can usefully be undertaken in health service settings and the ways in which qualitative methods are used within case studies. Examples show how qualitative methods are applied, both in purely qualitative studies and alongside quantitative methods.

Case study evaluations

Doctors often find themselves asking important practical questions, such as: should we be involved in the management of hospitals and, if so, how? How will new government policies affect the lives of our patients? And how can we cope with changes in practice in our local setting? There are, broadly, two ways in which such questions can usefully be addressed. One is to analyse the proposed policies themselves, by investigating whether they are internally consistent and by using theoretical frameworks to predict their effects on the ground. National policies, including the implementation of the NHS internal market 1 and the new community care arrangements 2 have been examined in this way by using economic theory to analyse their likely consequences.

The other approach, and the focus of this article, is to study implementation empirically. Empirical evaluative studies are concerned with placing a value on an intervention or policy change, and they typically involve forming judgments, firstly about the appropriateness of an intervention for those concerned (and often by implication also for the NHS as a whole) and, secondly about whether the outputs and outcomes of interventions are justified by their inputs and processes.

Case study evaluations are valuable where broad, complex questions have to be addressed in complex circumstances. No one method is sufficient to capture all salient aspects of an intervention, and case studies typically use multiple methods.

The methods used in case studies may be qualitative or quantitative, depending on the circumstances. Case studies using qualitative methods are most valuable when the question being posed requires an investigation of a real life intervention in detail, where the focus is on how and why the intervention succeeds or fails, where the general context will influence the outcome and where researchers asking the questions will have no control over events. As a result, the number of relevant variables will be far greater than can be controlled for, so that experimental approaches are simply not appropriate.

Other conditions that enhance the value of the case study approach concern the nature of the intervention being investigated. Often an intervention is ill defined, at least at the outset, and so cannot easily be distinguished from the general environment. Even where it is well defined, an intervention may not be discrete but consist of a complex mix of changes that occur over different timescales. This is a pervasive problem in health services in many countries, which are experiencing many parallel and interrelated changes. The doctor weighing up whether or how to become involved in hospital management would have to assess the various impacts on the managerial role of clinical audit, resource management, consultant job plans, and a raft of government legislation. Secondly, any intervention will typically depend for its success on the involvement of several different interested groups. Each group may have a legitimate, but different, interpretation of events; capturing these different views is often best achieved by using interviews or other qualitative methods within a case study design. Thirdly, it is not clear at the outset whether an intervention will be fully implemented by the end of a study period--accounts of major computer system failures show this. 3 Yet study of these failures may provide invaluable clues for future success.

Taken together, these conditions exclude experimental approaches to evaluation. The case study is an alternative approach--in effect, a different way of thinking about complex situations which takes the conditions into account, but is nevertheless rigorous and facilitates informed judgments about success or failure.

The design of case studies

As noted earlier, case studies using qualitative methods are used by bodies that inspect and regulate public services. Examples include the work of the National Audit Office and the Audit Commission 4 in the United Kingdom and the Office of Technology Assessment in the United States. 5 Sometimes these studies are retrospective, particularly in investigations of failed implementations of policies. Increasingly, though, these bodies use prospective studies designed to investigate the extent to which centrally determined standards or initiatives have been implemented. For example, the National Audit Office recently examined hospital catering in England, focusing on the existence of, and monitoring of, standards as required by the citizen's charter and on the application of central policy and guidance in the areas of nutritional standards and cost control. 6

Prospective studies have also been used by academic researchers, for example, to evaluate the introduction of general management 7 in Britain after the Griffiths report, 8 in the studies of specific changes following the 1989 NHS review 9 which were commissioned by the King's Fund, 10 and in the introduction of total quality management in hospitals in the United States. 11 In these cases the investigators were interested in understanding what happened in a complex environment where they had no control over events. Their research questions emerged from widespread concerns about the implications of new policies or management theories, and were investigated with the most appropriate methods at their disposal.

THE NATURE OF RESEARCH QUESTIONS

Once a broad research question has been identified, there are two approaches to the design of case study research, with appropriateness depending on the circumstances. In the first approach, precise questions are posed at the outset of the research and data collection and analysis are directed towards answering them. These studies are typically constructed to allow comparisons to be drawn. 12 The comparison may be between different approaches to implementation, or a comparison between sites where an intervention is taking place and ones where normal practice prevails.

An example is the recent study by Glennerster et al of the implementation of general practitioner fundholding. 13 Starting with a broad question about the value of general practitioner fundholding, the researchers narrowed down to precise questions about the extent to which the fundholding scheme promoted efficiency and preserved equity. They used one qualitative method, semistructured interviews, with the general practitioners and practice managers and also with people responsible for implementing the policy at national and regional level. The interviews were complemented by the collection of quantitative data such as financial information from the practices (box 1).

Box 1 Outline of case study of GP fundholding 13

Mix of qualitative and quantitative methods

Fundholding and non-fundholding practices

Programme of interviews with key staff at practices

Interviews with people responsible for implementing national policy

Study found that the general practitioner fundholding scheme was achieving the aims set for it by government and that adverse selection (“cream skimming”) of patients was less likely than some commentators had feared

The second approach is more open and in effect starts by asking broad questions such as “What is happening here?” and “What are the important features and relationships that explain the impact of this intervention?” These questions are then refined and become more specific in the course of fieldwork and a parallel process of data analysis. This type of design, in which the eventual research questions emerge during the research, is termed ethnography and has been advocated for use in the study of the impact of government policies in the health system. 14 15 In some ways it is similar to the way in which consultations are conducted, in that it involves initial exploration, progressing over time towards a diagnosis inferred from the available data.

The evaluation of resource management in the NHS, 16 which investigated the progress of six pilot hospitals in implementing new management arrangements, focused particularly on identifying ways in which doctors and general managers could jointly control the allocation and commitment of resources (box 2). At the outset the nature of resource management was unclear--sites were charged with finding ways of involving doctors in management, but how this would be achieved and, if achieved, how successful it would be in improving patient care were open questions. The researchers selected major specialties within each site and conducted interviews with relevant staff, observed meetings, and analysed documentation. Over time, the data were used to develop a framework which captured the essential features of resource management at the time and which was used to evaluate each site's progress in implementing it.

Box 2 Evaluation of resource management 16

Six hospitals, a mix of teaching and non-teaching

Focus on major specialties: general surgery and general medicine

Methods and data sources independent of each other

Qualitative methods included interviews, non-participant observation of meetings, analysis of documentation

Evaluation found that there were important changes in management processes, but little evidence of improvement in patient care

SELECTION OF SITES

The process of selecting sites for study is central to the case study approach. Researchers have developed a number of selection strategies, the objective of which, as in any good research study, is to guard as far as possible against misinterpretation of results. Criteria include the selection of cases that are typical of the phenomenon being investigated, those in which a specific theory can be tested, or those that will confirm or refute a hypothesis.

Researchers will benefit from expert advice from those with knowledge of the subject being investigated, and they can usefully build into the initial research design the possibility of testing findings at further sites. Replication of results across sites helps to ensure that findings are not due to characteristics of particular sites; hence it increases external validity. 17

SELECTION OF METHODS

The next step is to select research methods, the process being driven by criteria of validity and reliability. 18 A distinctive but not unique feature of case study research is the use of multiple methods and sources of evidence to establish construct validity. The use of particular methods is discussed in other papers in this series; the validity and reliability of individual methods is discussed in more detail by Mays and Pope. 19

Case studies often use triangulation 20 to ensure the validity of findings. In triangulation all data items are corroborated from at least one other source and normally by another method of data collection. The fundholding study referred to earlier 13 used interviews in combination with several different quantitative sources of data to establish an overall picture. The evaluation of resource management, in contrast, used a wider range of qualitative and quantitative methods. 16

Case studies are used by bodies that inspect public services--to monitor standards in hospital catering, for example

Any one of these methods by itself might have produced results of weak validity, but the different methods were used to obtain data from different sources. When they all suggested the emergence of an important development, therefore, they acted to strengthen the researchers' belief in the validity of their observations.
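
To make the corroboration rule concrete, the following minimal sketch (in Python, with the findings, sources, and the two-source threshold invented purely for illustration) checks whether each emerging finding is supported by at least two independent data sources or methods, which is the basic condition for triangulation described above.

```python
from collections import defaultdict

# Hypothetical observations: each records a finding and the data source
# (method) that produced it. All names are invented for illustration.
observations = [
    {"finding": "admissions policies coordinated", "source": "clinician interviews"},
    {"finding": "admissions policies coordinated", "source": "committee minutes"},
    {"finding": "budget devolved to specialty", "source": "finance records"},
]

def triangulated(observations, minimum_sources=2):
    """For each finding, report whether it is corroborated by at least
    `minimum_sources` independent data sources."""
    sources = defaultdict(set)
    for obs in observations:
        sources[obs["finding"]].add(obs["source"])
    return {finding: len(srcs) >= minimum_sources for finding, srcs in sources.items()}

print(triangulated(observations))
# {'admissions policies coordinated': True, 'budget devolved to specialty': False}
```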

Another technique is to construct chains of evidence; these are conceptual arguments that link phenomena to one another in the following manner: “if this occurs then some other thing would be expected to occur; and if not, then it would not be expected.” For example, if quantitative evidence suggested that there had been an increase or decrease in admission rates in several specialties within a resource management site, and if an interview programme revealed that the involvement of doctors in management (if developed as part of the resource management initiative) had led to a higher level of coordination of admissions policies, then this would be evidence that resource management may facilitate the introduction of such policies. This type of argument is not always appropriate, but it can be valuable where it is important to investigate causation in complex environments.
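
The conditional structure of such an argument can be laid out explicitly. The sketch below is illustrative only (the propositions and their truth values are invented to mirror the admissions example above): each link records an expected observation, and the chain is treated as supporting the causal claim only if every expected observation was actually made.

```python
# Hypothetical chain of evidence mirroring the admissions example above.
# Each link pairs an expected observation with whether it was observed.
chain = [
    ("resource management implemented in the specialty", True),
    ("doctors took on management roles (interview data)", True),
    ("admissions policies became more coordinated (interview data)", True),
    ("admission rates changed in the specialty (routine data)", True),
]

def evaluate_chain(chain):
    """The chain supports the claim only if every link holds; otherwise
    return the first expected observation that was not made."""
    for claim, observed in chain:
        if not observed:
            return False, claim
    return True, None

print(evaluate_chain(chain))  # (True, None): observations consistent with the claim
```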

ANALYTICAL FRAMEWORKS

The collection of data should be directed towards the development of an analytical framework that will facilitate interpretation of findings. Again, there are several ways in which this might be done. In the study of fundholding 13 the data were organised to “test” hypotheses which were derived from pre-existing economic theories. In the case of resource management there was no obvious pre-existing theory that could be used; the development of a framework during the study was crucial to help organise and evaluate the data collected. The framework was not imposed on the data but derived from it in an iterative process over the course of the evaluation; each was used to refine the other over time (box 3). 15

Box 3 Framework: five interrelated elements of resource management 16

Commitment to resource management by the relevant personnel at each level in the organisation

Devolution of authority for the management of resources

Collaboration within and between disciplines in securing the objectives of resource management

Management infrastructure, particularly in terms of organisational structure and provision of information

A clear focus for the local resource management strategy

The investigator is finally left with the difficult task of making a judgment about the findings of a study. The purpose of the steps in designing and building the case study research is to maximise confidence in the findings, but interpretation inevitably involves value judgments. The findings may well include divergences of opinion among those involved about the value of the intervention, and the results will often point towards different conclusions.

The extent to which research findings can be assembled into a single coherent account of events varies widely. In some circumstances widely differing opinions are themselves very important and should be reflected in any report. Where an evaluation is designed to inform policy making, however, some attempt has to be made at an overall judgment of success or failure; this was the case in the evaluation of resource management, where it was important to indicate to policy makers and the NHS whether it was worth while.

The complexity of the issues that health professionals have to deal with and the increasing recognition by policy makers, academics, and practitioners of the value of case studies in evaluating health service interventions suggest that the use of such studies is likely to increase in the future. Qualitative methods can be used within case study designs to address many practical and policy questions that impinge on the lives of professionals, particularly where those questions are concerned with how or why events take a particular course.


  • Open access
  • Published: 10 November 2020

Case study research for better evaluations of complex interventions: rationale and challenges

  • Sara Paparini   ORCID: orcid.org/0000-0002-1909-2481 1 ,
  • Judith Green 2 ,
  • Chrysanthi Papoutsi 1 ,
  • Jamie Murdoch 3 ,
  • Mark Petticrew 4 ,
  • Trish Greenhalgh 1 ,
  • Benjamin Hanckel 5 &
  • Sara Shaw 1  

BMC Medicine volume 18, Article number: 301 (2020)


The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.


The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 , 5 , 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 , 21 , 22 , 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. For evaluative research, exploratory case studies have had a traditional role of providing data on ‘process’, or initial ‘hypothesis-generating’ scoping, but might also have an increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes, or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].
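
As a deliberately simplified illustration of the set-theoretic logic underlying QCA, the sketch below (with invented cases, conditions, and outcomes) builds a crisp-set truth table and reports how consistently each configuration of conditions is followed by the outcome; dedicated QCA software additionally minimises such configurations into simpler solution terms, which this toy example does not attempt.

```python
from collections import defaultdict

# Invented crisp-set data: each case is coded 1/0 on two conditions
# (A = supportive context, B = full implementation) and on the outcome Y.
cases = [
    {"case": "Site 1", "A": 1, "B": 1, "Y": 1},
    {"case": "Site 2", "A": 1, "B": 1, "Y": 1},
    {"case": "Site 3", "A": 1, "B": 0, "Y": 0},
    {"case": "Site 4", "A": 0, "B": 1, "Y": 0},
    {"case": "Site 5", "A": 0, "B": 0, "Y": 0},
]
conditions = ["A", "B"]

# Truth table: group cases by their configuration of conditions and compute
# each configuration's consistency with the outcome (share of cases with Y = 1).
rows = defaultdict(list)
for c in cases:
    rows[tuple(c[k] for k in conditions)].append(c["Y"])

for config, outcomes in sorted(rows.items()):
    consistency = sum(outcomes) / len(outcomes)
    print(dict(zip(conditions, config)), "n =", len(outcomes), "consistency =", consistency)

# Only the configuration A = 1 AND B = 1 is consistently followed by the
# outcome here, i.e. it is a candidate sufficient combination of conditions.
```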

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, which relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power, if nonetheless useful for adding descriptive data on process or providing useful illustrations for policymakers [ 37 ]. This is an opportune moment to revisit this low status. As health researchers are increasingly charged with evaluating ‘natural experiments’—the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]—rather than interventions that take place in settings that can be controlled, research approaches using methods to strengthen causal inference that does not require randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify that case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported lack of willingness of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where they do, however, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking and appear to be used as a synonym for the settings in which the research is conducted. Despite offering insights for evaluation, the primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [ 46 ] has suggested a typology of subject/purpose/approach/process for classifying aims (e.g. evaluative or exploratory), sample rationale and selection and methods for data generation of case studies. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences—for example Lincoln and Guba’s [ 47 ] and Stake’s [ 33 ]—are primarily orientated to the quality of narrative produced, and the extent to which they encapsulate thick description, rather than the more pragmatic issues of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidelines for medical case reports about single, or small groups of patients [ 48 ], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [ 44 ] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research where the aim is to report empirical findings in ways that are intended to be pragmatically useful for health policy and practice, this may be an opportune time to consider how to best navigate plurality around what is (minimally) important to report when publishing empirical case studies, especially with regards to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [ 8 ] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in and the transferability of system change intervention [ 14 , 20 , 23 , 25 ]. Case study research evidence is essential, yet is currently under exploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies which do aim to elucidate the contextual factors which interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge to develop guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; for situating and defining ‘the case’, and generalising from case studies; as well as provide specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Availability of data and materials

Not applicable (article based on existing available academic publications)

Abbreviations

QCA: Qualitative comparative analysis

Quasi-experimental design

RCT: Randomised controlled trial

Diez Roux AV. Complex systems thinking and current impasses in health disparities research. Am J Public Health. 2011;101(9):1627–34.


Ogilvie D, Mitchell R, Mutrie N, Petticrew M, Platt S. Evaluating health effects of transport interventions: methodologic case study. Am J Prev Med. 2006;31:118–26.

Walshe C. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies. Palliat Med. 2011;25(8):774–81.

Woolcock M. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation. 2013;19:229–48.

Cartwright N. Are RCTs the gold standard? BioSocieties. 2007;2(1):11–20.

Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med. 2018;210:2–21.

Salway S, Green J. Towards a critical complex systems approach to public health. Crit Public Health. 2017;27(5):523–4.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Bonell C, Warren E, Fletcher A. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials. 2016;17:478.

Pallmann P, Bedding AW, Choodari-Oskooei B. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.

Curran G, Bauer M, Mittman B, Pyne J, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812 .

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015 [cited 2020 Jun 27];350. Available from: https://www.bmj.com/content/350/bmj.h1258 .

Evans RE, Craig P, Hoddinott P, Littlecott H, Moore L, Murphy S, et al. When and how do ‘effective’ interventions need to be adapted and/or re-evaluated in new contexts? The need for guidance. J Epidemiol Community Health. 2019;73(6):481–2.

Shoveller J. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.

Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10(1):37.

Rosengarten M, Savransky M. A careful biomedicine? Generalization and abstraction in RCTs. Crit Public Health. 2019;29(2):181–91.

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406.

Canguilhem G. The normal and the pathological. New York: Zone Books; 1991. (1949).


Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

King G, Keohane RO, Verba S. Designing social inquiry: scientific inference in qualitative research: Princeton University Press; 1994.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209.


Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016 [cited 2020 Jun 30];4(16). Available from: https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr04160#/abstract .

Craig P, Di Ruggiero E, Frohlich KL, E M, White M, Group CCGA. Taking account of context in population health intervention research: guidance for producers, users and funders of research. NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32.

Mahoney J. Strategies of causal inference in small-N analysis. Sociol Methods Res. 2000;4:387–424.

Turner S. Major system change: a management and organisational research perspective. In: Rosalind Raine, Ray Fitzpatrick, Helen Barratt, Gywn Bevan, Nick Black, Ruth Boaden, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016;4(16) 2016. https://doi.org/10.3310/hsdr04160.

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225.

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. 369 p.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12:219–45.

Tsoukas H. Craving for generality and small-N studies: a Wittgensteinian approach towards the epistemology of the particular in organization and management studies. Sage Handb Organ Res Methods. 2009:285–301.

Stake RE. The art of case study research. London: Sage Publications Ltd; 1995.

Mitchell JC. Typicality and the case study. In: Ethnographic research: a guide to general conduct. 1984. p. 238–41.

Gerring J. What is a case study and what is it good for? Am Polit Sci Rev. 2004;98(2):341–54.

May C, Mort M, Williams T, Mair F, Gask L. Health technology assessment in its local contexts: studies of telehealthcare. Soc Sci Med. 2003;57:697–710.

McGill E. Trading quality for relevance: non-health decision-makers’ use of evidence on the social determinants of health. BMJ Open. 2015;5(4):007053.

Greenhalgh T. We can’t be 100% sure face masks work – but that shouldn’t stop us wearing them | Trish Greenhalgh. The Guardian. 2020 [cited 2020 Jun 27]; Available from: https://www.theguardian.com/commentisfree/2020/jun/05/face-masks-coronavirus .

Hammersley M. So, what are case studies? In: What’s wrong with ethnography? New York: Routledge; 1992.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11(1):100.

Luck L, Jackson D, Usher K. Case study: a bridge across the paradigms. Nurs Inq. 2006;13(2):103–9.

Yin RK. Case study research and applications: design and methods: Sage; 2017.

Hyett N, Kenny A, Dickson-Swift V. Methodology or method? A critical review of qualitative case study reports. Int J Qual Stud Health Well-Being. 2014;9:23606.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352.

Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qual Inq. 2011;17(6):511–21.

Lincoln YS, Guba EG. Judging the quality of case study reports. Int J Qual Stud Educ. 1990;3(1):53–9.

Riley DS, Barber MS, Kienle GS, Aronson JK, Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. 2017;89:218–35.


Acknowledgements

Not applicable

Funding

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Author information

Authors and affiliations.

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Sara Paparini, Chrysanthi Papoutsi, Trish Greenhalgh & Sara Shaw

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

School of Health Sciences, University of East Anglia, Norwich, UK

Jamie Murdoch

Public Health, Environments and Society, London School of Hygiene & Tropical Medicine, London, UK

Mark Petticrew

Institute for Culture and Society, Western Sydney University, Penrith, Australia

Benjamin Hanckel


Contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Corresponding author

Correspondence to Sara Paparini .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Paparini, S., Green, J., Papoutsi, C. et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med 18 , 301 (2020). https://doi.org/10.1186/s12916-020-01777-6


Received : 03 July 2020

Accepted : 07 September 2020

Published : 10 November 2020

DOI : https://doi.org/10.1186/s12916-020-01777-6


Keywords

  • Qualitative
  • Case studies
  • Mixed-method
  • Public health
  • Health services research
  • Interventions


Monitoring and Evaluation Approaches

Monitoring and evaluation (M&E) are two essential components of project management that help organizations assess the progress and effectiveness of their programs. Evaluation approaches have often been developed to address specific evaluation questions or challenges, and each refers to an integrated package of methods and processes.


Results-based monitoring and evaluation approach

This approach involves setting specific, measurable, achievable, relevant, and time-bound (SMART) indicators for a project and tracking progress against them, emphasizing outcomes and impact rather than activities alone. Results-based M&E involves collecting and analyzing data to assess the impact of programs, identify areas for improvement, and show where resources should be focused, helping organizations ensure that projects meet established goals and that resources are used efficiently, effectively, and accountably. Read more.
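
As a minimal sketch of what tracking progress against indicators can look like in practice (the indicator names, baselines, targets, and actual values below are invented for illustration), a results framework can be represented as a small set of indicators, each with a baseline, a time-bound target, and the latest measured value:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One SMART indicator in a hypothetical results framework."""
    name: str
    baseline: float
    target: float   # value to be reached by the target date
    actual: float   # most recent measured value

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        return (self.actual - self.baseline) / (self.target - self.baseline)

indicators = [
    Indicator("Children fully immunised (%)", baseline=60, target=90, actual=75),
    Indicator("Clinics reporting monthly data (%)", baseline=40, target=100, actual=55),
]

for ind in indicators:
    print(f"{ind.name}: {ind.progress():.0%} of the way to target")
# Children fully immunised (%): 50% of the way to target
# Clinics reporting monthly data (%): 25% of the way to target
```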

Participatory monitoring and evaluation approach

This approach involves stakeholders, including beneficiaries, in the monitoring and evaluation process. It helps ensure that the evaluation is sensitive to the needs of those intended to benefit from the project, gives insight into the progress of the program, and helps identify problems that need immediate attention. Engaging all stakeholders brings a wider perspective and more effective feedback, so that progress and impact are better understood and decisions about how to reach desired outcomes are better informed. Read more.

Theory-based evaluation approach

This approach examines the underlying theory of change on which a project is based to determine whether the assumptions about how the project will work are valid. It helps identify which changes are likely to occur, how they can be measured, and where there are gaps in the program's effectiveness. Theory-based evaluation considers both qualitative and quantitative data and is useful for understanding the complex relationships between program activities and outcomes, allowing organizations to check whether their programs are achieving their intended goals and objectives. Read more.

Utilisation-focused evaluation approach

The Utilisation-focused Evaluation approach is a user-oriented approach that focuses on how evaluation results will be used by intended users and stakeholders. Users are actively involved in the evaluation process, from planning through implementation to reporting, which enables them to assess the impact of the findings on their decision-making and practice, identify areas for improvement, and determine the most effective ways to use the results to achieve their desired outcomes. Read more.

M&E for learning

Monitoring and Evaluation (M&E) for learning is an approach that prioritizes learning and program improvement, as opposed to solely focusing on accountability and reporting to external stakeholders. It is an iterative process that involves continuous monitoring, feedback, and reflection to enable learning and adaptation. By engaging stakeholders in the evaluation process, M&E for learning can identify strengths, weaknesses, and areas for improvement, and use this information to guide program design and implementation. Ultimately, the goal of M&E for learning is to create a culture of continuous learning within organizations, where learning and adaptation are integrated into every aspect of program design and implementation. Read more.

Gender Responsive Evaluation

A gender-responsive evaluation is an approach to understanding the impacts of a project, policy, or program on women, men and gender diverse populations. It is a valuable tool to assess how different gender groups are affected by a particular project, as well as how to ensure that the project meets its objectives in a way that is equitable and beneficial to all genders. Gender-responsive evaluations also provide useful information on how different gender groups interact and participate in projects or policies, which can help identify any potential inequities in access or outcomes.  Read more .

Case study evaluation approach

The case study evaluation approach allows researchers to look at the impact of a program from multiple perspectives, including the behavior of participants and the effectiveness of interventions, and to build a comprehensive picture of the program's strengths and weaknesses, areas for improvement, and recommendations for future action. It is particularly useful for programs that involve multiple stakeholders, because it allows both individual and collective outcomes to be examined, and for assessing a program's effectiveness over time, because it enables researchers to compare the results of different interventions and track changes in program outcomes. Read more.

Process monitoring and evaluation approach

This approach focuses on how a project is implemented rather than on its outcomes. It can help identify problems in implementation, such as delays or budget overruns, and make recommendations for improvement. Process monitoring and evaluation is a systematic way of tracking the progress of a project or program by regularly collecting, analyzing, and interpreting data. Monitoring and evaluation are distinct but related functions: monitoring is the continuous collection of information to track progress over time, while evaluation is the periodic assessment of a program to determine its effectiveness and impact. Together they provide a comprehensive understanding of a program's strengths and weaknesses, enabling decision-makers to make informed decisions about how to improve it. Read more.

Impact evaluation approach

This approach involves assessing the causal impact of a project on its beneficiaries or the wider community: whether the project has achieved its intended outcomes and whether the benefits outweigh the costs. Impact evaluation measures the changes that have occurred because of the program or intervention, rather than changes that would have happened anyway, and so helps to determine whether the program has met its goals and objectives, how cost-effective it has been, and what should be changed to achieve the desired results. Read more.
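
One widely used way to estimate such causal impact when a comparison group is available is a difference-in-differences calculation. The sketch below is illustrative only: the before/after means are invented, and the method is offered as a common example rather than something prescribed by the text above; it is only valid under the assumption that the two groups would have followed parallel trends without the programme.

```python
# Invented before/after outcome means for programme participants and a
# comparison group that did not receive the programme.
participants = {"before": 52.0, "after": 61.0}
comparison = {"before": 50.0, "after": 54.0}

def difference_in_differences(treated, control):
    """Change in the treated group minus change in the control group: a
    simple impact estimate under the parallel-trends assumption."""
    return (treated["after"] - treated["before"]) - (control["after"] - control["before"])

print(difference_in_differences(participants, comparison))
# 5.0 -> the programme is estimated (illustratively) to have raised the outcome by 5 points
```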

Evaluation Approaches versus Evaluation Methods

Evaluation approaches and evaluation methods are both used to assess the effectiveness and impact of programs, policies, or interventions. However, they refer to different aspects of the evaluation process.

Evaluation approaches refer to the overall framework or perspective that guides the evaluation. They define the philosophical, theoretical, and methodological principles that underpin the evaluation.

Evaluation methods, on the other hand, are the specific techniques and tools used to collect and analyze data to evaluate the program. Methods can be quantitative (e.g., surveys, experiments, statistical analysis) or qualitative (e.g., interviews, focus groups, content analysis), and may vary depending on the evaluation approach used.

In summary, evaluation approaches define the overall framework and principles that guide the evaluation, while evaluation methods are the specific techniques and tools used to collect and analyze data to evaluate the program.

Conclusion on monitoring and evaluation approaches

An effective monitoring and evaluation approach can help to identify whether an organization's goals are being achieved in a timely manner.

Overall, organizations can use one or more of these approaches to monitoring and evaluation, depending on the needs of their project and the resources available to them. Although there are many different types of monitoring and evaluation approaches available, they all share the same goal – to understand the impact of an organization’s programs and projects on its stakeholders.


Case Study Research Method in Psychology

Saul Mcleod, PhD

Olivia Guy-Evans, MSc

Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

Strictly speaking, the case study is a research design rather than a single research method: researchers select whatever methods of data collection and analysis will generate material suitable for the case study.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Such detailed investigation of a patient’s private life should only be undertaken by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence: only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.

Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies, analyzing personality changes in railroad worker Phineas Gage after an 1848 brain injury in which a tamping iron pierced his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies : Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies : Used to explore situations where an intervention being evaluated has no clear set of outcomes. It helps define questions and hypotheses for future research.
  • Descriptive case studies : Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies : Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic : Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective : Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is a highly effective procedure for obtaining information about an individual; it can be used to gather facts from the person themselves, as well as comments from friends, parents, employers, workmates, and others who know the person well.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Historical archives, museum collections, and databases can be searched for documents and visual or audio records relevant to the case history and context.

Public archives such as newspapers, organizational records, and photographic collections may all contain information that sheds light on attitudes, cultural perspectives, common practices, and historical contexts relevant to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.

Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The large volume of data generated, combined with time restrictions, can limit the depth of analysis possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is considerable scope for researcher bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

References

Breuer, J., & Freud, S. (1895). Studies on hysteria. Standard Edition 2. London.

Curtiss, S. (1981). Genie: The case of a modern wild child.

Diamond, M., & Sigmundson, K. (1997). Sex reassignment at birth: Long-term review and clinical implications. Archives of Pediatrics & Adolescent Medicine, 151(3), 298-304.

Freud, S. (1909a). Analysis of a phobia of a five-year-old boy. In The Pelican Freud Library (1977), Vol. 8, Case Histories 1 (pp. 169-306).

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch., I, 357-421; GW, VII, 379-463; Notes upon a case of obsessional neurosis, SE, 10: 151-318.

Harlow, J. M. (1848). Passage of an iron rod through the head. Boston Medical and Surgical Journal, 39, 389-393.

Harlow, J. M. (1868). Recovery from the passage of an iron bar through the head. Publications of the Massachusetts Medical Society, 2(3), 327-347.

Money, J., & Ehrhardt, A. A. (1972). Man & Woman, Boy & Girl: The differentiation and dimorphism of gender identity from conception to maturity. Baltimore, MD: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study


Case Study Evaluation: Past, Present and Future Challenges: Volume 15

Table of Contents

Case Study, Methodology and Educational Evaluation: A Personal View

This chapter gives one version of the recent history of evaluation case study. It looks back over the emergence of case study as a sociological method, developed in the early years of the 20th century and celebrated and elaborated by the Chicago School of urban sociology at the University of Chicago throughout the 1920s and 1930s. Some of the basic methods, including constant comparison, were generated at that time. Only partly influenced by this methodological movement, an alliance between an Illinois-based team in the United States and a team at the University of East Anglia in the United Kingdom recast the case method as a key tool for the evaluation of social and educational programmes.

Letters from a Headmaster ☆ Originally published in Simons, H. (Ed.) (1980). Towards a Science of the Singular: Essays about Case Study in Educational Research and Evaluation. Occasional Papers No. 10. Norwich, UK: Centre for Applied Research, University of East Anglia.

Storytelling and Educational Understanding ☆ Previously published in Occasional Papers #12, Evaluation Centre, University of Western Michigan, 1978.

The full ‘storytelling’ paper was written in 1978 and was influential in its time. It is reprinted here, introduced by an Author's reflection on it in 2014. The chapter describes the author’s early disenchantment with traditional approaches to educational research.

He regards educational research as, at best, a misnomer, since little of it is preceded by a search . Entitled educational researchers often fancy themselves as scientists at work. But those whom they attempt to describe are often artists at work. Statistical methodologies enable educational researchers to measure something, but their measurements can neither capture nor explain splendid teaching.

Since such a tiny fraction of what is published in educational research journals influences school practitioners, professional researchers should risk trying alternative approaches to uncovering what is going on in schools.

Story telling is posited as a possible key to producing insights that inform and ultimately improve educational practice. It advocates openness to broad inquiry into the culture of the educational setting.

Case Study as Antidote to the Literal

Much programme and policy evaluation yields to the pressure to report on the productivity of programmes and is perforce compliant with the conditions of contract. Too often the view of these evaluations is limited to a literal reading of the analytical challenge. If we are evaluating X we look critically at X1, X2 and X3. There might be cause for embracing adjoining data sources such as W1 and Y1. This ignores frequent realities that an evaluation specification is only an approximate starting point for an unpredictable journey into comprehensive understanding; that the specification represents only that which is wanted by the sponsor, and not all that may be needed; and that the contractual specification too often insists on privileging the questions and concerns of a few. Case study evaluation provides an alternative that allows for the less-than-literal in the form of analysis of contingencies – how people, phenomena and events may be related in dynamic ways, how context and action have only a blurred dividing line and how what defines the case as a case may only emerge late in the study.

Thinking about Case Studies in 3-D: Researching the NHS Clinical Commissioning Landscape in England

What is our unit of analysis and by implication what are the boundaries of our cases? This is a question we grapple with at the start of every new project. We observe that case studies are often referred to in an unreflective manner and are often conflated with geographical location. Neat units of analysis and clearly bounded cases usually do not reflect the messiness encountered during qualitative fieldwork. Others have puzzled over these questions. We briefly discuss work to problematise the use of households as units of analysis in the context of apartheid South Africa and then consider work of other anthropologists engaged in multi-site ethnography. We have found the notion of ‘following’ chains, paths and threads across sites to be particularly insightful.

We present two examples from our work studying commissioning in the English National Health Service (NHS) to illustrate our struggles with case studies. The first is a study of Practice-based Commissioning groups and the second is a study of the early workings of Clinical Commissioning Groups. In both instances we show how ideas of what constituted our unit of analysis and the boundaries of our cases became less clear as our research progressed. We also discuss pressures we experienced to add more case studies to our projects. These examples illustrate the primacy for us of understanding interactions between place, local history and rapidly developing policy initiatives. Understanding cases in this way can be challenging in a context where research funders hold different views of what constitutes a case.

The Case for Evaluating Process and Worth: Evaluation of a Programme for Carers and People with Dementia

A case study methodology was applied as a major component of a mixed-methods approach to the evaluation of a mobile dementia education and support service in the Bega Valley Shire, New South Wales, Australia. In-depth interviews with people with dementia (PWD), their carers, programme staff, family members and service providers and document analysis including analysis of client case notes and client database were used.

The strengths of the case study approach included: (i) simultaneous evaluation of programme process and worth, (ii) eliciting the theory of change and addressing the problem of attribution, (iii) demonstrating the impact of the programme on earlier steps identified along the causal pathway, (iv) understanding the complexity of confounding factors, (v) eliciting the critical role of the social, cultural and political context, (vi) understanding the importance of influences contributing to differences in programme impact for different participants, and (vii) providing insight into how programme participants experience the value of the programme, including unintended benefits.

The broader case, the collective experience of dementia and, as part of this experience, the impact of a mobile programme of support and education in a predominantly rural area, grew from the investigation of the programme experience of ‘individual cases’ of carers and PWD. Investigation of living conditions, relationships and service interactions through observation, together with more in-depth interviews with service providers and family members, would have provided valuable perspectives and a thicker description, strengthening both the understanding of the case and the evaluation.

The Collapse of “Primary Care” in Medical Education: A Case Study of Michigan’s Community/University Health Partnerships Project

This chapter describes a case study of a social change project in medical education (primary care), in which the critical interpretive evaluation methodology I sought to use came up against the “positivist” approach preferred by senior figures in the medical school who commissioned the evaluation.

I describe the background to the study and justify the evaluation approach and methods employed in the case study – drawing on interviews, document analysis, survey research, participant observation, literature reviews, and critical incidents – one of which was the decision by the medical school hierarchy to restrict my contact with the lay community in my official evaluation duties. The use of critical ethnography also embraced wider questions about circuits of power and the social and political contexts within which the “social change” effort occurred.

Central to my analysis is John Gaventa’s theory of power as “the internalization of values that inhibit consciousness and participation while encouraging powerlessness and dependency.” Gaventa argued, essentially, that the evocation of power has as much to do with preventing decisions as with bringing them about. My chosen case illustrated all three dimensions of power that Gaventa originally uncovered in his portrait of self-interested Appalachian coal mine owners: (1) communities were largely excluded from decision making power; (2) issues were avoided or suppressed; and (3) the interests of the oppressed went largely unrecognized.

The account is auto-ethnographic, hence the study is limited by my abilities, biases, and subject positions. I reflect on these in the chapter.

The study not only illustrates the unique contribution of case study as a research methodology but also its low status in the positivist paradigm adhered to by many doctors. Indeed, the tension between the potential of case study to illuminate the complexities of community engagement through thick description and the rejection of this very method as inherently “flawed” suggests that medical education may be doomed to its neoliberal fate for some time to come.

‘Lead’ Standard Evaluation

This is a personal narrative, but I trust not a self-regarding one. For more years than I care to remember I have been working in the field of curriculum (or ‘program’) evaluation. The field by any standards is dispersed and fragmented, with variously ascribed purposes, roles, implicit values, political contexts, and social research methods. Attempts to organize this territory into an ‘evaluation theory tree’ (e.g. Alkin, M., & Christie, C. (2003). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 12–65). Thousand Oaks, CA: Sage) have identified broad types or ‘branches’, but the migration of specific characteristics (like ‘case study’) or individual practitioners across the boundaries has tended to undermine the analysis at the level of detail, and there is no suggestion that it represents a cladistic taxonomy. There is, however, general agreement that the roots of evaluation practice tap into a variety of cultural sources, being grounded bureaucratically in (potentially conflicting) doctrines of accountability and methodologically in discipline-based or pragmatically eclectic formats for systematic social enquiry.

In general, this diversity is not treated as problematic. The professional evaluation community has increasingly taken the view (‘let all the flowers grow’) that evaluation models can be deemed appropriate across a wide spectrum, with their appropriateness determined by the nature of the task and its context, including in relation to hybrid studies using mixed models or displaying what Geertz (Geertz, C. (1980/1993). Blurred genres: The refiguration of social thought. The American Scholar, 49(2), 165–179) called ‘blurred genres’. However, from time to time historic tribal rivalries re-emerge as particular practitioners feel the need to defend their modus operandi (and thereby their livelihood) against paradigm shifts or governments and other sponsors of program evaluation seeking for ideological reasons to prioritize certain types of study at the expense of others. The latter possibility poses a potential threat that needs to be taken seriously by evaluators within the broad tradition showcased in this volume, interpretive qualitative case studies of educational programs that combine naturalistic (often ‘thick’; Geertz, C. (1973). Thick description: Towards an interpretive theory of culture. In The interpretation of culture (pp. 3–30). New York, NY: Basic Books) description with a values-orientated analysis of their implications. Such studies are more likely to seek inspiration from anthropology or critical discourse analysis than from the randomised controlled trials familiar in medical research or laboratory practice in the physical sciences, despite the impressive rigour of the latter in appropriate contexts. It is the risk of ideological allegiance that I address in this chapter.

Freedom from the Rubric

Twice-Told Tales: How Public Inquiry Could Inform N of 1 Case Study Research

This chapter considers the usefulness and validity of public inquiries as a source of data and preliminary interpretation for case study research. Using two contrasting examples – the Bristol Inquiry into excess deaths in a children’s cardiac surgery unit and the Woolf Inquiry into a breakdown of governance at the London School of Economics (LSE) – I show how academics can draw fruitfully on, and develop further analysis from, the raw datasets, published summaries and formal judgements of public inquiries.

Academic analysis of public inquiries can take two broad forms, corresponding to the two main approaches to individual case study defined by Stake: instrumental (selecting the public inquiry on the basis of pre-defined theoretical features and using the material to develop and test theoretical propositions) and intrinsic (selecting the public inquiry on the basis of the particular topic addressed and using the material to explore questions about what was going on and why).

The advantages of a public inquiry as a data source for case study research typically include a clear and uncontested focus of inquiry; the breadth and richness of the dataset collected; the exceptional level of support available for the tasks of transcribing, indexing, collating, summarising and so on; and the expert interpretations and insights of the inquiry’s chair (with which the researcher may or may not agree). A significant disadvantage is that whilst the dataset collected for a public inquiry is typically ‘rich’, it has usually been collected under far from ideal research conditions. Hence, while public inquiries provide a potentially rich resource for researchers, those who seek to use public inquiry data for research must justify their choice on both ethical and scientific grounds.

Evaluation as the Co-Construction of Knowledge: Case Studies of Place-Based Leadership and Public Service Innovation

This chapter introduces the notion of the ‘Innovation Story’ as a methodological approach to public policy evaluation, which builds in greater opportunity for learning and reflexivity.

The Innovation Story is an adaptation of the case study approach and draws on participatory action research traditions. It is a structured narrative that describes a particular public policy innovation in the personalised contexts in which it is experienced by innovators. Its construction involves a discursive process through which involved actors tell their story, explain it to others, listen to their questions and co-construct knowledge of change together.

The approach was employed to elaborate five case studies of place-based leadership and public service innovation in the United Kingdom, The Netherlands and Mexico. The key findings are that spaces in which civic leaders come together from different ‘realms’ of leadership in a locality (community, business, professional managers and political leaders) can become innovation zones that foster inventive behaviour. Much depends on the quality of civic leadership, and its capacity to foster genuine dialogue and co-responsibility. This involves the evaluation seeking out influential ideas from below the level of strategic management, and documenting leadership activities of those who are skilled at ‘boundary crossing’ – for example, communicating between sectors.

The evaluator can be a key player in this process, as a convenor of safe spaces for actors to come together to discuss and deliberate before returning to practice. Our approach therefore argues for a particular awareness of the political nature of policy evaluation in terms of negotiating these spaces, and the need for politically engaged evaluators who are skilled in facilitating collective learning processes.

Evaluation Noir: The Other Side of the Experience

What are the boundaries of a case study, and what should new evaluators do when these boundaries are breached? How does a new evaluator interpret a breakdown of communication, and how do new evaluators protect themselves when an evaluation fails? This chapter discusses the journey of an evaluator new to the field of qualitative evaluative inquiry. Integrating the perspective of a senior evaluator, the authors reflect on three key experiences that informed the new evaluator. The authors hope to provide a rare insight into case study practice, as emotional issues turn out to be just as complex as the methodology used.

About the Editors


  • Jill Russell
  • Trisha Greenhalgh
  • Saville Kushner


Modelling environmental life cycle performance of alternative marine power configurations with an integrated experimental assessment approach: A case study of an inland passenger barge


There is pressure on the global shipping industry to move towards greener propulsion and fuel technologies to reduce greenhouse gas emissions. Hydrogen and electricity are both recognised as pathways to achieving net-zero. However, in evaluating the environmental performance of these alternative marine power configurations, conventional life cycle assessment (LCA) methods have limitations reflecting the varied nature of ship design and operational modes. Integrating LCA with experimental assessment could remedy this shortcoming in conventional approaches to data generation. The system energy demand data in this study were generated from the specific ship design and fed directly into the life cycle assessment. To demonstrate its effectiveness and potential, the approach was applied to a case study of an inland waterway vessel. Suitable hybrid PV/electricity/diesel and hydrogen-powered fuel cell systems for the case vessel were modelled, and hydrodynamic testing and dynamic system simulation were undertaken to provide ship performance data under various operational/environmental profiles. The LCA indicated that hydrogen and electrical propulsion technologies have the potential for 85.7 % and 56.2 % emissions reductions against an MGO base case, respectively. The results highlight that the implementation of both technologies is highly dependent on energy production pathways. Hydrogen systems reliant on fossil feedstocks risk an increase in emissions of up to 6.3 % against the MGO base case. Sensitivity analysis indicated that an electrical system with electricity production from 79.5 % renewables could achieve savings of 82.2 % in GHG emissions compared to the MGO base case. Crucially, the results demonstrate a further development of the LCA approach that enables a more accurate environmental performance evaluation of alternative marine power configurations, considering specific ship design and operational characteristics. Ultimately, this makes the results more meaningful for commercial operations and decision-making in the selection of alternative marine power systems to support the transition to net-zero.

Keywords: Electric; Hydrogen; Life cycle assessment (LCA); Maritime decarbonisation; Propulsion; Zero‑carbon.
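As a worked illustration only (not taken from the paper), the short Python sketch below shows the percentage-reduction arithmetic behind figures like those reported above; the absolute lifecycle emission values are hypothetical placeholders normalised so that the MGO base case equals 100, and only the 85.7 % and 56.2 % reduction figures come from the abstract.

```python
# Hypothetical worked example of base-case-relative GHG reduction (normalised values, not real data).

def reduction_vs_base(base_emissions: float, alternative_emissions: float) -> float:
    """Percentage GHG reduction of an alternative power configuration relative to the base case."""
    return (base_emissions - alternative_emissions) / base_emissions * 100.0

mgo_base = 100.0                    # normalised lifecycle GHG emissions of the MGO base case
hydrogen = mgo_base * (1 - 0.857)   # placeholder consistent with the reported 85.7 % reduction
electric = mgo_base * (1 - 0.562)   # placeholder consistent with the reported 56.2 % reduction

print(f"Hydrogen fuel cell: {reduction_vs_base(mgo_base, hydrogen):.1f} % reduction vs MGO")
print(f"Electric propulsion: {reduction_vs_base(mgo_base, electric):.1f} % reduction vs MGO")
```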

Copyright © 2024. Published by Elsevier B.V.


Case study research for better evaluations of complex interventions: rationale and challenges

Sara Paparini

1 Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Judith Green

2 Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Chrysanthi Papoutsi

Jamie Murdoch

3 School of Health Sciences, University of East Anglia, Norwich, UK

Mark Petticrew

4 Public Health, Environments and Society, London School of Hygiene & Tropical Medicine, London, UK

Trish Greenhalgh

Benjamin Hanckel

5 Institute for Culture and Society, Western Sydney University, Penrith, Australia

Associated Data

Not applicable (article based on existing available academic publications)

The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.

The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 – 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 – 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. For evaluative research, exploratory case studies have had a traditional role of providing data on ‘process’, or initial ‘hypothesis-generating’ scoping, but might also have an increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes, or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].
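To make the set-theoretic logic of QCA concrete, here is a minimal, purely illustrative sketch of a crisp-set truth table with consistency scores; the cases, the conditions (universal_access, good_transport) and the outcome (health_gain) are invented for this example, and real analyses would normally use dedicated tooling such as the R QCA package rather than hand-rolled code.

```python
# Illustrative crisp-set QCA-style truth table (hypothetical data, not from the paper).
from collections import defaultdict

# Each case records binary (crisp-set) membership in two conditions plus the outcome.
cases = [
    {"case": "SiteA", "universal_access": 1, "good_transport": 1, "health_gain": 1},
    {"case": "SiteB", "universal_access": 1, "good_transport": 0, "health_gain": 0},
    {"case": "SiteC", "universal_access": 0, "good_transport": 1, "health_gain": 0},
    {"case": "SiteD", "universal_access": 1, "good_transport": 1, "health_gain": 1},
]
conditions = ["universal_access", "good_transport"]

# Group cases by their configuration of conditions: one truth-table row per configuration.
rows = defaultdict(list)
for c in cases:
    config = tuple(c[k] for k in conditions)
    rows[config].append(c["health_gain"])

# Consistency = share of cases in a configuration that exhibit the outcome.
# A configuration with consistency 1.0 across enough cases is a candidate
# (combination of) sufficient condition(s) for the outcome.
for config, outcomes in sorted(rows.items()):
    consistency = sum(outcomes) / len(outcomes)
    print(dict(zip(conditions, config)), f"n={len(outcomes)}", f"consistency={consistency:.2f}")
```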

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].
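For readers less used to this vocabulary, the standard logical rendering of the distinction Gerring draws (our notation, not the article's) is:

```latex
% X is sufficient for Y: whenever X occurs, Y occurs
X \Rightarrow Y
% X is necessary for Y: Y cannot occur without X
Y \Rightarrow X \quad (\text{equivalently } \neg X \Rightarrow \neg Y)
```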

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, which relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power, if nonetheless useful for adding descriptive data on process or providing useful illustrations for policymakers [ 37 ]. This is an opportune moment to revisit this low status. As health researchers are increasingly charged with evaluating ‘natural experiments’—the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]—rather than interventions that take place in settings that can be controlled, research approaches using methods to strengthen causal inference that does not require randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify that case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported lack of willingness of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where they do, however, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking and appear to be used as a synonym for the settings in which the research is conducted. Despite offering insights for evaluation, the primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [ 46 ] has suggested a typology of subject/purpose/approach/process for classifying aims (e.g. evaluative or exploratory), sample rationale and selection and methods for data generation of case studies. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences—for example Lincoln and Guba’s [ 47 ] and Stake’s [ 33 ]—are primarily orientated to the quality of narrative produced, and the extent to which they encapsulate thick description, rather than the more pragmatic issues of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidelines for medical case reports about single, or small groups of patients [ 48 ], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [ 44 ] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research where the aim is to report empirical findings in ways that are intended to be pragmatically useful for health policy and practice, this may be an opportune time to consider how to best navigate plurality around what is (minimally) important to report when publishing empirical case studies, especially with regards to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [ 8 ] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in and the transferability of system change intervention [ 14 , 20 , 23 , 25 ]. Case study research evidence is essential, yet is currently under exploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies which do aim to elucidate the contextual factors which interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge to develop guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; for situating and defining ‘the case’, and generalising from case studies; as well as provide specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Acknowledgements

Not applicable

Abbreviations

Authors’ contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Funding

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Availability of data and materials

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Evaluation and Comparison of Rock Bolting Versus Steel Arch Support Systems in Thick Coal Seam Underground Galleries: A Case Study

  • Open access
  • Published: 05 June 2024


  • Mehmet Mesutoglu   ORCID: orcid.org/0000-0002-5243-3962 1 &
  • Ihsan Ozkan   ORCID: orcid.org/0000-0002-8268-3188 1  

This study investigates the feasibility of rock bolting support in an underground coal mine gallery with a thick coal seam. The Ömerler underground coal mine working area, owned by the West Lignite Enterprise (GLI) of the Turkish Coal Enterprises (TKI), was selected for this purpose. Longwall top coal caving (LTCC) is implemented as the production method in the Ömerler underground coal mine. Field and laboratory studies were conducted to determine rock mass and rock material properties, followed by experimental, empirical, and numerical analyses based on the acquired data. The resulting design was evaluated in pilot application areas supported by a resin-grouted rebar (RBR) rock bolting system and by steel arch support (SAS). Numerical modeling conducted with the Fast Lagrangian Analysis of Continua 3D (FLAC 3D, v6.0) program indicated less displacement and smaller secondary stress change in the RBR-supported zone than in the SAS-supported zone. In situ measurements also demonstrated that RBR provided more successful support to the roof during coal production activities. The findings suggest that RBR is a more effective solution when evaluating the feasibility of rock bolting support systems in underground galleries with thick coal seams at the Ömerler underground coal mine. This study emphasizes the importance of more sustainable and safe support systems to enhance operational efficiency in the coal mining industry.


1 Introduction

Coal, a significant component of global energy production, continues to play a crucial role in the world economy [ 1 , 2 , 3 , 4 ]. In 2022, global coal demand reached its highest level to date [ 5 ]. While this trend persists, there is a need to enhance efficiency and undertake improvement efforts in coal mining to contribute to the overarching goal of energy sustainability. The rise in global coal demand is causing a depletion of open-pit reserves where coal production takes place, leading to a transition towards deeper underground mining operations [ 6 ]. This transition involves creating galleries for underground coal production, with a particular focus on faster and more dependable support systems, to reinforce sustainability and operational efficiency within the coal mining industry.

In recent decades, underground coal mines have adopted mechanized excavation systems, particularly employing the longwall mining technique, to facilitate high-volume coal production [ 7 , 8 , 9 , 10 ]. However, this implementation has resulted in the formation of numerous mine galleries. The traditional roof support system utilizing steel sets in these galleries can adversely impact their daily advancement rates, unit costs, and safety, especially in rock mass environments characterized by very weak, weak, and moderately strong strengths [ 11 , 12 ].

Over the past four decades, there have been notable advancements in rock bolting support systems, driven by improved insights into load transfer mechanisms and the evolution of rock bolting technology [ 13 , 14 , 15 , 16 ].

Distinguished from traditional support systems like steel sets, this system facilitates faster advancement rates, reduced unit costs, and enhanced safety through its active support capabilities. In contrast to passive support systems such as steel sets, rock bolts engage more rapidly with the rock mass to initiate their supporting function, resulting in less deformation and facilitating safer and swifter gallery advancement [ 17 ]. Consequently, it has been successfully implemented as the primary supporting element in underground coal mines across various regions globally [ 18 , 19 ].

In underground coal production activities, especially in thick coal seams, detailed preliminary design studies should be conducted for the proper reinforcement of gateroads with rock bolting. The empirical approaches, commonly used in project works today, are developed by combining observations, measurements, experience, engineering intuition, and judgments. These approaches can provide predictions for support parameters such as rock bolt length and spacing, dimensions and intervals of steel sets, and thickness of shotcrete for the stability of underground openings. Eleven different empirical approaches can provide design outputs for rock bolting support. It is known that six of these approaches are based on the widely used RMR rock mass classification system [ 20 , 21 , 22 , 23 , 24 , 25 ], two are based on the widely used Q rock mass classification system [ 26 , 27 ], and two are based on the RQD system [ 28 , 29 ]. Additionally, there are two further design approaches, one based on Panek [ 30 ] and one based on the number of discontinuity sets and the dip angle of the discontinuities.
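For orientation, the hedged Python sketch below shows how two widely cited rules of thumb from this family of approaches — the Q-system roof-bolt length rule and Unal's RMR-based rock-load height — translate classification ratings into preliminary support numbers. The gallery span, RMR, ESR, and unit weight in the example are hypothetical placeholders, not the values used in this study's design.

```python
# Illustrative sketch only -- not the design calculation used in this study.
# Two widely cited classification-based rules of thumb:
#   - Q-system roof-bolt length (Barton et al.): L = 2 + 0.15 * B / ESR
#   - Unal's RMR-based rock-load height:         h_t = B * (100 - RMR) / 100
# The span, RMR, ESR, and unit weight below are hypothetical placeholders.

def bolt_length_q(span_m: float, esr: float = 1.6) -> float:
    """Roof bolt length (m) from the Q-system rule of thumb."""
    return 2.0 + 0.15 * span_m / esr

def rock_load_height_unal(span_m: float, rmr: float) -> float:
    """Unal's rock-load height (m) above a roadway of width span_m."""
    return span_m * (100.0 - rmr) / 100.0

def support_pressure_unal(span_m: float, rmr: float, unit_weight_kn_m3: float = 25.0) -> float:
    """Roof support pressure (kPa) = unit weight * rock-load height."""
    return unit_weight_kn_m3 * rock_load_height_unal(span_m, rmr)

if __name__ == "__main__":
    span, rmr = 5.0, 45.0  # hypothetical gallery width (m) and RMR rating
    print(f"Q-system bolt length : {bolt_length_q(span):.2f} m")
    print(f"Unal rock-load height: {rock_load_height_unal(span, rmr):.2f} m")
    print(f"Unal support pressure: {support_pressure_unal(span, rmr):.1f} kPa")
```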

Mathematical-based and assumption-dependent numerical design approaches have been developed to determine stress and deformation behaviors in the opening zones of underground and surface rock engineering excavations. In today’s design practices, two- and three-dimensional numerical analysis programs developed in many significant studies are utilized [ 31 , 32 , 33 , 34 , 35 ]. In the two- and three-dimensional models presented in numerical analyses, alongside rock material and rock mass parameters, input parameters for support, such as rock bolt length, spacing, and shotcrete thickness, need to be initially defined. The necessity to perform hundreds of numerical simulations to determine unknown optimal support parameters is considered a serious problem. Additionally, the presentation of field stresses and displacement quantities, which cannot always be measured, using numerical analyses necessitates the careful use of numerical approaches, especially in complex field conditions [ 12 ].

This study aims to analyze the feasibility of rock bolting in an underground coal mine gallery with a thick coal seam, which is currently using steel supports, through numerical modeling. Within this scope, the Ömerler Underground Mine, located in the West Lignite Enterprise (GLI), which is a part of the Turkish Coal Enterprises (TKI) and situated in the Tavşanlı district of Kütahya province, has been selected as the study area. Coal production in the Ömerler underground coal mine, which possesses a thick coal seam, is carried out using mechanized mining methods. The mining operation, which utilizes the longwall top coal caving method (LTCC) as its production method, employs a self-advancing hydraulic powered roof support system (shields) within the coal face while using a steel arch support system in the main haulage galleries and the gates belonging to the panels.

To examine the applicability of rock bolting support in the mentioned headgate, in situ and laboratory studies were conducted to determine rock mass and material properties. Based on the obtained database, empirical and numerical analyses were performed to design an appropriate rock bolting system for the field. FLAC3D v6.0 finite difference method modeling software was used for numerical modeling. With the resulting design, the resin-grouted rebar rock bolting system (RBR) and steel arch support (SAS) were implemented in the pilot application area. In situ monitoring activities using various methods were conducted to assess the performance differences between the rock bolting systems and the steel arch support system.

2 Methodology

2.1 Study Site

TKI-GLI Ömerler Underground coal mine is located in the town of Tunçbilek (Tavşanlı district of Kütahya province) in Turkey (Fig.  1 ).

Figure 1. Location of study site

The rock units within the Tunçbilek series are grouped into three main categories, namely clay stone, calcareous marl, and marl. The clay stone formation, which surrounds the coal seam, is also subdivided into three subgroups. These subgroups consist of the soft clay layer located immediately above the coal seam with a thickness ranging from 20 to 50 cm, the roof clay forming the main roof rock of this formation, and the floor clay formations situated beneath the coal seam (Fig.  2 ).

Figure 2. Lithology of geological structure

In the GLI Tunçbilek coal basin, underground coal production has been carried out in the Ömerler-A section. In this underground mine, a fully mechanized mining system is used, and coal extraction is performed using the LTCC method. The thick coal seam, averaging 8 m in thickness, is excavated using a single-pass method for the lower 3.5 m, while the remaining approximately 5 m at the roof level is extracted through the caving process. The cross-sectional view of the mining method is presented in Fig.  2 .

In the basin, strata generally have dip angles ranging from 5 to 20° toward the northeast. The coal reserve within the study area is estimated to be around 18 million tons. The coal seam thickness varies between 5 and 12 m, with an average thickness of 8 m [ 36 , 37 , 38 ]. The coal seam contains clay partings of approximately 15–30 cm thickness at various levels. The deepest working section in the underground mine is located at an elevation of +469 m, and the thickness of the overlying strata is approximately 330 m.

The study area is the A6 longwall panel in the Ömerler underground coal mine (Fig.  3 ). Rock mass and rock material property determination studies for coal and surrounding rock were carried out in the A1, A2, and A6 longwall panels. Empirical design studies and numerical modeling studies, rock bolting applications, and monitoring activities were conducted in the headgate of the A6 panel (Fig.  3 ).

Figure 3. The view of the A6 panel which was considered in rock bolting support design studies on the mine layout

2.2 Determination of Rock Mass and Rock Material Properties

In order to design and implement rock bolting support for the A6 longwall panel gateroads, a series of in situ and laboratory investigations were executed to ascertain rock mass and rock material properties [ 11 , 12 ]. These studies not only encompassed the A6 longwall panel where the actual implementation occurred but also included examinations in the A1 and A2 longwall panels, where preparatory and production activities were concurrently in progress. Drilling operations and block extraction studies were carried out in specific underground zones to assess rock mass and material properties, as well as for classification studies.

Underground drilling activities were carried out in the production panel, encompassing four directions. Additionally, 50 blocks were extracted from the A1 and A2 panels and transported to the rock mechanics laboratory for subsequent sample preparation. Schmidt hammer rebound hardness tests (N-type), point load strength index tests (Is 50 ), and plate loading tests were systematically performed in the A1, A2, and A6 panels.

All rock mechanics tests were executed on the rock material samples derived from the transported blocks and drilling cores. The resulting database is comprehensively presented in Table  1 . Field-based Geological Strength Index (GSI) classification studies were conducted in the A1 preparatory gallery to classify the rock mass. The determined values, coupled with outcomes from other rock mass classification systems, were computed and are presented in Table  2 .

Accurate predictions regarding the stability of underground openings require an understanding of the mechanical properties of the rock mass and measurements of principal stresses in the environment [ 39 ]. In line with this, studies on principal stress analysis were undertaken within the A1 longwall panel of the Ömerler underground coal mine (Fig.  4 ).

Figure 4. An example of principal in situ stress measurement on a fault and definition and measurement of fault lines

Following Aydan’s method [ 40 ] for determining principal in situ stresses through the fault slip approach, the analysis outcomes revealed that the maximum horizontal stress is predominantly aligned in the north–south direction. Additionally, it was established that at a depth of 300 m, the most significant horizontal principal stress (P_H = 6.74 MPa) in the A1 panel aligns parallel to the gate axis.

2.3 Deriving Initial Design Outcomes via Empirical Methods

The data obtained from experimental studies, observations, and examinations (Table  3 ) have been utilized to ascertain the design parameters for rock bolts through empirical methodologies. Specific analyses tailored to empirical techniques were conducted to define crucial dimensions such as bolt length ( L ) and bolt spacing ( S ) [ 11 , 12 ]. The summarized outcomes are delineated in Table  4 .

The determination of the rock bolts’ quantity ( N ) involved separate empirical approaches. After reviewing the N values outlined in Table  4 , the average N value was computed as 6. However, empirical methods also recommend incorporating shotcrete and/or steel mesh alongside rock bolting to enhance face stability.

In addition to the rock bolting design obtained through empirical approaches, following a comprehensive assessment of field inspections, observations, and engineering experiences, it has been determined that seven rock bolts will be applied for each line. The average values specified in Table  4 , along with these assessments, have influenced the design outcome shown in Fig.  5 . Additionally, Fig.  5 includes the layout for the coal seam.

Figure 5. Rock bolt design based on empirical design results

As shown in Fig. 5, the cross-section of the gallery illustrates the utilization of seven rock bolts, featuring a bolt length (L) of 3.3 m, a spacing between bolts (S_1) of 1.0 m at the gallery face, and an interval between bolt lines (S_2) of 1.0 m along the gallery axis. Based on field observations, measurements, experience, and engineering insights, the plan includes the installation of three roof-anchored rock bolts on the gallery roof in the curved section (P1, M, T1). Likewise, two inclined rock bolts with inclinations of 70° and 50° (T2 and T3, respectively) have been positioned on the pillar (T) side, and on the face (P) side, two inclined rock bolts with inclinations of 70° and 50° (P2 and P3, respectively) have been similarly installed.

2.4 Numerical Modeling

The initial design results illustrated in Fig. 5 underwent three-dimensional numerical analyses using FLAC3D v6.0 for the designated A6 panel in this investigation [ 11 ]. The analyses conducted are described below.

2.4.1 Modeling Procedure

During the modeling of the A6 longwall panel in the Ömerler underground coal mine, the current state of the mine was considered. The solid model incorporates the previously mined and subsided A5 panel, the planned pilot application A6 panel, and the untouched A7 panel on the opposite side of the A6 panel, as depicted in the underground mine map (Fig.  3 ). In the model geometry (Fig.  6 ), the z-direction signifies depth, the y-direction signifies the length of the longwall panel, and the x-direction represents the length of the longwall face. The model dimensions were defined as + x direction 300 m, −z direction 200 m, and + y direction 500 m.

In the model, the longwall face length in the + x direction is considered to be 90 m with a pillar width of 20 m. In the −z direction, the main strata have a thickness of 11 m, followed by a 140-m claystone unit above the coal seam, 10 m of backfill material above the claystone, and a 39-m claystone unit below the coal seam. The + y direction is defined as 500 m.

To analyze the dynamic effects arising from production in the gallery (longwall panel), it is assumed that the point where the longwall panel eliminates the initial backfill effect is at 450 m. This point is regarded as the starting point of the longwall excavation, implying that the initial 50 m of the panel has been worked and is left as a subsided area (Fig.  6 ).

Figure 6. The geometry and details of the model created in FLAC 3D

The model geometry utilizes rectangular and square-shaped brick elements. The longwall excavation is presumed to be conducted in 1-m-thick slices, and for analyzing the dynamic effects it induces, the longwall panel is subdivided into 1-m grids between 450 and 400 m. The remaining sections are further divided into 5-m and 10-m grids. Consequently, the model comprises a total of 361,665 zones and 376,320 nodes (Fig.  6 ).

Boundary conditions in a numerical model involve the predetermined values of field variables (such as stress and displacement) set at the grid’s boundaries. Boundaries fall into two categories: real and artificial. Real boundaries correspond to features present in the physical object being modeled (e.g., a tunnel surface or the ground surface). Artificial boundaries, although nonexistent in reality, must be introduced to enclose the chosen number of zones. For the Ömerler underground coal mine model, roller boundaries are established on the left, right, front, and rear boundaries of the grid, while the bottom of the grid remains fixed. The results from principal stress analyses conducted in situ are incorporated into the model, with an initial condition of K_0 = 0.473 (σ_h/σ_v) assigned, and the gravitational effect is also defined. In the FLAC 3D program used to create the model for the A6 longwall panel, gob material properties are specified, incorporating equations found in the literature. The mechanical behavior of the gob is represented by the double-yield model implemented in FLAC 3D.
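As a simple illustration of the gravitational initial condition described above, the hedged Python sketch below computes a vertical stress from overburden weight and scales it by the horizontal-to-vertical ratio K_0; the unit weight is a placeholder rather than a value reported for the site.

```python
# Minimal sketch of a gravitational initial stress state with K0 = sigma_h / sigma_v.
# The unit weight is a placeholder, not a site-specific value reported in the study.

def initial_stresses(depth_m: float, k0: float = 0.473, unit_weight_mn_m3: float = 0.025):
    """Return (sigma_v, sigma_h) in MPa for a given depth under simple gravity loading."""
    sigma_v = unit_weight_mn_m3 * depth_m  # vertical stress from overburden weight
    sigma_h = k0 * sigma_v                 # horizontal stress from the K0 ratio
    return sigma_v, sigma_h

for depth in (100.0, 200.0, 300.0):
    sv, sh = initial_stresses(depth)
    print(f"depth {depth:5.0f} m: sigma_v = {sv:.2f} MPa, sigma_h = {sh:.2f} MPa")
```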

Pappas and Mark [ 41 ] investigated the behavior of longwall gob material through laboratory tests, concluding that the equation proposed by Salamon [ 42 , 43 ] in the gob model yielded results closest to laboratory tests. The Salamon gob model is commonly written as Eq. (1):

σ = E_0 · ε / (1 − ε/ε_m)     (1)

In Eq. (1), σ represents the uniaxial stress (MPa) on the material, ε denotes the unit deformation of the material under stress, E_0 stands for the initial tangent modulus (MPa), and ε_m represents the maximum unit deformation possible in the compacted rock material.

Equation ( 1 ) was employed in modeling studies to ascertain the mechanical behavior of the gob. With each 1-m advancement in the face, the A6 panel is associated with a double-yield constitutive model for the 1-m section behind the coal face. Consequently, in the model, during the early stages of face advancement, the gob region represents an area that is broken and collapsed, incapable of withstanding the pressure from the roof. In this region, the gob undergoes slow compression, resulting in increased roof stresses.
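To illustrate the compaction behavior this implies, the hedged Python sketch below evaluates the Salamon relation of Eq. (1) for a few strains, showing how the gob stiffens as the strain approaches ε_m; the parameter values are placeholders, not the calibrated values of Table 5.

```python
# Sketch of the Salamon gob stress-strain law, sigma = E0 * eps / (1 - eps / eps_m).
# E0 and eps_m below are placeholders, not the calibrated values of Table 5.

def salamon_stress(eps: float, e0_mpa: float = 15.0, eps_max: float = 0.5) -> float:
    """Uniaxial gob stress (MPa) at strain eps; stiffens sharply as eps approaches eps_max."""
    if not 0.0 <= eps < eps_max:
        raise ValueError("strain must satisfy 0 <= eps < eps_max")
    return e0_mpa * eps / (1.0 - eps / eps_max)

for eps in (0.05, 0.15, 0.30, 0.45):
    print(f"eps = {eps:.2f} -> sigma = {salamon_stress(eps):6.1f} MPa")
```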

For the A6 longwall panel model, the equation governing volumetric unit deformation behavior, along with deformation change values, is outlined in Table  5 and expressed in Eq. ( 2 ) (Tables 6 , 7 ).

In FLAC 3D, beam structural elements were used to simulate the SAS supports of the A6 longwall panel gates in the Ömerler underground mine. These structural elements are characterized by their geometric and material properties within the FLAC 3D program. For the modeling of the RBR, pile structural elements were employed. In addition, shell structural elements with an elastic modulus (E) of 180 GPa, a Poisson's ratio (ν) of 0.3, and a thickness of 45 cm were implemented to represent the self-advancing hydraulic roof support units.

2.4.2 Identification and Definition of Monitoring Zones in the Model Geometry

Two separate models have been crafted for three-dimensional analyses. The model reinforced with a steel arch on galleries featuring a horseshoe cross-section is identified as SAS, while the model strengthened with resin-grouted rebar rock bolts is labeled as RBR. In both models, successive activities of preparation (stage 1) and retreat production (stage 2) are presumed to occur. To monitor the stresses and deformations produced in the model, a total of 120 monitoring points have been established. In assessing the numerical analysis results, focus has been given to two station points positioned above the material gallery (headgate) adjacent to panels A5 and A6. These station points are U9 at 300 m and U3 at 429 m along the material gallery (Fig. 7).

Figure 7. Placement of the monitoring points in the numerical model

2.4.3 Assumptions and Limitations in the 3D Model

Throughout the modeling process, specific assumptions and constraints were considered. These include:

In the model studies, σ_1 is presumed to be vertical (in the −z direction), while σ_2 and σ_3 are considered horizontal (in the x and y directions).

The length of the A6 longwall panel, originally spanning between 400 and 450 m, was approximated as 500 m in the y direction within the model.

In the model, the shield support units were represented using shell structural elements, the SAS with beam structural elements, and the RBR with pile structural elements.

The dip angle of the coal seam where the A6 longwall panel is situated was assumed to be 0° in the model. Additionally, groundwater was disregarded in the modeling studies, as excavation works are conducted above the underground water table.

2.5 Pilot Application and Monitoring Studies

At the TKI-GLI Ömerler underground coal mine, pilot application and monitoring studies were carried out to analyze the performance of the two different support systems [ 11 , 12 ]. In the coal mine, a 45-m section located at the headgate of the A6 panel has been designated as a pilot area for testing and monitoring (Fig. 8). The commonly used SAS method in the mine has been employed as support in the initial 20-m section of this area. Convergence measurement stations (CO) have been established at three points to monitor displacements associated with coal production in the SAS zone.

Following the completion of the SAS zone, a rock bolting design specified in Fig.  5 has been implemented for the RBR support system in the 25-m section. To address potential safety issues arising from the RBR application in this 25-m zone, previously existing steel arches have been loosened and put into a passive state. Convergence measurement stations (CO) have been established at five points in the RBR zone to monitor displacements associated with coal production (Fig.  8 ).

Figure 8. The application zones in the headgate and the number of CO stations in the A6 panel

During the RBR application, after scanning the roof, three rock bolts (P1, M, T1) were initially placed in the roof. Subsequently, angled rock bolts (P2, P3, T2, T3) were installed on both sides of the gallery at angles of 50° and 70°. The rock bolt length ( L ) and spacing ( S ) were taken as 3.3 m and 1 m, respectively, based on the design detailed in Fig.  5 (Fig.  9 ). Holes for rock bolts were drilled using a drilling machine with a 28-mm drill bit. Four resin cartridges with a diameter of 23 mm and a length of 60 cm were placed in each hole. The solidification time for the used resins is 180 s.

For the performance evaluation of the support systems in the pilot application area, monitoring systems have been implemented in both the 20-m SAS zone and the 25-m RBR zone. In each of these zones, sections equipped with convergence (CO) measurement stations at approximately 5-m intervals have been established (Fig. 8). One measurement was taken every shift, and measurements continued until the 45-m area was traversed and remained beneath the caving zone due to the in-seam production activities. Measurements were continuously collected from these stations over time and in conjunction with the progress of the longwall excavation.

Figure 9. Typical images of the stages of RBR installation

3 Results and Discussion

3.1 Numerical Modeling Results

The A6 panel, defined as a longwall panel with a headgate length of 500 m, was separately modeled for the SAS and RBR support systems. Each numerical model was run in two separate stages. In the first stage (stage 1), excavation and reinforcement of the gateroads for the A6 panel were performed. In the second stage (stage 2), the goaf behind the supports in the A6 panel was caved, the longwall face was prepared, and coal production was carried out at 1-m intervals [ 11 ].

The results of these two stages for both models were evaluated separately for the monitoring zone at the 300th meter of the headgate in the A6 panel (U9) and the monitoring point at the 429th meter (U3). Consequently, vertical displacements and changes in vertical secondary stresses were recorded at monitoring points U3 and U9, and the performances of SAS and RBR were assessed. Figure  10 presents the graphical representation of vertical displacement and changes in vertical secondary stresses for the SAS reinforcement model in stage 1. The data obtained from these graphs are specified in Table  8 .

Figure 10. (a) Vertical displacements and (b) vertical secondary stresses in the gallery roof at the 300th m of the headgate (monitoring point U9), during gateroad excavation and SAS reinforcement in the A6 panel (Stage-1)

The data in Table  8  and Fig.  10 show that vertical displacements and vertical secondary stresses remain very low in the first 200 m as the excavation face approaches the monitoring point located at the 300th meter of the headgate, denoted as U9. Even with 100 m remaining to reach the U9, the values appear to stay close to the initial primary values in the field. However, as the excavation face approaches the U9 at the 300th meter, vertical displacements and secondary stresses start to change rapidly, and when the excavation face reaches the U9, these values are U  = 57 mm and P =−40 kPa. After the excavation face passes the U9, vertical displacement continues to change rapidly up to the 400th meter. As the excavation face reaches from the 400th to the 500th meter in the gallery, it is understood that the vertical displacement values at the remaining U9 in the gallery change very little, reaching U  = 96.2 mm. Similarly, it is understood that the vertical stress values undergo very little change up to the 500th meter after the excavation face passes the U9. When the excavation face reaches the 500th meter in the gallery, it is observed that the vertical stress values at the remaining U9 point in the gallery remain almost constant, reaching P =−10 kPa.

The vertical displacement and secondary stress values at the U3 monitoring point located at 429 m on the A6 longwall panel (Fig.  7 ) were determined using FLAC 3D. The model outputs showing the vertical displacement and secondary stress values at the U3 during the period from the start of excavation in the longwall face to the 18th meter of the advancing excavation face in the completed preparation panel (stage 2) are presented in Fig.  11 . The vertical displacement and secondary stress values observed in the model outputs are presented in Table  9 .

Table  9  and Fig.  11 illustrate the longwall advancement at 1-m intervals in the model. The monitored U3 point is located at the 429th meter of the headgate, initially positioned 18 m behind the excavation face. As excavation progresses in the longwall face, displacements and stresses at the U3 monitoring point begin to change. While displacements are relatively minor within the first 10 m, they rapidly escalate within the final 8 m (Fig.  11 ). Upon the longwall excavation advancing 18 m and reaching the U3 monitoring point, the total displacement amounts to U  = 376 mm. Initially characterized by tensile stresses, stress variations subsequently transition to compressive stresses. Upon complete advancement of the longwall excavation to the U3 monitoring point, the stress value in the gallery roof amounts to P =−1.8 kPa.

Similar to the SAS support model, the gateroads of the A6 longwall panel were modeled using the RBR support system. Vertical displacement and secondary stress results at the monitoring point labeled U9 are presented in Fig.  12  and Table  10 .

Figure 11. (a) Vertical displacements and (b) vertical secondary stresses in the gallery roof at the 429th m of the headgate (monitoring point U3), during gateroad excavation and SAS reinforcement in the A6 panel (Stage-2)

Figure 12. (a) Vertical displacements and (b) vertical secondary stresses in the gallery roof at the 300th m of the headgate (monitoring point U9), during gateroad excavation and RBR reinforcement in the A6 panel (Stage-1)

Table  10  and Fig.  12 reveal that within the initial 200 m, vertical displacement and secondary stress values remain relatively low as the excavation face progresses towards the monitoring point at the 300th meter of the headgate, identified as U9. Even with only 100 m left to reach U9, these values stay close to their initial measurements. However, as the excavation face nears U9, both vertical displacements and secondary stresses start to rise rapidly. Upon reaching U9, these values reach U  = 14.7 mm and P =−3463 kPa. Beyond U9, vertical displacement continues to fluctuate rapidly until the 400th meter. From the 400th to the 500th meter, minimal changes are observed in vertical displacement at the remaining U9, stabilizing at U  = 26.4 mm. Similarly, vertical stress values decrease rapidly until the 400th meter after surpassing U9, then stabilize until the 500th meter. At the 500th meter, vertical stress values at the remaining U9 remain almost constant, reaching P =−3344 kPa (Table  10 ; Fig.  12 ).

The vertical displacement and secondary stress results at the monitoring point labeled U3, up to the 18th excavation of the 1-m coal face (stage 2), for the RBR support model are presented in Fig.  13  and Table  11 .

Figure 13. (a) Vertical displacements and (b) vertical secondary stresses in the gallery roof at the 429th m of the headgate (monitoring point U3), during gateroad excavation and RBR reinforcement in the A6 panel (Stage-2)

As seen in Table  11  and Fig.  13 , the longwall advancement in the model was executed at 1-m intervals. The monitored U3 point is located at the 429th meter of the gateroad. This point is initially positioned 18 m behind the excavation face. As excavation progresses in the longwall face, displacements and stresses at the U3 monitoring point begin to change. Displacement values, which initiate within the first 10 m, rapidly escalate within the final 8 m (Fig.  13 ). Upon the longwall excavation advancing 18 m and reaching the U3 monitoring point, the total displacement amounts to U  = 106 mm. While stress variations remain nearly constant up to the last 8 m, they subsequently decrease rapidly, culminating at a stress value of P =−1840 kPa when the excavation face reaches the U3 monitoring point (Table  11 ; Fig.  13 ).

3.2 In Situ Measurement Results

Continuous convergence measurements were taken from CO stations set up in the field to determine the performance of the existing SAS reinforcement in the 20-m zone in front of the coal face where production takes place and the subsequent 25-m zone where RBR is applied [ 12 ]. In the first 20-m section with SAS located just ahead of the coal excavation face, measurements were taken from three CO (CO-1, CO-2, and CO-3) stations, while in the following 25-m section with RBR, measurements were taken from five CO measurement stations (CO-4, CO-5, CO-6, CO-7, and CO-8). The measurement results obtained over time are presented in Fig.  14 , and those related to the distance from the coal face are depicted in Fig.  15 .

As depicted in Fig.  14 , the blue curves representing the SAS zone have experienced faster and larger deformations compared to the red curves representing the RBR zone. Convergence values in the SAS zone ranged from 234.46 to 391.60 mm, while in the RBR zone, they ranged from 105.78 to 203.94 mm. Due to the distance of 10 m between the last station CO-3 within the SAS zone and the starting point of the RBR zone, it is considered that the RBR zone affects this point within the SAS zone, resulting in convergence values measured at the CO-3 station to be lower than those at CO-1 and CO-2 stations.

Figure 14. Time-dependent convergence curves obtained in the SAS and RBR zones of the A6 headgate

Figure 15. Convergence curves as a function of distance from the coal face obtained in the SAS and RBR zones of the A6 headgate

Figure 15 considers the advance distances of the coal face. As depicted in the graph, in the SAS zone (blue curves), convergence values reach 391.60 mm at the CO-1 station, which is closest to the excavation face. Similarly, in the RBR zone, the convergence value measured at the CO-4 station, closest to the excavation face, is 203.94 mm. As the excavation face progresses into the RBR zone, the convergence values read at the other stations in this zone (CO-5, CO-6, CO-7, and CO-8) reach a maximum of 105.78 mm. This is attributed to the gallery roof at these stations being free of the influence of the SAS zone.

When considering the maximum displacement values observed under the same conditions for both zones, it is determined that the RBR-reinforced zone performs its duty of supporting the roof during coal production activities 52% more successfully than the SAS-reinforced zone.
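(For reference, the ratio of the reported maximum convergences is 203.94/391.60 ≈ 0.52; assuming the 52% figure is derived from this ratio, the maximum roof convergence in the RBR zone was roughly half of that measured in the SAS zone.)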

3.3 Numerical Modeling and In Situ Measurement Results Evaluation

In this section, the numerical modeling results and in situ measurement results of vertical displacement values associated with the advancement of the coal face have been compared on the same graph for SAS and RBR support systems (Fig.  16 ). Vertical displacement values from the numerical models created for SAS and RBR support systems were compared with convergence measurement results obtained in the field at the CO-2 station located in the middle of the SAS zone and the CO-6 station located in the middle of the RBR zone, relative to the monitored U3 observation point. These results are presented on the same graph in Fig.  16 .

Figure 16. Comparison of numerical modeling and in situ measurement results for vertical displacement values due to 18-m coal excavation for SAS and RBR

As seen in Fig.  16 , according to the numerical model results, a vertical displacement of 250 mm occurred in the roof over the 18-m advancement of the coal face for the SAS, while the in situ measurements indicate this value to be 363.76 mm. For the RBR, the numerical model results predict a vertical displacement of 51 mm over the 18-m advancement of the coal face, whereas the in situ measurements show this value to be 76.36 mm. Consequently, Fig.  16 reveals that both the numerical model results and the in situ measurement results indicate that the RBR reinforcement system performs its support function more effectively during the advancement of the coal face compared to the SAS reinforcement system.
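(As a rough check on these figures, the model under-predicts the measured roof displacement by about 31% for SAS (250 vs. 363.76 mm) and by about 33% for RBR (51 vs. 76.36 mm), taking a simple relative difference against the in situ values.)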

In Fig.  16 , differences between the vertical displacement results obtained from numerical analysis and in situ measurements are observed in both RBR and SAS systems. These differences are due to the local complex geological conditions of the measurement area and the unsystematic nature of mining activities. In other words, these differences are associated with problems hindering the systematic progress of mining activities in the A6 panel. While activities such as reinforcement design are systematically conducted in numerical analyses, real-world mining schedules are not always executed as planned due to issues like belt malfunctions, ventilation problems, and equipment breakdowns. Predicting and integrating such unforeseen circumstances into numerical models, especially in underground mining activities, is quite challenging. A similar situation holds for the Ömerler coal mine site discussed in this study. Therefore, the differences observed between the numerical analysis outputs and the actual measurement results in Fig.  16 can be attributed to irregular delays caused by unwanted issues in mining activities. In conclusion, the comparison presented in Fig.  16 demonstrates that the RBR system provides more effective support during the advancement of the coal face compared to the SAS system. This finding is supported by both the numerical model results and the in situ measurements.

4 Conclusions

In this study, the feasibility of rock bolting support in an underground coal mine gallery with a thick coal seam has been investigated through numerical modeling. In this context, the Ömerler underground coal mine area, belonging to the TKI-GLI which is located in the Tavşanlı district of Kütahya province, was selected as the study area. Rock mass and rock material properties were determined through in situ and laboratory studies conducted in the mine, and experimental, empirical, and numerical analyses were performed based on the obtained data. Numerical modeling was conducted using FLAC3D v6.0 finite difference method modeling software.

According to the design results obtained, the RBR and SAS were implemented in the pilot application area. In situ monitoring studies were carried out to evaluate the performance of the RBR and SAS systems in this pilot application area.

The results of the numerical modeling indicated that there was less displacement and less secondary stress change in the RBR-supported area compared to the SAS-supported area. Additionally, in situ measurements showed that the RBR more successfully supported the roof during coal production activities. It was determined that the RBR supported the roof 52% more effectively compared to the SAS.

These findings demonstrate that when evaluating the applicability of rock bolting support systems in underground galleries with thick coal seams in the Ömerler underground coal mine, the RBR system is a more effective solution. This study emphasizes the importance of more sustainable and secure support systems to enhance operational efficiency in the coal mining industry.

Kaygusuz K (2009) Energy and environmental issues relating to greenhouse gas emissions for sustainable development in Turkey. Renew Sustain Energy Rev 13:253–270. https://doi.org/10.1016/j.rser.2007.07.009


Jiang L, Xue D, Wei Z, Chen Z, Mirzayev M, Chen Y, Chen Z (2022) Coal decarbonization: a state-of-the-art review of enhanced hydrogen production in underground coal gasification. Energy Reviews 1:10004. https://doi.org/10.1016/j.enrev.2022.100004

Brodny J, Felka D, Tutak M (2023) Applying an automatic gasometry system and a fuzzy set theory to assess the state of gas hazard during the coal mining production process. Eng Sci 23:891. https://doi.org/10.30919/es891

Monika, Govil H, Guha S (2023) Underground mine deformation monitoring using Synthetic aperture radar technique: a case study of Rajgamar coal mine of Korba Chhattisgarh, India. J Appl Geophys 209:104899. https://doi.org/10.1016/j.jappgeo.2022.104899

International Energy Agency (IEA) (2022) World energy outlook. https://www.iea.org/reports/world-energy-outlook-2022 . Accessed 19 Jul 2023

Çelik A, Özçelik Y (2023) Investigation of the effect of caving height on the efficiency of the longwall top coal caving production method applied in inclined and thick coal seams by physical modeling. Int J Rock Mech Min Sci 162:105304. https://doi.org/10.1016/j.ijrmms.2022.105304

Brinzan D (2012) Fault analysis and operational reliability of longwall mining shearers. Environ Eng Manage J 11(7):1241–1246

Ghosh GK, Sivakumar C (2018) Application of underground microseismic monitoring for ground failure and secure longwall coal mining operation: a case study in an Indian mine. J Appl Geophys 150:21–39. https://doi.org/10.1016/j.jappgeo.2018.01.004

Mesutoğlu M, Özkan İ (2019) An evaluation on in-situ Schmidt Hardness Index and Point load strength test results performed in large scale coal face. Konya J Eng Sci 7(4):681–695. https://doi.org/10.36306/konjes.654446

Wang Y, He M, Yang J, Wang Q, Liu J, Tian X, Gao Y (2020) Case study on pressure-relief mining technology without advance tunneling and coal pillars in longwall mining. Tunn Undergr Sp 97:103236. https://doi.org/10.1016/j.tust.2019.103236

Mesutoglu M (2019) Determination of rock bolt and steel set behaviors used in control of longwall tailgate roof strata by numerical analysis. Dissertation, Konya Technical University

Ozkan I, Genis M, Uysal O, Mesutoglu M (2022) New technology for roof support of coal roadways in our national underground coal mining: Design of roof bolt systems. TUBITAK Project No:116M698, Final Report

Cao C, Nemcik J, Aziz N, Ren T (2012) Failure models of rock bolting underground mines. In: Proceedings of the 2012 Coal Operators’ Conference, 16-17 February, NSW, Australia

Kang H, Wu Y, Gao F, Lin J, Jiang P (2013) Fracture characteristics in rock bolts in underground coal mine roadways. Int J Rock Mech Min Sci 62:105–112. https://doi.org/10.1016/j.ijrmms.2013.04.006

Ghadimi M, Shahriar K, Jalalifar H (2015) A new analytical solution for the displacement of fully grouted rock bolt in rock joints and experimental and numerical verifications. Tunn Undergr Sp 50:143–151. https://doi.org/10.1016/j.tust.2015.07.014

Li X, Si G, Oh J, Corbett P, O’Sullivan T, Xiang Z, Aziz N, Mirzaghorbanali A (2022) Effect of pretension on the performance of cable bolts and its optimization in underground coal mines with various geological conditions. Int J Rock Mech Min Sci 152:105076. https://doi.org/10.1016/j.ijrmms.2022.105076

Chen J, Liu P, Liu L, Zeng B, Zhao H, Zhan C, Zhang J, Li D (2022) Anchorage performance of a modified cable anchor subjected to different joint opening conditions. Constr Build Mater 336:127558. https://doi.org/10.1016/j.conbuildmat.2022.127558

Peng SS, Tang DHY (1984) Roof bolting in underground mining: a state-of-the-art review. Int J Min Eng 2:1–42. https://doi.org/10.1007/BF00880855

Brown ET (1999) The evolution of support and reinforcement philosophy and practice for underground mining excavations. In: Proceedings of the International Symposium on Ground Support, 15-17 March, Kalgoorlie, Western Australia

Bieniawski ZT (1973) Engineering classification of jointed rock masses. Civil Eng South Afr 15(12):335–344


Ünal E (1983) Development of design guidelines and roof control standards for coal mine roofs. Dissertation, The Pennsylvania State University

Ünal E (1986) A rock-bolt design method, developed for roadways. In: Proceedings of 5th Coal Congress of Turkey, 3–9 May, Zonguldak, Turkey

Ünal E (1989) Support selection of mine roadways by means of a computer program. In: Proceedings of 30th US Symposium on Rock Mechanics, Morgantown, USA

Venkateswarlu V (1986) Geomechanics classification of coal measure rocks vis-a-vis roof supports. Dissertation, Indian School of Mines

Lowson A, Bieniawski ZT (2013) Critical assessment of RMR based tunnel design practices: a practical engineer’s approach, In: Proceedings of Rapid Excavation & Tunneling Conference, 23-26 June, Washington, USA

Barton N, Lien R, Lunde J (1974) Engineering classification of rock masses for the design of tunnel support. Rock Mech 6(4):189–236

Grimstad E, Barton N (1995) Rock mass classification and the use of NMT in India. In: Proceedings of Conference on Design and Construction of Underground Structures, 23-25 February, New Delhi, India

Deere DU, Peck RB, Parker HW, Monsees JE, Schmidt B (1970) Design of tunnel support systems: highway research record. Highway Res Board 339:26–33

Merritt AH (1972) Geologic prediction for underground excavations. In: Proceedings of North American Rapid Excavation and Tunneling Conference, 5-7 June, Chicago, USA

Panek LA (1964) Design for bolting stratified roof. Trans SME 229:113–119

Jiang JQ, Sun CJ, Yin ZD et al (2004) Study and practice of bottom slicing relieving mining in condition of trouble coal seams with high ground stress under deep coal mine. J China Coal Soc 29:1–6

Feng GR, Zhang XY, Li JJ et al (2009) Feasibility on the upward mining of the left-over coal above goaf with pillar supporting method. J China Coal Soc 34:726–730

Feng GR, Ren YF, Wang XX et al (2011) Experimental study on the upward mining of the left-over coal above gob area mined with caving method in Baijiazhuang Coal Mine. J China Coal Soc 36:544–550

Wang SF, Li XB, Wang SY (2017) Separation and fracturing in overlying strata disturbed by longwall mining in a mineral deposit seam. Eng Geol 226:257–266. https://doi.org/10.1016/j.enggeo.2017.06.015

Yang L, Yuqi R, Xinghai L, Nan W, Xiangyang J (2023) Numerical modeling and onsite detection analysis of upward mining feasibility of residual coal from multi–gobs in close–multiple coal seams. Min Metall Explor 40:1153–1169. https://doi.org/10.1007/s42461-023-00796-0

Celik R (2005) Developing of moving procedure for powered supports in Ömerler coal mine. Dissertation, Osmangazi University

Mesutoğlu M, Özkan İ (2019) In-situ application of Schmidt hammer test on a coal face with large-scale. Proceedings of the 14th International Congress on Rock Mechanics and Rock Engineering (ISRM 2019), September 13–18, 2019, Foz do Iguassu, Brazil, 3790. https://doi.org/10.1201/9780367823177

Mesutoğlu M, Özkan İ, Rodriguez-Dono A (2024) Determination by numerical modeling of stress-strain variations resulting from gallery cross-section changes in a longwall top coal caving panel. Konya J Eng Sci 12(1):231–250. https://doi.org/10.36306/konjes.1410892

Genis M, Aydan O (2007) Static and dynamic stability of a large underground opening. In: Proceedings of the 2nd Symposium on Underground Excavations for Transportation (in Turkish), 15-17 November, Istanbul, Turkey

Aydan Ö (2000) A new stress inference method for the stress state of Earth’s crust and its application. Bull Earth Sci 22:223–236

Pappas DM, Mark C (1993) Behavior of simulated longwall gob material. Department of the Interior, Bureau of Mines, RI: 9458-32, USA

Salamon M (1990) Mechanism of caving in longwall coal mining. Rock mechanics contributions and challenges. Proceedings of the 31st US symposium 161–168. https://doi.org/10.1201/9781003078944

He F, Xu X, Qin B, Li L, Lv K, Li X (2022) Study on deformation mechanism and control technology of surrounding rock during reuse of gob side entry retaining by roof pre-splitting. Eng Fail Anal 137:106271. https://doi.org/10.1016/j.engfailanal.2022.106271


Acknowledgements

The authors would like to thank the staff of the Turkish Coal Enterprises (TKI) and West Lignite Enterprise (GLI) for their assistance in the field studies.

Funding

Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK). This study was supported by TÜBİTAK under the 1001 programme (Project No: 116M698), and additional financial support was provided by the Directorate of the Research Fund of Selcuk University (Project number 16101009).

Author information

Authors and affiliations.

Department of Mining Engineering, Konya Technical University, Konya, Turkey

Mehmet Mesutoglu & Ihsan Ozkan


Corresponding author

Correspondence to Mehmet Mesutoglu .

Ethics declarations

Conflict of interest.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Mesutoglu, M., Ozkan, I. Evaluation and Comparison of Rock Bolting Versus Steel Arch Support Systems in Thick Coal Seam Underground Galleries: A Case Study. Mining, Metallurgy & Exploration (2024). https://doi.org/10.1007/s42461-024-00994-4


Received : 14 February 2024

Accepted : 24 April 2024

Published : 05 June 2024

DOI : https://doi.org/10.1007/s42461-024-00994-4


Keywords: Coal mining, Top coal caving
  • Systematic Review
  • Open access
  • Published: 29 May 2024

Risk factors of chronic postoperative pain after total knee arthroplasty: a systematic review

  • Junfei Li 1 ,
  • Tingyu Guan 1 ,
  • Yue Zhai 1 &
  • Yuxia Zhang 2  

Journal of Orthopaedic Surgery and Research volume  19 , Article number:  320 ( 2024 ) Cite this article


Relevant studies grading the evidence on risk factors of chronic pain after total knee arthroplasty (TKA) are lacking, and existing systematic evaluations have relied on quantitative methods alone. This review aimed to systematically identify risk factors of chronic postoperative pain following TKA and to evaluate the strength of the evidence underlying these correlations.

PubMed, Web of Science, Cochrane Library, Embase, and CINAHL databases were searched from initiation to September 2023. Cohort studies, case-control studies, and cross-sectional studies involving patients undergoing total knee replacement were included. A semi-quantitative approach was used to grade the strength of the evidence based on the number of investigations, the quality of the studies, and the consistency of the associations reported by the studies.

Thirty-two articles involving 18,792 patients were included in the final systematic review. Ten variables were found to be strongly associated with postoperative pain: age, body mass index (BMI), comorbidity status, preoperative pain, chronic widespread pain, preoperative adverse health beliefs, preoperative sleep disorders, central sensitization, preoperative anxiety, and preoperative function. Sixteen factors were identified as having inconclusive evidence.

Conclusions

This systematic review clarifies which risk factors could be involved in future research on TKA pain management for surgeons and patients. It highlights those factors that have been controversial or weakly correlated, emphasizing the need for further high-quality studies to validate them. Most crucially, it can furnish clinicians with vital information regarding high-risk patients and their clinical attributes, thereby aiding in the development of preventive strategies to mitigate postoperative pain following TKA.

Trial registration

This systematic review has been registered on the PROSPERO platform (CRD42023444097).

Introduction

Total knee arthroplasty (TKA) is the most common surgical intervention for patients with end-stage osteoarthritis [ 1 ]. Despite a positive outcome for most patients, a sizeable portion of individuals experience significant pain following TKA [ 2 ]. Previous studies showed that the percentage of patients with unfavorable long-term pain outcomes ranged from 10 to 34% following knee replacement [ 3 ]. The International Association for the Study of Pain (IASP) defines chronic postoperative pain (CPSP) as pain that persists for more than 3 months after surgery, excluding other causes (e.g., infection, surgical failure, recurrence of malignancy, etc.) [ 4 ]. In addition to disruption of daily activities brought on by the pain itself, adverse or chronic pain outcomes following joint replacement are of great concern to orthopedic surgeons and their patients. Chronic postoperative pain is also associated with deterioration in physical, functional, and mental domains, which implies significant personal, social, and healthcare costs with the rising prevalence of knee replacement surgeries [ 5 ].

Understanding the risk factors for chronic postsurgical pain can deepen clinical staff's knowledge of the field, support better clinical decision-making, and help patients reduce the risk of developing chronic pain. Previous pain guidelines have only recommended perioperative interventions without integrating risk factors [ 6 ]. Earlier systematic reviews that applied quantitative measures to identify predictors of persistent pain after TKA, without grading the evidence, may have produced results of limited quality [ 7 ].

Therefore, this study will conduct a systematic review and critical appraisal of the risk factors affecting chronic pain after TKA, and use the Newcastle-Ottawa Scale (NOS) and the Agency for Healthcare Research and Quality (AHRQ) checklist to quality rate the level of evidence in the included literature.

This article used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) statement to guide implementation and reporting [ 8 ].

Data sources and search strategy

Five databases were searched (PubMed, Web of Science, Cochrane Library, Embase, CINAHL) from database inception to July 2023. All pertinent keyword variations were used, including both the Medical Subject Headings (MeSH) terms of the various databases and the free-text versions of these terms. Reference lists of selected studies and reviews were searched to find additional publications on the subject. Detailed information about the search strategy is shown in Appendix 1.

Study selection and eligibility criteria

Studies meeting the following criteria were included: (1) cohort studies or case-control or cross-sectional studies; (2) patients undergoing total knee arthroplasty who are aged above 18 years old; (3) the outcome was defined as postoperative pain following total knee arthroplasty and follow-up had to be at least three months; (4) outcomes were predicted using preoperative, intraoperative or postoperative conditions. If total hip arthroplasty (THA) and total knee arthroplasty (TKA) patients were both included in the study, only TKA data were extracted. The exclusion criteria were as follows: (1) publications written in languages other than English and Chinese, (2) studies with incomplete methodology and full text not available. In addition, given the large number of possible confounding variables, cohort studies that failed to use a multivariate approach to assess risk factors were excluded.

Screening and data extraction

The titles and abstracts of all preliminarily identified studies were screened independently by two investigators (JL and TG) following the selection criteria. Any differences of opinion were settled by consensus or discussion with a third independent reviewer. If multiple publications were available, the most recent data were taken. To gather pertinent data, a predesigned electronic data extraction form was used. The following information was extracted: participant characteristics, risk factors, pain outcome measures, follow-up period, and study design.

Assessing the risk of bias

The risk of bias in each included study was independently assessed by two authors (JL and TG) using the Newcastle-Ottawa Quality Assessment Scale (NOS) and the checklist recommended by the Agency for Healthcare Research and Quality (AHRQ) [ 9 ].

The Newcastle-Ottawa Scale (NOS) is a widely used tool for evaluating case-control and cohort studies. It is composed of three main sections comprising a total of eight items, covering the selection of the study population, comparability, and exposure/outcome assessment. The NOS uses a semi-quantitative star system to rate study quality, with a maximum score of nine stars; studies were categorized as high quality (7–9 points), moderate quality (4–6 points), or low quality (0–3 points) [ 10 ]. To evaluate the quality of the cross-sectional studies, we used the checklist recommended by the Agency for Healthcare Research and Quality (AHRQ), which assesses the risk of bias in five domains: selection bias, implementation bias, follow-up bias, detection bias, and reporting bias. Items answered "no" or "unclear" were scored 0, and items answered "yes" were scored 1. Articles were rated as low (0–3), moderate (4–7), or high (8–11) quality [ 11 ].
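As a minimal sketch of how the score bands described above translate into quality categories, assuming integer scores as input (the function names are ours, introduced here only for illustration):

```python
def nos_quality(stars: int) -> str:
    """Map a Newcastle-Ottawa Scale star count (0-9) to a quality band."""
    if not 0 <= stars <= 9:
        raise ValueError("NOS scores range from 0 to 9 stars")
    if stars >= 7:
        return "high"      # 7-9 stars
    if stars >= 4:
        return "moderate"  # 4-6 stars
    return "low"           # 0-3 stars


def ahrq_quality(score: int) -> str:
    """Map an AHRQ checklist score (0-11, one point per 'yes' item) to a quality band."""
    if not 0 <= score <= 11:
        raise ValueError("AHRQ scores range from 0 to 11")
    if score >= 8:
        return "high"      # 8-11 points
    if score >= 4:
        return "moderate"  # 4-7 points
    return "low"           # 0-3 points


# Example: a cohort study with 6 NOS stars and a cross-sectional study scoring 9 on AHRQ.
print(nos_quality(6))   # moderate
print(ahrq_quality(9))  # high
```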

Data synthesis and analysis

Semi-quantitative methods were used to summarize the strength of the evidence supporting the association between risk factors and chronic postsurgical pain. The best evidence synthesis included variables that were examined using a multivariate approach in at least two studies and demonstrated a statistically significant association. Three criteria were used to evaluate the evidence on risk factors for chronic pain following total knee replacement: (1) the number of studies evaluating the variable; (2) the quality scores of the studies assessing the variable; (3) the consistency of the relationship between the factor and chronic postsurgical pain. Associations were deemed consistent when at least 75% of the studies evaluating a variable reported the same direction of association [ 12 ]. Variables analyzed with multivariate methods that yielded no association were also taken into account. The level of evidence for each risk factor was categorized as follows: (1) strong: consistent findings in ≥ 2 high-quality articles; (2) moderate: consistent results between 1 high-quality article and ≥ 1 moderate-quality article, or among ≥ 3 moderate- or low-quality articles; (3) inconclusive: observed associations were inconsistent, or the variable was assessed in only 1 high-quality study, < 3 moderate-quality studies, or only low-quality studies; (4) no association: no significant association was found in high-quality multivariate analyses, or at least 3 high-quality studies found no association in univariate analyses.
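The following sketch, with hypothetical names and a deliberately simplified treatment of the univariate "no association" rule, illustrates how the four evidence categories above could be applied to the set of studies assessing a single risk factor:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Study:
    quality: str    # "high", "moderate", or "low" (from NOS/AHRQ)
    direction: str  # "positive", "negative", or "none" (multivariate association)


def is_consistent(studies: List[Study]) -> bool:
    """At least 75% of the studies assessing the factor report the same direction."""
    directions = [s.direction for s in studies if s.direction != "none"]
    if not directions:
        return False
    most_common = max(directions.count(d) for d in set(directions))
    return most_common / len(studies) >= 0.75


def evidence_level(studies: List[Study]) -> str:
    """Simplified grading of one risk factor following the four categories above."""
    high_assoc = [s for s in studies if s.quality == "high" and s.direction != "none"]
    mod_or_low_assoc = [s for s in studies
                        if s.quality in ("moderate", "low") and s.direction != "none"]
    high_null = [s for s in studies if s.quality == "high" and s.direction == "none"]

    if is_consistent(studies) and len(high_assoc) >= 2:
        return "strong"
    if is_consistent(studies) and (
        (len(high_assoc) == 1 and len(mod_or_low_assoc) >= 1)
        or len(mod_or_low_assoc) >= 3
    ):
        return "moderate"
    if not high_assoc and high_null:
        return "no association"
    return "inconclusive"


# Example: three high-quality studies all reporting a positive association -> "strong".
print(evidence_level([Study("high", "positive")] * 3))
```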

Study identification

The database search returned 18,792 articles, and 7 additional relevant articles were identified from other sources. After duplicates were removed, 17,526 articles remained. Of these, 17,239 were excluded during initial screening of titles and abstracts, leaving 287 references for full-text review. Among these, 105 did not address the outcome of interest, 66 did not match the target population, the full text was unavailable for one study, and 61 were excluded for other reasons. A total of 32 studies were therefore included in the systematic review: five cross-sectional studies, one case-control study, and 26 cohort studies. The flowchart and reasons for exclusion are delineated in Fig. 1.

Fig. 1 Flowchart of study selection

Study characteristics

A total of 32,645 patients who underwent primary total knee arthroplasty were included across the studies (see Table 1). Sample sizes ranged from 71 to 11,373. The most commonly used outcome measurement instruments were the visual analog scale (VAS) (10 studies), the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain scale (8 studies), and the Numerical Rating Scale (NRS) (7 studies). Five studies included both total knee arthroplasty and total hip arthroplasty patients, from which only the TKA data were extracted. Follow-up lasted a minimum of 3 months and a maximum of 10 years. In total, 29 predictive factors associated with the development of chronic postsurgical pain after TKA were identified.

Methodological quality of included studies

The included literature was of high or moderate quality; no low-quality studies were included in the analysis. The quality of the cohort studies was evaluated with the NOS, with scores ranging from moderate (four) to high (nine). The case-control study received a score of six out of nine on the NOS, indicating a moderate level of evidence. The five cross-sectional studies were assessed with the AHRQ checklist: three received a high-quality rating and two a moderate rating, with scores ranging from 6 to 11. In studies rated as moderate quality, the most frequent reasons were confounding and measurement bias. Nine cohort studies did not report or control for confounders, which may have led to an elevated risk of confounding bias. Furthermore, four cross-sectional studies showed indications of measurement bias, and the handling of missing data was not disclosed in the publication. The quality evaluation of the included studies according to the NOS and the AHRQ checklist is shown in Appendix 2.

The level of evidence for risk factors

Twenty-nine risk factors associated with the incidence of chronic postsurgical pain were identified. The results of the best evidence analysis are presented in Table 2. Ten variables showed a significant association with the onset of chronic pain following total knee arthroplasty (TKA). Among demographic variables, strong evidence was found for age, body mass index (BMI), and comorbidity status. Among preoperative factors, strong evidence was observed for preoperative pain, chronic widespread pain, preoperative adverse health beliefs, preoperative sleep disorders, central sensitization, preoperative anxiety, and preoperative function. No intraoperative or postoperative factors were strongly associated with the development of chronic pain. Three factors showed a moderate association with the outcome: gender, preoperative depression, and pain trajectory. Finally, sixteen risk factors were rated as inconclusive, most of them being statistically linked to chronic pain after TKA in only one study.

A total of 32 studies were included in our review, comprising case-control, cohort, and cross-sectional designs. The grade of evidence was evaluated using the NOS, a quality assessment tool for cohort and case-control studies, and the AHRQ checklist, a quality assessment tool for cross-sectional studies. Overall, the quality of the included literature was high; articles rated as providing a moderate level of evidence were downgraded because of potential confounding or measurement bias. Twenty-nine risk factors associated with the development of chronic postsurgical pain were identified, of which ten showed a strong correlation, three a moderate correlation, and sixteen yielded inconclusive results.

We employed a semi-quantitative approach to evaluate the level of evidence for each risk factor and, in contrast to previous studies, identified two novel factors strongly associated with chronic pain following knee replacement surgery: preoperative sleep disturbances and preoperative adverse health beliefs.

Recent research using machine learning on a large sample has suggested that sleep problems have a significant impact on chronic pain [ 13 ]. Sleep activates the body's endogenous pain-modulation system, and sleep deprivation or sleep disturbances can impair this system [ 14 ]. A study investigating the relationship between sleep quality before total knee arthroplasty and chronic postsurgical pain (CPSP) found that individuals who experienced sleep problems before surgery were more likely to report higher pain scores three months after the procedure [ 15 ]. This highlights the importance of addressing pre-existing sleep issues before surgery to minimize the risk of chronic postsurgical pain.

Health beliefs are thoughts, attitudes, or expectations that influence the experience of health and illness and related behaviors. Predictors such as illness perception, pain catastrophizing, preoperative expectations, and coping attitudes were grouped into the category of preoperative health beliefs in our review. Seven high-quality articles and one moderate-quality article demonstrated a statistically significant correlation between preoperative adverse health beliefs and chronic postsurgical pain [ 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 ]. Research has shown that patients with greater preoperative pain catastrophizing are more likely to suffer moderate to severe pain after surgery. A study by Giusti et al. showed that behavioral factors can predict pain and functional outcomes up to 12 months after surgery and partially mediate the relationship between catastrophizing and subsequent pain and function [ 24 ]. Furthermore, a cohort study identified psychological risk factors that may hinder the adoption of appropriate pain coping strategies and lead to the development of chronic postsurgical pain.

Our review identified sixteen factors with insufficient evidence: on critical appraisal, each was statistically associated with CPSP in only one study and lacked support from the wider literature. This highlights the need to validate these under-evidenced factors in future studies specifically investigating their association with chronic pain. Moreover, it is important to prioritize factors backed by robust evidence and to develop interventional clinical protocols based on these high-risk factors, so as to provide comprehensive guidance to clinicians and nurses.

Limitations

This study has several limitations. We only included patients undergoing primary TKA and excluded those undergoing revision surgery or unicompartmental arthroplasty; our findings therefore may not generalize to other patient groups.

One of the major challenges in our study was the heterogeneity in the design of the included studies. We also found variations in the outcome indicators and measurement techniques used, which might account for the discrepancies in the results and hinder the integration of these findings.

Furthermore, some of the studies analyzed in this review did not adjust for potential confounders in their analyses, and confounding could therefore have biased our findings to some extent. We recommend that future studies take these factors into consideration when analyzing their results.

Clinical implications

This systematic review can inform future personalized pain prevention and management measures. Enhanced monitoring of patient-reported pain before and early after surgery may enable early detection of, and potentially early intervention for, patients at risk of CPSP. Early identification and targeted treatment of pain may reduce pain and prevent long-term disability. Greater awareness of the importance of biological, sociocultural, psychological, physical, and clinical factors will help interventions to be implemented more effectively.

This systematic review assessed the risk factors contributing to the emergence of chronic pain after total knee arthroplasty and appraised the supporting evidence semi-quantitatively. The analysis highlights, for surgeons and patients alike, potential risk factors that deserve exploration in future TKA pain management research, particularly those that remain controversial or show weak correlations. Importantly, it underscores the need for additional high-quality studies to confirm these factors, thereby equipping clinicians with crucial knowledge about high-risk patients and their clinical characteristics. In turn, this knowledge can contribute to the formulation of effective preventive measures aimed at reducing postoperative pain following TKA.

Data availability

No datasets were generated or analysed during the current study.

References

1. Hamilton D, Henderson GR, Gaston P, MacDonald D, Howie C, Simpson AH. Comparative outcomes of total hip and knee arthroplasty: a prospective cohort study. Postgrad Med J. 2012;88(1045):627–31.

2. Fuzier R, Rousset J, Bataille B, Salces-y-Nédéo A, Maguès JP. One half of patients reports persistent pain three months after orthopaedic surgery. Anaesth Crit Care Pain Med. 2015;34(3):159–64.

3. Beswick AD, Wylde V, Gooberman-Hill R, Blom A, Dieppe P. What proportion of patients report long-term pain after total hip or knee replacement for osteoarthritis? A systematic review of prospective studies in unselected patients. BMJ Open. 2012;2(1):e000435.

4. Schug SA, Lavand'homme P, Barke A, Korwisi B, Rief W, Treede RD. The IASP classification of chronic pain for ICD-11: chronic postsurgical or posttraumatic pain. Pain. 2019;160(1):45–52.

5. Kim DH, Pearson-Chauhan KM, McCarthy RJ, Buvanendran A. Predictive factors for developing chronic pain after total knee arthroplasty. J Arthroplasty. 2018;33(11):3372–8.

6. Wainwright TW, Gill M, McDonald DA, Middleton RG, Reed M, Sahota O, et al. Consensus statement for perioperative care in total hip replacement and total knee replacement surgery: enhanced recovery after surgery (ERAS®) Society recommendations. Acta Orthop. 2020;91(1):3–19.

7. Lewis GN, Rice DA, McNair PJ, Kluger M. Predictors of persistent pain after total knee arthroplasty: a systematic review and meta-analysis. Br J Anaesth. 2015;114(4):551–61.

8. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

9. Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. 2010;25(9):603–5.

10. Martínez-González L, Fernández-Villa T, Molina AJ, Delgado-Rodríguez M, Martín V. Incidence of anorexia nervosa in women: a systematic review and meta-analysis. Int J Environ Res Public Health. 2020;17(11).

11. Liu L, Cai XC, Sun XY, Zhou YQ, Jin MZ, Wang J, et al. Global prevalence of metabolic syndrome in patients with psoriasis in the past two decades: current evidence. J Eur Acad Dermatol Venereol. 2022;36(11):1969–79.

12. Gosselt AN, Slooter AJ, Boere PR, Zaal IJ. Risk factors for delirium after on-pump cardiac surgery: a systematic review. Crit Care. 2015;19(1):346.

13. Miettinen T, Mäntyselkä P, Hagelberg N, Mustola S, Kalso E, Lötsch J. Machine learning suggests sleep as a core factor in chronic pain. Pain. 2021;162(1):109–23.

14. Haack M, Simpson N, Sethna N, Kaur S, Mullington J. Sleep deficiency and chronic pain: potential underlying mechanisms and clinical implications. Neuropsychopharmacology. 2020;45(1):205–16.

15. Luo ZY, Li LL, Wang D, Wang HY, Pei FX, Zhou ZK. Preoperative sleep quality affects postoperative pain and function after total joint arthroplasty: a prospective cohort study. J Orthop Surg Res. 2019;14(1):378.

16. Yan Z, Liu M, Wang X, Wang J, Wang Z, Liu J, et al. Construction and validation of machine learning algorithms to predict chronic post-surgical pain among patients undergoing total knee arthroplasty. Pain Manag Nurs. 2023.

17. Lindberg MF, Miaskowski C, Rustøen T, Cooper BA, Aamodt A, Lerdal A. Preoperative risk factors associated with chronic pain profiles following total knee arthroplasty. Eur J Pain. 2021;25(3):680–92.

18. Shim J, McLernon DJ, Hamilton D, Simpson HA, Beasley M, Macfarlane GJ. Development of a clinical risk score for pain and function following total knee arthroplasty: results from the TRIO study. Rheumatol Adv Pract. 2018;2(2):rky021.

19. Rice DA, Kluger MT, McNair PJ, Lewis GN, Somogyi AA, Borotkanics R, et al. Persistent postoperative pain after total knee arthroplasty: a prospective cohort study of potential risk factors. Br J Anaesth. 2018;121(4):804–12.

20. Yakobov E, Scott W, Stanish W, Dunbar M, Richardson G, Sullivan M. The role of perceived injustice in the prediction of pain and function after total knee arthroplasty. Pain. 2014;155(10):2040–6.

21. Sullivan M, Tanzer M, Reardon G, Amirault D, Dunbar M, Stanish W. The role of presurgical expectancies in predicting pain and function one year following total knee arthroplasty. Pain. 2011;152(10):2287–93.

22. Riddle DL, Wade JB, Jiranek WA, Kong X. Preoperative pain catastrophizing predicts pain outcome after knee arthroplasty. Clin Orthop Relat Res. 2010;468(3):798–806.

23. Larsen DB, Laursen M, Edwards RR, Simonsen O, Arendt-Nielsen L, Petersen KK. The combination of preoperative pain, conditioned pain modulation, and pain catastrophizing predicts postoperative pain 12 months after total knee arthroplasty. Pain Med. 2021;22(7):1583–90.

24. Giusti EM, Manna C, Varallo G, Cattivelli R, Manzoni GM, Gabrielli S, et al. The predictive role of executive functions and psychological factors on chronic pain after orthopaedic surgery: a longitudinal cohort study. Brain Sci. 2020;10(10).

Funding

This study was supported by the National Key R&D Programmes (NKPs) subproject of China (Grant No. 2020YFC2008404-3).

Author information

Authors and affiliations

Fudan University, Shanghai, China

Junfei Li, Tingyu Guan & Yue Zhai

Zhongshan Hospital, Shanghai, China

Yuxia Zhang


Contributions

Study concept and design: Junfei Li, Yuxia Zhang. Data acquisition, analysis, or interpretation: Junfei Li, Tingyu Guan. Quality assessment: Junfei Li, Tingyu Guan. Manuscript preparation: Junfei Li. Critical revision of the manuscript: Yue Zhai, Yuxia Zhang. Study supervision and funding acquisition: Yuxia Zhang. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Yuxia Zhang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Li, J., Guan, T., Zhai, Y. et al. Risk factors of chronic postoperative pain after total knee arthroplasty: a systematic review. J Orthop Surg Res 19 , 320 (2024). https://doi.org/10.1186/s13018-024-04778-w


Received : 03 April 2024

Accepted : 02 May 2024

Published : 29 May 2024

DOI : https://doi.org/10.1186/s13018-024-04778-w


Keywords

  • Chronic pain
  • Pain, postoperative
  • Arthroplasty, replacement, knee
  • Risk factor

Journal of Orthopaedic Surgery and Research

ISSN: 1749-799X
