Gen Ed Writes: Writing Across the Disciplines at Harvard College

Comparative Analysis

What It Is

Comparative analysis asks writers to make an argument about the relationship between two or more texts. Beyond that, there's a lot of variation, but three overarching kinds of comparative analysis stand out:

  • Coordinate (A ↔ B): In this kind of analysis, two (or more) texts are being read against each other in terms of a shared element, e.g., a memoir and a novel, both by Jesmyn Ward; two sets of data for the same experiment; a few op-ed responses to the same event; two YA books written in Chicago in the 2000s; a film adaptation of a play; etc.
  • Subordinate (A → B) or (B → A): Using a theoretical text (as a "lens") to explain a case study or work of art (e.g., how Anthony Jack's The Privileged Poor can help explain divergent experiences among students at elite four-year private colleges who are coming from similar socio-economic backgrounds), or using a work of art or case study (as a "test") to probe a theory's usefulness or limitations (e.g., using coverage of recent incidents of gun violence or legislation in the U.S. to confirm or question the currency of Carol Anderson's The Second).
  • Hybrid [A → (B ↔ C)] or [(B ↔ C) → A], i.e., using coordinate and subordinate analysis together. For example, using Jack to compare or contrast the experiences of students at elite four-year institutions with students at state universities and/or community colleges; or looking at gun culture in other countries and/or other timeframes to contextualize or generalize Anderson's main points about the role of the Second Amendment in U.S. history.

"In the wild," these three kinds of comparative analysis represent increasingly complex—and scholarly—modes of comparison. Students can of course compare two poems in terms of imagery or two data sets in terms of methods, but in each case the analysis will eventually be richer if the students have had a chance to encounter other people's ideas about how imagery or methods work. At that point, we're getting into a hybrid kind of reading (or even into research essays), especially if we start introducing different approaches to imagery or methods that are themselves being compared along with a couple (or few) poems or data sets.

Why It's Useful

In the context of a particular course, each kind of comparative analysis has its place and can be a useful step up from single-source analysis. Intellectually, comparative analysis helps overcome the "n of 1" problem that can face single-source analysis. That is, a writer drawing broad conclusions about the influence of the Iranian New Wave based on one film is relying entirely—and almost certainly too much—on that film to support those findings. In the context of even just one more film, though, the analysis is suddenly more likely to arrive at one of the best features of any comparative approach: both films will be more richly experienced than they would have been in isolation, and the themes or questions in terms of which they're being explored (here the general question of the influence of the Iranian New Wave) will lead to conclusions that are less at risk of oversimplification.

For scholars working in comparative fields or through comparative approaches, these features of comparative analysis animate their work. To borrow from a stock example in Western epistemology, our concept of "green" isn't based on a single encounter with something we intuit or are told is "green." Not at all. Our concept of "green" is derived from a complex set of experiences of what others say is green, or what's labeled green, or what seems to be something that's neither blue nor yellow but kind of both, and so on. Comparative analysis essays offer us the chance to engage with that process—even if only enough to help us see where a more in-depth exploration with a higher and/or more diverse "n" might lead. In that sense, from the standpoint of both the subject matter students are exploring through writing and the complexity of the genre of writing they're using to explore it, comparative analysis forms a bridge of sorts between single-source analysis and research essays.

Typical learning objectives for single-source essays: formulate analytical questions and an arguable thesis, establish stakes of an argument, summarize sources accurately, choose evidence effectively, analyze evidence effectively, define key terms, organize argument logically, acknowledge and respond to counterargument, cite sources properly, and present ideas in clear prose.

Common types of comparative analysis essays and related types: two works in the same genre; two works from the same period (but in different places or in different cultures); a work adapted into a different genre or medium; two theories treating the same topic; a theory and a case study or other object; etc.

How to Teach It: Framing + Practice

Framing multi-source writing assignments (comparative analysis, research essays, multi-modal projects) is likely to overlap a great deal with "Why It's Useful" (see above), because the range of reasons why we might use these kinds of writing in academic or non-academic settings is itself the reason why they so often appear later in courses. In many courses, they're the best vehicles for exploring the complex questions that arise once we've been introduced to the course's main themes, core content, leading protagonists, and central debates.

For comparative analysis in particular, it's helpful to frame the assignment's process and how it will help students successfully navigate the challenges and pitfalls presented by the genre. Ideally, this will mean students have time to identify what each text seems to be doing, take note of apparent points of connection between different texts, and start to imagine how those points of connection (or the absence thereof)

  • complicates or upends their own expectations or assumptions about the texts
  • complicates or refutes the expectations or assumptions about the texts presented by a scholar
  • confirms and/or nuances expectations and assumptions they themselves hold or scholars have presented
  • presents entirely unforeseen ways of understanding the texts

—and all with implications for the texts themselves or for the axes along which the comparative analysis took place. If students know that this is where their ideas will be heading, they'll be ready to develop those ideas and engage with the challenges that comparative analysis presents in terms of structure (See "Tips" and "Common Pitfalls" below for more on these elements of framing).

Like single-source analyses, comparative essays have several moving parts, and giving students practice here means adapting the sample sequence laid out on the "Formative Writing Assignments" page. Three areas that have already been mentioned above are worth noting:

  • Gathering evidence: Depending on what your assignment is asking students to compare (or in terms of what), students will benefit greatly from structured opportunities to create inventories or data sets of the motifs, examples, trajectories, etc., shared (or not shared) by the texts they'll be comparing. See the sample exercises below for a basic example of what this might look like.
  • Why it matters: Moving beyond "x is like y but also different" or even "x is more like y than we might think at first" is what moves an essay from being "compare/contrast" to being a comparative analysis. It's also a move that can be hard to make and that will often evolve over the course of an assignment. A great way to get feedback from students about where they are on this front? Ask them to start considering early on why their argument "matters" to different kinds of imagined audiences (while they're just gathering evidence), and again as they develop their thesis, and again as they're drafting their essays. (Cover letters, for example, are a great place to ask writers to imagine how a reader might be affected by reading their argument.)
  • Structure: Having two texts on stage at the same time can suddenly feel a lot more complicated for any writer who's used to having just one at a time. Giving students a sense of what the most common patterns (AAA / BBB, ABABAB, etc.) are likely to be can help them imagine, even if provisionally, how their argument might unfold over a series of pages. See "Tips" and "Common Pitfalls" below for more information on this front.

Sample Exercises and Links to Other Resources

Tips

  • Try to keep students from thinking of a proposed thesis as a commitment. Instead, help them see it as more of a hypothesis that has emerged out of readings, discussion, and analytical questions, and that they'll now test through an experiment: writing their essay. When students see writing as part of the process of inquiry—rather than just the result—and when that process is committed to acknowledging and adapting itself to evidence, writing assignments become more scientific, more ethical, and more authentic.
  • Have students create an inventory of touch points between the two texts early in the process.
  • Ask students to make the case—early on and at points throughout the process—for the significance of the claim they're making about the relationship between the texts they're comparing.

Common Pitfalls

  • For coordinate kinds of comparative analysis, a common pitfall is tied to thesis and evidence. Basically, it's a thesis that tells the reader that there are "similarities and differences" between two texts, without telling the reader why it matters that these two texts have or don't have these particular features in common. This kind of thesis is stuck at the level of description or positivism, and it's not uncommon when a writer is grappling with the complexity that can in fact accompany the "taking inventory" stage of comparative analysis. The solution is to make the "taking inventory" stage part of the process of the assignment. When this stage comes before students have formulated a thesis, that formulation is then able to emerge out of a comparative data set, rather than the data set emerging in terms of their thesis (which can lead to confirmation bias, or frequency illusion, or—just for the sake of streamlining the process of gathering evidence—cherry picking).
  • For subordinate kinds of comparative analysis, a common pitfall is tied to how much weight is given to each source. Having students apply a theory (in a "lens" essay) or weigh the pros and cons of a theory against case studies (in a "test a theory" essay) can be a great way to help them explore the assumptions, implications, and real-world usefulness of theoretical approaches. The pitfall of these approaches is that they can quickly lead to the same biases we saw above. Making sure that students know they should engage with counterevidence and counterargument, and that "lens" / "test a theory" approaches often balance each other out in any real-world application of theory, is a good way to get out in front of this pitfall.
  • For any kind of comparative analysis, a common pitfall is structure. Every comparative analysis asks writers to move back and forth between texts, and that can pose a number of challenges, including: what pattern the back and forth should follow and how to use transitions and other signposting to make sure readers can follow the overarching argument as the back and forth is taking place. Here's some advice from an experienced writing instructor to students about how to think about these considerations:

A quick note on structure

     Most of us have encountered the question of whether to adopt what we might term the “A→A→A→B→B→B” structure or the “A→B→A→B→A→B” structure. Do we make all of our points about text A before moving on to text B? Or do we go back and forth between A and B as the essay proceeds? As always, the answers to our questions about structure depend on our goals in the essay as a whole. In a “similarities in spite of differences” essay, for instance, readers will need to encounter the differences between A and B before we offer them the similarities (A-differences → B-differences → A-similarities → B-similarities). If, rather than subordinating differences to similarities, you are subordinating text A to text B (using A as a point of comparison that reveals B’s originality, say), you may be well served by the “A→A→A→B→B→B” structure.

     Ultimately, you need to ask yourself how many “A→B” moves you have in you.  Is each one identical?  If so, you may wish to make the transition from A to B only once (“A→A→A→B→B→B”), because if each “A→B” move is identical, the “A→B→A→B→A→B” structure will appear to involve nothing more than directionless oscillation and repetition.  If each is increasingly complex, however—if each AB pair yields a new and progressively more complex idea about your subject—you may be well served by the “A→B→A→B→A→B” structure, because in this case it will be visible to readers as a progressively developing argument.

Advice on Timing

As we discussed in "Advice on Timing" on the single-source analysis page, that timeline roughly follows the "Sample Sequence of Formative Assignments for a 'Typical' Essay" outlined under "Formative Writing Assignments," and it spans about 5–6 steps or 2–4 weeks.

Comparative analysis assignments have a lot of the same DNA as single-source essays, but they potentially bring more reading into play and ask students to engage in more complicated acts of analysis and synthesis during the drafting stages. With that in mind, closer to 4 weeks is probably a good baseline for many comparative analysis assignments. For sections that meet once per week, the timeline will probably need to expand a little past the 4-week mark (ideally), or some of the steps will need to be combined or done asynchronously.

What It Can Build Up To

Comparative analyses can build up to other kinds of writing in a number of ways. For example:

  • They can build toward other kinds of comparative analysis, e.g., students can be asked to choose an additional source to complicate their conclusions from a previous analysis, or they can be asked to revisit an analysis using a different axis of comparison, such as race instead of class. (These approaches are akin to moving from a coordinate or subordinate analysis to more of a hybrid approach.)
  • They can scaffold up to research essays, which in many instances are an extension of a "hybrid comparative analysis."
  • Like single-source analyses, in a course where students will take a "deep dive" into a source or topic for their capstone, they can let students "try on" a theoretical approach, genre, or time period to see if it's indeed something they want to research more fully.

Sociology Group

How to Do Comparative Analysis in Research (Examples)

Comparative analysis is a method widely used in social science. It compares two or more items with the aim of uncovering and discovering new ideas about them. It often compares and contrasts social structures and processes around the world to grasp general patterns, and it tries to understand and explain every element of the data being compared.

Comparative Analysis in Social Science Research

We compare and contrast in our daily lives, so it is natural to compare and contrast cultures and human societies. We often hear that ‘our culture is better than theirs’ or ‘their lifestyle is better than ours’. In social science, researchers compare societies described as primitive, barbarian, civilized, and modern in order to understand and trace the evolutionary changes that societies and their people undergo. Comparison is used not only to understand evolutionary processes but also to identify the differences, changes, and connections between societies.

Most social scientists engage in comparative analysis. As Macfarlane has observed, comparisons in history are typically made across time, whereas in the other social sciences they are made predominantly across space: the historian takes their own society, compares it with a past society, and analyzes how far the two differ from each other.

The comparative method of social research is a product of 19th-century sociology and social anthropology. Sociologists such as Emile Durkheim, Herbert Spencer, and Max Weber used comparative analysis in their work. Max Weber, for example, compared the Protestants of Europe with Catholics, and also with other religious traditions such as Islam, Hinduism, and Confucianism.

To make a systematic comparison, we need to attend to several elements of the method.

1. Methods of comparison

In social science, we can make comparisons in different ways, depending on the topic and the field of study. Emile Durkheim, for example, compared societies in terms of organic and mechanical solidarity, and he provided three different approaches to the comparative method:

  • The first approach is to select one particular society in a fixed period and identify the relationships, connections, and differences that exist within that society alone, such as its religious practices, traditions, laws, and norms.
  • The second approach is to consider various societies that have common or similar characteristics but vary in some ways. We can select societies at a specific period, or societies from different periods, that share common characteristics. For example, we can take European and American societies in the 20th century, which are broadly similar, and compare and contrast them in terms of law, custom, and tradition.
  • The third approach he envisaged is to take different societies from different times that may share some characteristics or may show revolutionary changes. For example, we can compare modern and primitive societies, which show us revolutionary social change.

2. The unit of comparison

We cannot compare every aspect of society; there are many things that simply cannot be compared. The success of the comparative method rests on the unit, or element, that we select to compare, and we can only compare things that have some attributes in common. For example, we can compare the family system in America with the family system in Europe, but we cannot compare food habits in China with the divorce rate in America. So the next thing to remember is the unit of comparison: select it with the utmost care.

3. The motive of comparison

Comparative analysis is one method of study among the many available to the social scientist. Researchers who use the comparative method must know on what grounds they are using it; they have to consider its strengths, weaknesses, and limitations, and they must know how to carry out the analysis.

Steps of the comparative method

1. Setting up a unit of comparison

As mentioned earlier, the first step is to determine the unit of comparison for your study and to consider all of its dimensions. This is where you set out the two things you need to compare so that you can analyze them properly. It is not an easy step: it has to be done systematically and scientifically, with proper methods and techniques. You have to set your objectives and variables, make assumptions, ask yourself what you need to study, or formulate a hypothesis for your analysis.

The best frames of reference are built from explicit sources rather than from your own musings or perceptions. To do that, you can select attributes of the societies, such as marriage, law, customs, and norms; by doing this you can easily compare and contrast the two societies you selected for your study. You can then pose questions such as: Are the marriage practices of Catholics different from those of Protestants? Do men and women get an equal voice in their choice of partner? You can set as many questions as you want, because they will help uncover the truth about that particular topic. A comparative analysis must have such attributes to study, and a social scientist who wishes to compare must develop the research questions that come to mind; a study without them will not be a fruitful one.

2. Grounds of comparison

The grounds of comparison should be understandable to the reader. You must explain why you selected these units for your comparison; it is natural for a reader to ask why you chose this particular society and not another. If a social scientist chooses a primitive Asian society and a primitive Australian society for comparison, they must make the grounds of comparison clear to the readers. The comparison in your work should be self-explanatory, without complications.

If you choose two particular societies for your comparative analysis, you must convey to the reader what you intend by the choice and your reason for including each society in the analysis.

3. Report or thesis

The main element of the comparative analysis is the thesis, or report. The report must contain your whole frame of reference: your research questions, the objectives of your topic, the characteristics of your two units of comparison, the variables in your study, and, last but not least, your findings and conclusion. The findings must be self-explanatory, because the reader must understand to what extent the units are connected and how they differ. For example, in his theory of the division of labour, Emile Durkheim distinguished organic solidarity from mechanical solidarity, identifying primitive society with mechanical solidarity and modern society with organic solidarity. In the same way, you have to state your findings in the thesis.

4. Relationships and linking one point to another

Your paper must link each point in the argument; without that, the reader cannot follow the logical and rational progression of your analysis. In a comparative analysis you need to relate the ‘x’ and the ‘y’ in your paper (x and y being the two units or things in your comparison). To do that, you can use connectors such as ‘likewise’, ‘similarly’, and ‘on the contrary’. For example, comparing primitive and modern society, we might write: ‘In primitive society the division of labour is based on gender and age; in modern society, on the other hand, it is based on a person’s skill and knowledge.’

Demerits of comparison

Comparative analysis is not always successful; it has limitations. Its broad use can easily create the impression that it is a firmly established, smooth, and unproblematic method of investigation which, thanks to its apparently logical status, can produce dependable knowledge once certain technical preconditions are adequately met.

Perhaps the most fundamental issue concerns the independence of the units chosen for comparison. As different kinds of entities come to be analyzed, there is frequently an implicit assumption of their independence and a quiet tendency to disregard the mutual influences and common impacts among the units.

Another basic issue, with broad ramifications, concerns the choice of the units being analyzed. Far from being an innocent or simple task, the choice of comparison units is a critical and delicate matter. The danger is that the descriptions of the cases chosen for comparison with the principal one tend to become overly streamlined, shallow, and stylised, with distorted arguments and conclusions following from them.

Nevertheless, comparative analysis remains a strategy with exceptional benefits, above all its capacity to make us aware of the limits of our own minds and to guard against the weaknesses and harmful consequences of localism and provincialism. We may also have something to learn from historians' hesitation in using comparison and from their respect for the uniqueness of contexts and the histories of peoples. Above all, by making comparisons we discover the underlying and previously undiscovered connections and differences that exist in society.


What is comparative analysis? A complete guide

Last updated 18 April 2023 · Reviewed by Jean Kaluza

Comparative analysis is a valuable tool for acquiring deep insights into your organization’s processes, products, and services so you can continuously improve them. 

Similarly, if you want to streamline, price appropriately, and ultimately be a market leader, you’ll likely need to draw on comparative analyses quite often.

When faced with multiple options or solutions to a given problem, a thorough comparative analysis can help you compare and contrast your options and make a clear, informed decision.

If you want to get up to speed on conducting a comparative analysis or need a refresher, here’s your guide.


What exactly is comparative analysis?

A comparative analysis is a side-by-side comparison that systematically compares two or more things to pinpoint their similarities and differences. The focus of the investigation might be conceptual—a particular problem, idea, or theory—or perhaps something more tangible, like two different data sets.

For instance, you could use comparative analysis to investigate how your product features measure up to the competition.

After a successful comparative analysis, you should be able to identify strengths and weaknesses and clearly understand which product is more effective.

You could also use comparative analysis to examine different methods of producing that product and determine which way is most efficient and profitable.

The potential applications for using comparative analysis in everyday business are almost unlimited. That said, a comparative analysis is most commonly used to examine:

  • Emerging trends and opportunities (new technologies, marketing)
  • Competitor strategies
  • Financial health
  • Effects of trends on a target audience

Why is comparative analysis so important?

Comparative analysis can help narrow your focus so your business pursues the most meaningful opportunities rather than attempting dozens of improvements simultaneously.

A comparative approach also helps frame up data to illuminate interrelationships. For example, comparative research might reveal nuanced relationships or critical contexts behind specific processes or dependencies that wouldn’t be well-understood without the research.

For instance, if your business compares the cost of producing several existing products relative to which ones have historically sold well, that should provide helpful information once you’re ready to look at developing new products or features.

Comparative vs. competitive analysis—what's the difference?

Comparative analysis is generally divided into three subtypes, using quantitative or qualitative data and then extending the findings to a larger group. These include:

  • Pattern analysis: identifying patterns or recurrences of trends and behavior across large data sets.
  • Data filtering: analyzing large data sets to extract an underlying subset of information. It may involve rearranging, excluding, and apportioning comparative data to fit different criteria (see the short sketch after this list).
  • Decision tree: flowcharting to visually map and assess potential outcomes, costs, and consequences.
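To make the first two subtypes concrete, here is a minimal, hypothetical sketch in Python using pandas. The product data, column names, and the 5% threshold are all invented for illustration; they are not part of any prescribed workflow.

```python
import pandas as pd

# Hypothetical monthly figures for two product lines (all numbers invented).
df = pd.DataFrame({
    "product": ["A", "A", "A", "B", "B", "B"],
    "month":   ["Jan", "Feb", "Mar", "Jan", "Feb", "Mar"],
    "units":   [120, 135, 150, 200, 190, 180],
    "returns": [6, 7, 5, 18, 21, 25],
})

# Data filtering: keep only the rows where the return rate exceeds 5%.
df["return_rate"] = df["returns"] / df["units"]
flagged = df[df["return_rate"] > 0.05]

# Pattern analysis: compare the average return rate of each product line.
pattern = df.groupby("product")["return_rate"].mean()

print(flagged)
print(pattern)
```

A decision tree, the third subtype, is usually sketched as a flowchart of options, costs, and outcomes rather than computed, so it is not shown here.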

In contrast to these subtypes, competitive analysis is a type of comparative analysis in which you deeply research one or more of your industry competitors. In this case, you're using qualitative research to explore what the competition is up to across one or more dimensions.

For example:

  • Service delivery: metrics like Net Promoter Score indicate customer satisfaction levels.
  • Market position: the share of the market that the competition has captured.
  • Brand reputation: how well-known or recognized your competitors are within their target market.

Tips for optimizing your comparative analysis

Conduct original research

Thorough, independent research is a significant asset when doing comparative analysis. It provides evidence to support your findings and may present a perspective or angle not considered previously. 

Make analysis routine

To get the maximum benefit from comparative research, make it a regular practice, and establish a cadence you can realistically stick to. Some business areas you could plan to analyze regularly include:

  • Profitability
  • Competition

Experiment with controlled and uncontrolled variables

In addition to simply comparing and contrasting, explore how different variables might affect your outcomes.

For example, a controllable variable would be offering a seasonal feature like a shopping bot to assist in holiday shopping or raising or lowering the selling price of a product.

Uncontrollable variables include weather, changing regulations, the current political climate, or global pandemics.

Put equal effort into each point of comparison

Most people enter into comparative research with a particular idea or hypothesis they already want to validate. For instance, you might be trying to show that launching a new service is worthwhile, so you may be disappointed if your analysis results don't support your plan.

However, in any comparative analysis, try to maintain an unbiased approach by spending equal time debating the merits and drawbacks of any decision. Ultimately, this will be a practical, more long-term sustainable approach for your business than focusing only on the evidence that favors pursuing your argument or strategy.

Writing a comparative analysis in five steps

To put together a coherent, insightful analysis that goes beyond a list of pros and cons or similarities and differences, try organizing the information into these five components:

1. Frame of reference

Here is where you provide context. First, what driving idea or problem is your research anchored in? Then, for added substance, cite existing research or insights from a subject matter expert, such as a thought leader in marketing, startup growth, or investment.

2. Grounds for comparison

Why have you chosen to examine the two things you're analyzing instead of focusing on two entirely different things? What are you hoping to accomplish?

3. Thesis

What argument or choice are you advocating for? What will be the before and after effects of going with either decision? What do you anticipate happening with and without this approach?

For example, “If we release an AI feature for our shopping cart, we will have an edge over the rest of the market before the holiday season.” The finished comparative analysis will weigh all the pros and cons of choosing to build the new, expensive AI feature, including variables like how “intelligent” it will be, what it “pushes” customers to use, and how much work it takes off the plate of the customer service team.

Ultimately, you will gauge whether building an AI feature is the right plan for your e-commerce shop.

4. Organize the scheme

Typically, there are two ways to organize a comparative analysis report. First, you can discuss everything about comparison point “A” and then go into everything about aspect “B.” Or, you can alternate back and forth between points “A” and “B,” sometimes referred to as point-by-point analysis.

Using the AI feature as an example again, you could cover all the pros and cons of building the AI feature, then all the benefits and drawbacks of doing without it. Or you could compare and contrast each aspect one at a time: for example, a side-by-side comparison of shopping with the AI feature versus shopping without it, then proceeding to another point of differentiation.

5. Connect the dots

Tie it all together in a way that either confirms or disproves your hypothesis.

For instance, “Building the AI bot would allow our customer service team to save 12% on returns in Q3 while offering optimizations and savings in future strategies. However, it would also increase the product development budget by 43% in both Q1 and Q2. Our budget for product development won’t increase again until series 3 of funding is reached, so despite its potential, we will hold off building the bot until funding is secured and more opportunities and benefits can be proved effective.”


Comparing and Contrasting in an Essay | Tips & Examples

Published on August 6, 2020 by Jack Caulfield. Revised on July 23, 2023.

Comparing and contrasting is an important skill in academic writing. It involves taking two or more subjects and analyzing the differences and similarities between them.



When should I compare and contrast?

Many assignments will invite you to make comparisons quite explicitly, as in these prompts.

  • Compare the treatment of the theme of beauty in the poetry of William Wordsworth and John Keats.
  • Compare and contrast in-class and distance learning. What are the advantages and disadvantages of each approach?

Some other prompts may not directly ask you to compare and contrast, but present you with a topic where comparing and contrasting could be a good approach.

For example, given a prompt about the effects of the Great Depression, one way to approach the essay might be to contrast the situation before the Great Depression with the situation during it, to highlight how large a difference it made.

Comparing and contrasting is also used in all kinds of academic contexts where it’s not explicitly prompted. For example, a literature review involves comparing and contrasting different studies on your topic, and an argumentative essay may involve weighing up the pros and cons of different arguments.


Making effective comparisons

As the name suggests, comparing and contrasting is about identifying both similarities and differences. You might focus on contrasting quite different subjects or comparing subjects with a lot in common—but there must be some grounds for comparison in the first place.

For example, you might contrast French society before and after the French Revolution; you’d likely find many differences, but there would be a valid basis for comparison. However, if you contrasted pre-revolutionary France with Han-dynasty China, your reader might wonder why you chose to compare these two societies.

This is why it’s important to clarify the point of your comparisons by writing a focused thesis statement. Every element of an essay should serve your central argument in some way. Consider what you’re trying to accomplish with any comparisons you make, and be sure to make this clear to the reader.

Comparing and contrasting as a brainstorming tool

Comparing and contrasting can be a useful tool to help organize your thoughts before you begin writing any type of academic text. You might use it to compare different theories and approaches you’ve encountered in your preliminary research, for example.

Let’s say your research involves the competing psychological approaches of behaviorism and cognitive psychology. You might make a table to summarize the key differences between them.

Or say you’re writing about the major global conflicts of the twentieth century. You might visualize the key similarities and differences in a Venn diagram.

A Venn diagram showing the similarities and differences between World War I, World War II, and the Cold War.

These visualizations wouldn’t make it into your actual writing, so they don’t have to be very formal in terms of phrasing or presentation. The point of comparing and contrasting at this stage is to help you organize and shape your ideas to aid you in structuring your arguments.

Structuring your comparisons

When comparing and contrasting in an essay, there are two main ways to structure your comparisons: the alternating method and the block method.

The alternating method

In the alternating method, you structure your text according to what aspect you’re comparing. You cover both your subjects side by side in terms of a specific point of comparison. Your text is structured like this:

Mouse over the example paragraph below to see how this approach works.

One challenge teachers face is identifying and assisting students who are struggling without disrupting the rest of the class. In a traditional classroom environment, the teacher can easily identify when a student is struggling based on their demeanor in class or simply by regularly checking on students during exercises. They can then offer assistance quietly during the exercise or discuss it further after class. Meanwhile, in a Zoom-based class, the lack of physical presence makes it more difficult to pay attention to individual students’ responses and notice frustrations, and there is less flexibility to speak with students privately to offer assistance. In this case, therefore, the traditional classroom environment holds the advantage, although it appears likely that aiding students in a virtual classroom environment will become easier as the technology, and teachers’ familiarity with it, improves.

The block method

In the block method, you cover each of the overall subjects you’re comparing in a block. You say everything you have to say about your first subject, then discuss your second subject, making comparisons and contrasts back to the things you’ve already said about the first. Your text is structured like this:

  • Subject 1: point of comparison A, then point of comparison B
  • Subject 2: point of comparison A, then point of comparison B

The most commonly cited advantage of distance learning is the flexibility and accessibility it offers. Rather than being required to travel to a specific location every week (and to live near enough to feasibly do so), students can participate from anywhere with an internet connection. This allows not only for a wider geographical spread of students but for the possibility of studying while travelling. However, distance learning presents its own accessibility challenges; not all students have a stable internet connection and a computer or other device with which to participate in online classes, and less technologically literate students and teachers may struggle with the technical aspects of class participation. Furthermore, discomfort and distractions can hinder an individual student’s ability to engage with the class from home, creating divergent learning experiences for different students. Distance learning, then, seems to improve accessibility in some ways while representing a step backwards in others.

Note that these two methods can be combined; these two example paragraphs could both be part of the same essay, but it’s wise to use an essay outline to plan out which approach you’re taking in each paragraph.


Frequently asked questions about comparing and contrasting

Some essay prompts include the keywords “compare” and/or “contrast.” In these cases, an essay structured around comparing and contrasting is the appropriate response.

Comparing and contrasting is also a useful approach in all kinds of academic writing: You might compare different studies in a literature review, weigh up different arguments in an argumentative essay, or consider different theoretical approaches in a theoretical framework.

Your subjects might be very different or quite similar, but it’s important that there be meaningful grounds for comparison. You can probably describe many differences between a cat and a bicycle, but there isn’t really any connection between them to justify the comparison.

You’ll have to write a thesis statement explaining the central point you want to make in your essay, so be sure to know in advance what connects your subjects and makes them worth comparing.

Comparisons in essays are generally structured in one of two ways:

  • The alternating method, where you compare your subjects side by side according to one specific aspect at a time.
  • The block method, where you cover each subject separately in its entirety.

It’s also possible to combine both methods, for example by writing a full paragraph on each of your topics and then a final paragraph contrasting the two according to a specific metric.

Cite this Scribbr article


Caulfield, J. (2023, July 23). Comparing and Contrasting in an Essay | Tips & Examples. Scribbr. Retrieved April 11, 2024, from https://www.scribbr.com/academic-essay/compare-and-contrast/



Encyclopedia of Quality of Life and Well-Being Research, pp. 1125–1127

Comparative Analysis

Sonja Drobnič


Synonyms: Context of comparisons; Radical positivism

The goal of comparative analysis is to search for similarity and variance among units of analysis. Comparative research commonly involves the description and explanation of similarities and differences of conditions or outcomes among large-scale social units, usually regions, nations, societies, and cultures.

Description

In the broadest sense, it is difficult to think of any analysis in the social sciences that is not comparative. In a laboratory experiment, we compare the outcomes for the experimental and control group to ascertain the effects of some experimental stimulus. When we analyze quality of life of men and women, old and young, or rich and poor, we actually perform a comparison of individuals along certain dimensions, such as gender, age, and wealth/income. However, this meaning of comparative analysis is too general to be really useful in research. “Comparative analysis has come to mean the description and...

This is a preview of subscription content.


Drobnič, S. (2014). Comparative Analysis. In: Michalos, A.C. (eds) Encyclopedia of Quality of Life and Well-Being Research. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-0753-5_492


What Is Comparative Analysis and How to Conduct It? (+ Examples)

Appinio Research · 30.10.2023 · 36min read


Have you ever faced a complex decision, wondering how to make the best choice among multiple options? In a world filled with data and possibilities, the art of comparative analysis holds the key to unlocking clarity amidst the chaos.

In this guide, we'll demystify the power of comparative analysis, revealing its practical applications, methodologies, and best practices. Whether you're a business leader, researcher, or simply someone seeking to make more informed decisions, join us as we explore the intricacies of comparative analysis and equip you with the tools to chart your course with confidence.

What is Comparative Analysis?

Comparative analysis is a systematic approach used to evaluate and compare two or more entities, variables, or options to identify similarities, differences, and patterns. It involves assessing the strengths, weaknesses, opportunities, and threats associated with each entity or option to make informed decisions.

The primary purpose of comparative analysis is to provide a structured framework for decision-making by:

  • Facilitating Informed Choices: Comparative analysis equips decision-makers with data-driven insights, enabling them to make well-informed choices among multiple options.
  • Identifying Trends and Patterns: It helps identify recurring trends, patterns, and relationships among entities or variables, shedding light on underlying factors influencing outcomes.
  • Supporting Problem Solving: Comparative analysis aids in solving complex problems by systematically breaking them down into manageable components and evaluating potential solutions.
  • Enhancing Transparency: By comparing multiple options, comparative analysis promotes transparency in decision-making processes, allowing stakeholders to understand the rationale behind choices.
  • Mitigating Risks : It helps assess the risks associated with each option, allowing organizations to develop risk mitigation strategies and make risk-aware decisions.
  • Optimizing Resource Allocation: Comparative analysis assists in allocating resources efficiently by identifying areas where resources can be optimized for maximum impact.
  • Driving Continuous Improvement: By comparing current performance with historical data or benchmarks, organizations can identify improvement areas and implement growth strategies.

Importance of Comparative Analysis in Decision-Making

  • Data-Driven Decision-Making: Comparative analysis relies on empirical data and objective evaluation, reducing the influence of biases and subjective judgments in decision-making. It ensures decisions are based on facts and evidence.
  • Objective Assessment: It provides an objective and structured framework for evaluating options, allowing decision-makers to focus on key criteria and avoid making decisions solely based on intuition or preferences.
  • Risk Assessment: Comparative analysis helps assess and quantify risks associated with different options. This risk awareness enables organizations to make proactive risk management decisions.
  • Prioritization: By ranking options based on predefined criteria, comparative analysis enables decision-makers to prioritize actions or investments, directing resources to areas with the most significant impact.
  • Strategic Planning: It is integral to strategic planning, helping organizations align their decisions with overarching goals and objectives. Comparative analysis ensures decisions are consistent with long-term strategies.
  • Resource Allocation: Organizations often have limited resources. Comparative analysis assists in allocating these resources effectively, ensuring they are directed toward initiatives with the highest potential returns.
  • Continuous Improvement: Comparative analysis supports a culture of continuous improvement by identifying areas for enhancement and guiding iterative decision-making processes.
  • Stakeholder Communication: It enhances transparency in decision-making, making it easier to communicate decisions to stakeholders. Stakeholders can better understand the rationale behind choices when supported by comparative analysis.
  • Competitive Advantage: In business and competitive environments , comparative analysis can provide a competitive edge by identifying opportunities to outperform competitors or address weaknesses.
  • Informed Innovation: When evaluating new products , technologies, or strategies, comparative analysis guides the selection of the most promising options, reducing the risk of investing in unsuccessful ventures.

In summary, comparative analysis is a valuable tool that empowers decision-makers across various domains to make informed, data-driven choices, manage risks, allocate resources effectively, and drive continuous improvement. Its structured approach enhances decision quality and transparency, contributing to the success and competitiveness of organizations and research endeavors.

How to Prepare for Comparative Analysis?

1. Define Objectives and Scope

Before you begin your comparative analysis, clearly defining your objectives and the scope of your analysis is essential. This step lays the foundation for the entire process. Here's how to approach it:

  • Identify Your Goals: Start by asking yourself what you aim to achieve with your comparative analysis. Are you trying to choose between two products for your business? Are you evaluating potential investment opportunities? Knowing your objectives will help you stay focused throughout the analysis.
  • Define Scope: Determine the boundaries of your comparison. What will you include, and what will you exclude? For example, if you're analyzing market entry strategies for a new product, specify whether you're looking at a specific geographic region or a particular target audience.
  • Stakeholder Alignment: Ensure that all stakeholders involved in the analysis understand and agree on the objectives and scope. This alignment will prevent misunderstandings and ensure the analysis meets everyone's expectations.

2. Gather Relevant Data and Information

The quality of your comparative analysis heavily depends on the data and information you gather. Here's how to approach this crucial step:

  • Data Sources: Identify where you'll obtain the necessary data. Will you rely on primary sources , such as surveys and interviews, to collect original data? Or will you use secondary sources, like published research and industry reports, to access existing data? Consider the advantages and disadvantages of each source.
  • Data Collection Plan: Develop a plan for collecting data. This should include details about the methods you'll use, the timeline for data collection, and who will be responsible for gathering the data.
  • Data Relevance: Ensure that the data you collect is directly relevant to your objectives. Irrelevant or extraneous data can lead to confusion and distract from the core analysis.

3. Select Appropriate Criteria for Comparison

Choosing the right criteria for comparison is critical to a successful comparative analysis. Here's how to go about it:

  • Relevance to Objectives: Your chosen criteria should align closely with your analysis objectives. For example, if you're comparing job candidates, your criteria might include skills, experience, and cultural fit.
  • Measurability: Consider whether you can quantify the criteria. Measurable criteria are easier to analyze. If you're comparing marketing campaigns, you might measure criteria like click-through rates, conversion rates, and return on investment.
  • Weighting Criteria : Not all criteria are equally important. You'll need to assign weights to each criterion based on its relative importance. Weighting helps ensure that the most critical factors have a more significant impact on the final decision.

4. Establish a Clear Framework

Once you have your objectives, data, and criteria in place, it's time to establish a clear framework for your comparative analysis. This framework will guide your process and ensure consistency. Here's how to do it:

  • Comparative Matrix: Consider using a comparative matrix or spreadsheet to organize your data. Each row in the matrix represents an option or entity you're comparing, and each column corresponds to a criterion. This visual representation makes it easy to compare and contrast data.
  • Timeline: Determine the time frame for your analysis. Is it a one-time comparison, or will you conduct ongoing analyses? Having a defined timeline helps you manage the analysis process efficiently.
  • Define Metrics: Specify the metrics or scoring system you'll use to evaluate each criterion. For example, if you're comparing potential office locations, you might use a scoring system from 1 to 5 for factors like cost, accessibility, and amenities (a minimal sketch of this kind of weighted scoring follows this list).
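To make the framework concrete, here is a minimal, hypothetical sketch in Python of a comparative matrix with weighted criteria. The office-location options, the criteria, the weights, and the 1–5 scores are all invented for illustration; they stand in for whatever options and criteria your own analysis defines.

```python
# Criterion weights: more important criteria get a larger share of the total.
criteria_weights = {"cost": 0.5, "accessibility": 0.3, "amenities": 0.2}

# Each option scored 1-5 per criterion (invented numbers).
options = {
    "Location A": {"cost": 4, "accessibility": 3, "amenities": 5},
    "Location B": {"cost": 2, "accessibility": 5, "amenities": 4},
    "Location C": {"cost": 5, "accessibility": 2, "amenities": 3},
}

def weighted_total(scores):
    """Combine one option's criterion scores into a single weighted score."""
    return sum(criteria_weights[criterion] * score for criterion, score in scores.items())

# Rank the options by weighted total, highest first.
ranking = sorted(options.items(), key=lambda item: weighted_total(item[1]), reverse=True)

for name, scores in ranking:
    print(f"{name}: {weighted_total(scores):.2f}")
```

The same pattern works in a spreadsheet or a pandas DataFrame; the important design choice is agreeing on the weights before the scores are filled in, so the ranking isn't tuned to a preferred outcome.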

With your objectives, data, criteria, and framework established, you're ready to move on to the next phase of comparative analysis: data collection and organization.

Comparative Analysis Data Collection

Data collection and organization are critical steps in the comparative analysis process. We'll explore how to gather and structure the data you need for a successful analysis.

1. Utilize Primary Data Sources

Primary data sources involve gathering original data directly from the source. This approach offers unique advantages, allowing you to tailor your data collection to your specific research needs.

Some popular primary data sources include:

  • Surveys and Questionnaires: Design surveys or questionnaires and distribute them to collect specific information from individuals or groups. This method is ideal for obtaining firsthand insights, such as customer preferences or employee feedback.
  • Interviews: Conduct structured interviews with relevant stakeholders or experts. Interviews provide an opportunity to delve deeper into subjects and gather qualitative data, making them valuable for in-depth analysis.
  • Observations: Directly observe and record data from real-world events or settings. Observational data can be instrumental in fields like anthropology, ethnography, and environmental studies.
  • Experiments: In controlled environments, experiments allow you to manipulate variables and measure their effects. This method is common in scientific research and product testing.

When using primary data sources, consider factors like sample size, survey design, and data collection methods to ensure the reliability and validity of your data.

2. Harness Secondary Data Sources

Secondary data sources involve using existing data collected by others. These sources can provide a wealth of information and save time and resources compared to primary data collection.

Here are common types of secondary data sources:

  • Public Records: Government publications, census data, and official reports offer valuable information on demographics, economic trends, and public policies. They are often free and readily accessible.
  • Academic Journals: Scholarly articles provide in-depth research findings across various disciplines. They are helpful for accessing peer-reviewed studies and staying current with academic discourse.
  • Industry Reports: Industry-specific reports and market research publications offer insights into market trends, consumer behavior, and competitive landscapes. They are essential for businesses making strategic decisions.
  • Online Databases: Online platforms like Statista, PubMed, and Google Scholar provide a vast repository of data and research articles. They offer search capabilities and access to a wide range of data sets.

When using secondary data sources, critically assess the credibility, relevance, and timeliness of the data. Ensure that it aligns with your research objectives.

3. Ensure and Validate Data Quality

Data quality is paramount in comparative analysis. Poor-quality data can lead to inaccurate conclusions and flawed decision-making. Here's how to ensure data validation and reliability:

  • Cross-Verification: Whenever possible, cross-verify data from multiple sources. Consistency among different sources enhances the reliability of the data.
  • Sample Size: Ensure that your data sample size is statistically significant for meaningful analysis. A small sample may not accurately represent the population.
  • Data Integrity: Check for data integrity issues, such as missing values, outliers, or duplicate entries. Address these issues before analysis to maintain data quality.
  • Data Source Reliability: Assess the reliability and credibility of the data sources themselves. Consider factors like the reputation of the institution or organization providing the data.

4. Organize Data Effectively

Structuring your data for comparison is a critical step in the analysis process. Organized data makes it easier to draw insights and make informed decisions. Here's how to structure data effectively:

  • Data Cleaning: Before analysis, clean your data to remove inconsistencies, errors, and irrelevant information. Data cleaning may involve data transformation, imputation of missing values, and removing outliers.
  • Normalization: Standardize data to ensure fair comparisons. Normalization adjusts data to a common scale, making it possible to compare variables with different units or ranges (see the short sketch after this list).
  • Variable Labeling: Clearly label variables and data points for easy identification. Proper labeling enhances the transparency and understandability of your analysis.
  • Data Organization: Organize data into a format that suits your analysis methods. For quantitative analysis, this might mean creating a matrix, while qualitative analysis may involve categorizing data into themes.
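
To make the cleaning and normalization steps above concrete, here is a minimal Python sketch using pandas. The vendors, columns, and values are hypothetical; the sketch simply drops duplicate rows, fills a missing value, and rescales each numeric criterion to a common 0–1 range so that options measured in different units can be compared side by side.

```python
import pandas as pd

# Hypothetical comparison data: vendor options scored on criteria measured
# in very different units (price in USD, delivery time in days,
# satisfaction on a 1-10 scale). One row is a duplicate and one price is missing.
raw = pd.DataFrame({
    "option": ["Vendor A", "Vendor B", "Vendor B", "Vendor C"],
    "price_usd": [1200, 950, 950, None],
    "delivery_days": [14, 21, 21, 7],
    "satisfaction": [8.1, 7.4, 7.4, 9.0],
})

# Data cleaning: drop exact duplicates and fill the missing price with the column mean.
clean = raw.drop_duplicates().copy()
clean["price_usd"] = clean["price_usd"].fillna(clean["price_usd"].mean())

# Min-max normalization: rescale every numeric criterion to [0, 1] so the
# criteria become directly comparable despite their different units.
numeric_cols = ["price_usd", "delivery_days", "satisfaction"]
normalized = clean.copy()
for col in numeric_cols:
    col_min, col_max = clean[col].min(), clean[col].max()
    normalized[col] = (clean[col] - col_min) / (col_max - col_min)

print(normalized)
```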

By paying careful attention to data collection, validation, and organization, you'll set the stage for a robust and insightful comparative analysis. Next, we'll explore various methodologies you can employ in your analysis, ranging from qualitative approaches to quantitative methods and examples.

Comparative Analysis Methods

When it comes to comparative analysis, various methodologies are available, each suited to different research goals and data types. In this section, we'll explore five prominent methodologies in detail.

Qualitative Comparative Analysis (QCA)

Qualitative Comparative Analysis (QCA) is a methodology often used when dealing with complex, non-linear relationships among variables. It seeks to identify patterns and configurations among factors that lead to specific outcomes.

  • Case-by-Case Analysis: QCA involves evaluating individual cases (e.g., organizations, regions, or events) rather than analyzing aggregate data. Each case's unique characteristics are considered.
  • Boolean Logic: QCA employs Boolean algebra to analyze data. Variables are categorized as either present or absent, allowing for the examination of different combinations and logical relationships.
  • Necessary and Sufficient Conditions: QCA aims to identify necessary and sufficient conditions for a specific outcome to occur. It helps answer questions like, "What conditions are necessary for a successful product launch?"
  • Fuzzy Set Theory: In some cases, QCA may use fuzzy set theory to account for degrees of membership in a category, allowing for more nuanced analysis.

QCA is particularly useful in fields such as sociology, political science, and organizational studies, where understanding complex interactions is essential.
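
To make the case-by-case, Boolean style of reasoning behind QCA concrete, here is a toy Python sketch using pandas. All cases, conditions, and outcomes are invented, and dedicated QCA software goes much further; the sketch only shows the basic logic of coding conditions as present (1) or absent (0), grouping cases by configuration, and flagging conditions that appear in every successful case as candidate necessary conditions.

```python
import pandas as pd

# Invented cases: each row is one product launch, with conditions coded
# as present (1) or absent (0) and a binary outcome (successful launch).
cases = pd.DataFrame({
    "case":         ["A", "B", "C", "D", "E", "F"],
    "strong_brand": [1,   1,   0,   0,   1,   0],
    "large_budget": [1,   0,   1,   0,   1,   0],
    "early_entry":  [1,   1,   1,   0,   1,   1],
    "success":      [1,   1,   1,   0,   1,   0],
})

conditions = ["strong_brand", "large_budget", "early_entry"]

# Truth table: group cases by configuration and record how many cases
# share it and what share of them achieved the outcome.
truth_table = (
    cases.groupby(conditions)["success"]
         .agg(n_cases="count", outcome_share="mean")
         .reset_index()
)
print(truth_table)

# A condition is a candidate necessary condition (in this toy data) if it
# is present in every case that shows the outcome.
positives = cases[cases["success"] == 1]
for cond in conditions:
    if positives[cond].eq(1).all():
        print(f"{cond} is present in all successful cases (candidate necessary condition)")
```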

Quantitative Comparative Analysis

Quantitative Comparative Analysis involves the use of numerical data and statistical techniques to compare and analyze variables. It's suitable for situations where data is quantitative, and relationships can be expressed numerically.

  • Statistical Tools: Quantitative comparative analysis relies on statistical methods like regression analysis, correlation, and hypothesis testing. These tools help identify relationships, dependencies, and trends within datasets.
  • Data Measurement: Ensure that variables are measured consistently using appropriate scales (e.g., ordinal, interval, ratio) for meaningful analysis. Variables may include numerical values like revenue, customer satisfaction scores, or product performance metrics.
  • Data Visualization: Create visual representations of data using charts, graphs, and plots. Visualization aids in understanding complex relationships and presenting findings effectively.
  • Statistical Significance: Assess the statistical significance of relationships. Statistical significance indicates whether observed differences or relationships are likely to be real rather than due to chance.

Quantitative comparative analysis is commonly applied in economics, social sciences, and market research to draw empirical conclusions from numerical data.
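
As a small, hedged example of the statistical tools mentioned above, the sketch below uses SciPy to compare the mean daily conversion rates of two hypothetical marketing campaigns with Welch's two-sample t-test; all of the figures are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical daily conversion rates (%) for two marketing campaigns.
campaign_a = np.array([2.1, 2.4, 1.9, 2.6, 2.3, 2.2, 2.5])
campaign_b = np.array([1.7, 1.9, 1.6, 2.0, 1.8, 1.7, 2.1])

# Welch's independent two-sample t-test: is the difference in mean
# conversion rate likely to be real rather than due to chance?
t_stat, p_value = stats.ttest_ind(campaign_a, campaign_b, equal_var=False)

print(f"Mean A: {campaign_a.mean():.2f}%, Mean B: {campaign_b.mean():.2f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("The difference is not statistically significant at the 5% level.")
```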

Case Studies

Case studies involve in-depth examinations of specific instances or cases to gain insights into real-world scenarios. Comparative case studies allow researchers to compare and contrast multiple cases to identify patterns, differences, and lessons.

  • Narrative Analysis: Case studies often involve narrative analysis, where researchers construct detailed narratives of each case, including context, events, and outcomes.
  • Contextual Understanding: In comparative case studies, it's crucial to consider the context within which each case operates. Understanding the context helps interpret findings accurately.
  • Cross-Case Analysis: Researchers conduct cross-case analysis to identify commonalities and differences across cases. This process can lead to the discovery of factors that influence outcomes.
  • Triangulation: To enhance the validity of findings, researchers may use multiple data sources and methods to triangulate information and ensure reliability.

Case studies are prevalent in fields like psychology, business, and sociology, where deep insights into specific situations are valuable.

SWOT Analysis

SWOT Analysis is a strategic tool used to assess the Strengths, Weaknesses, Opportunities, and Threats associated with a particular entity or situation. While it's commonly used in business, it can be adapted for various comparative analyses.

  • Internal and External Factors: SWOT Analysis examines both internal factors (Strengths and Weaknesses), such as organizational capabilities, and external factors (Opportunities and Threats), such as market conditions and competition.
  • Strategic Planning: The insights from SWOT Analysis inform strategic decision-making. By identifying strengths and opportunities, organizations can leverage their advantages. Likewise, addressing weaknesses and threats helps mitigate risks.
  • Visual Representation: SWOT Analysis is often presented as a matrix or a 2x2 grid, making it visually accessible and easy to communicate to stakeholders.
  • Continuous Monitoring: SWOT Analysis is not a one-time exercise. Organizations use it periodically to adapt to changing circumstances and make informed decisions.

SWOT Analysis is versatile and can be applied in business, healthcare, education, and any context where a structured assessment of factors is needed.

Benchmarking

Benchmarking involves comparing an entity's performance, processes, or practices to those of industry leaders or best-in-class organizations. It's a powerful tool for continuous improvement and competitive analysis.

  • Identify Performance Gaps: Benchmarking helps identify areas where an entity lags behind its peers or industry standards. These performance gaps highlight opportunities for improvement.
  • Data Collection: Gather data on key performance metrics from both internal and external sources. This data collection phase is crucial for meaningful comparisons.
  • Comparative Analysis: Compare your organization's performance data with that of benchmark organizations. This analysis can reveal where you excel and where adjustments are needed.
  • Continuous Improvement: Benchmarking is a dynamic process that encourages continuous improvement. Organizations use benchmarking findings to set performance goals and refine their strategies.

Benchmarking is widely used in business, manufacturing, healthcare, and customer service to drive excellence and competitiveness.
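
The gap-identification step of benchmarking can be sketched in a few lines of Python. In the hypothetical example below (pandas; all metrics and figures are invented), each internal metric is compared with a best-in-class benchmark and a relative gap is computed, taking into account whether a higher or a lower value is better for that metric.

```python
import pandas as pd

# Hypothetical performance metrics for our organization and a
# best-in-class benchmark.
data = pd.DataFrame({
    "metric":           ["order_fulfillment_days", "defect_rate_pct", "nps_score"],
    "ours":             [5.2, 1.8, 42],
    "benchmark":        [3.1, 0.9, 61],
    "higher_is_better": [False, False, True],  # direction of improvement per metric
})

# Relative performance gap: positive values mean we lag the benchmark.
def relative_gap(row):
    if row["higher_is_better"]:
        return (row["benchmark"] - row["ours"]) / row["benchmark"]
    return (row["ours"] - row["benchmark"]) / row["benchmark"]

data["relative_gap"] = data.apply(relative_gap, axis=1)

# Largest gaps first: these are the areas with the biggest improvement potential.
print(data.sort_values("relative_gap", ascending=False))
```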

Each of these methodologies brings a unique perspective to comparative analysis, allowing you to choose the one that best aligns with your research objectives and the nature of your data. The choice between qualitative and quantitative methods, or a combination of both, depends on the complexity of the analysis and the questions you seek to answer.

How to Conduct Comparative Analysis?

Once you've prepared your data and chosen an appropriate methodology, it's time to dive into the process of conducting a comparative analysis. We will guide you through the essential steps to extract meaningful insights from your data.

1. Identify Key Variables and Metrics

Identifying key variables and metrics is the first crucial step in conducting a comparative analysis. These are the factors or indicators you'll use to assess and compare your options.

  • Relevance to Objectives: Ensure the chosen variables and metrics align closely with your analysis objectives. When comparing marketing strategies, relevant metrics might include customer acquisition cost, conversion rate, and retention.
  • Quantitative vs. Qualitative: Decide whether your analysis will focus on quantitative data (numbers) or qualitative data (descriptive information). In some cases, a combination of both may be appropriate.
  • Data Availability: Consider the availability of data. Ensure you can access reliable and up-to-date data for all selected variables and metrics.
  • KPIs: Key Performance Indicators (KPIs) are often used as the primary metrics in comparative analysis. These are metrics that directly relate to your goals and objectives.

2. Visualize Data for Clarity

Data visualization techniques play a vital role in making complex information more accessible and understandable. Effective data visualization allows you to convey insights and patterns to stakeholders. Consider the following approaches:

  • Charts and Graphs: Use various types of charts, such as bar charts, line graphs, and pie charts, to represent data. For example, a line graph can illustrate trends over time, while a bar chart can compare values across categories.
  • Heatmaps: Heatmaps are particularly useful for visualizing large datasets and identifying patterns through color-coding. They can reveal correlations, concentrations, and outliers.
  • Scatter Plots: Scatter plots help visualize relationships between two variables. They are especially useful for identifying trends, clusters, or outliers.
  • Dashboards: Create interactive dashboards that allow users to explore data and customize views. Dashboards are valuable for ongoing analysis and reporting.
  • Infographics: For presentations and reports, consider using infographics to summarize key findings in a visually engaging format.

Effective data visualization not only enhances understanding but also aids in decision-making by providing clear insights at a glance.
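
As a minimal illustration, the following Python sketch (matplotlib; the options, criteria, and scores are invented) draws a grouped bar chart that places three options side by side on three normalized criteria, one of the simplest and most effective comparative visualizations.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical normalized scores (0-1) for three options on three criteria.
criteria = ["Cost", "Quality", "Support"]
scores = {
    "Option A": [0.8, 0.6, 0.7],
    "Option B": [0.5, 0.9, 0.6],
    "Option C": [0.7, 0.7, 0.9],
}

x = np.arange(len(criteria))  # one group of bars per criterion
width = 0.25                  # width of each bar within a group

fig, ax = plt.subplots()
for i, (name, vals) in enumerate(scores.items()):
    ax.bar(x + i * width, vals, width, label=name)

ax.set_xticks(x + width)       # center the tick labels under each group
ax.set_xticklabels(criteria)
ax.set_ylabel("Normalized score (0-1)")
ax.set_title("Side-by-side comparison across criteria")
ax.legend()
plt.tight_layout()
plt.show()
```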

3. Establish Clear Comparative Frameworks

A well-structured comparative framework provides a systematic approach to your analysis. It ensures consistency and enables you to make meaningful comparisons. Here's how to create one:

  • Comparison Matrices: Consider using matrices or spreadsheets to organize your data. Each row represents an option or entity, and each column corresponds to a variable or metric. This matrix format allows for side-by-side comparisons.
  • Decision Trees: In complex decision-making scenarios, decision trees help map out possible outcomes based on different criteria and variables. They visualize the decision-making process.
  • Scenario Analysis: Explore different scenarios by altering variables or criteria to understand how changes impact outcomes. Scenario analysis is valuable for risk assessment and planning.
  • Checklists: Develop checklists or scoring sheets to systematically evaluate each option against predefined criteria. Checklists ensure that no essential factors are overlooked.

A well-structured comparative framework simplifies the analysis process, making it easier to draw meaningful conclusions and make informed decisions.

4. Evaluate and Score Criteria

Evaluating and scoring criteria is a critical step in comparative analysis, as it quantifies the performance of each option against the chosen criteria.

  • Scoring System: Define a scoring system that assigns values to each criterion for every option. Common scoring systems include numerical scales, percentage scores, or qualitative ratings (e.g., high, medium, low).
  • Consistency: Ensure consistency in scoring by defining clear guidelines for each score. Provide examples or descriptions to help evaluators understand what each score represents.
  • Data Collection: Collect data or information relevant to each criterion for all options. This may involve quantitative data (e.g., sales figures) or qualitative data (e.g., customer feedback).
  • Aggregation: Aggregate the scores for each option to obtain an overall evaluation. This can be done by summing the individual criterion scores or applying weighted averages.
  • Normalization: If your criteria have different measurement scales or units, consider normalizing the scores to create a level playing field for comparison.

5. Assign Importance to Criteria

Not all criteria are equally important in a comparative analysis. Weighting criteria allows you to reflect their relative significance in the final decision-making process.

  • Relative Importance: Assess the importance of each criterion in achieving your objectives. Criteria directly aligned with your goals may receive higher weights.
  • Weighting Methods: Choose a weighting method that suits your analysis. Common methods include expert judgment, analytic hierarchy process (AHP), or data-driven approaches based on historical performance.
  • Impact Analysis: Consider how changes in the weights assigned to criteria would affect the final outcome. This sensitivity analysis helps you understand the robustness of your decisions.
  • Stakeholder Input: Involve relevant stakeholders or decision-makers in the weighting process. Their input can provide valuable insights and ensure alignment with organizational goals.
  • Transparency: Clearly document the rationale behind the assigned weights to maintain transparency in your analysis.

By weighting criteria, you ensure that the most critical factors have a more significant influence on the final evaluation, aligning the analysis more closely with your objectives and priorities.
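
Scoring, weighting, aggregation, and a basic sensitivity check can be combined in a short Python sketch. Everything below (the options, criteria, 1-5 scores, and weights) is hypothetical; the sketch simply computes weighted totals with pandas and then recomputes them under an alternative set of weights to see whether the ranking holds.

```python
import pandas as pd

# Hypothetical 1-5 scores for three options on three criteria.
scores = pd.DataFrame(
    {"cost": [4, 3, 5], "skills_fit": [5, 4, 3], "scalability": [3, 5, 4]},
    index=["Option A", "Option B", "Option C"],
)

# Criterion weights reflecting relative importance (they sum to 1).
weights = pd.Series({"cost": 0.5, "skills_fit": 0.3, "scalability": 0.2})

# Weighted aggregation: multiply each score by its weight and sum per option.
scores["weighted_total"] = scores[weights.index].mul(weights).sum(axis=1)
print(scores.sort_values("weighted_total", ascending=False))

# Simple sensitivity check: shift weight from cost to scalability and see
# whether the ranking of options changes.
alt_weights = pd.Series({"cost": 0.3, "skills_fit": 0.3, "scalability": 0.4})
alt_total = scores[alt_weights.index].mul(alt_weights).sum(axis=1)
print("\nRanking under alternative weights:")
print(alt_total.sort_values(ascending=False))
```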

With these steps in place, you're well-prepared to conduct a comprehensive comparative analysis. The next phase involves interpreting your findings, drawing conclusions, and making informed decisions based on the insights you've gained.

Comparative Analysis Interpretation

Interpreting the results of your comparative analysis is a crucial phase that transforms data into actionable insights. We'll delve into various aspects of interpretation and how to make sense of your findings.

  • Contextual Understanding: Before diving into the data, consider the broader context of your analysis. Understand the industry trends, market conditions, and any external factors that may have influenced your results.
  • Drawing Conclusions: Summarize your findings clearly and concisely. Identify trends, patterns, and significant differences among the options or variables you've compared.
  • Quantitative vs. Qualitative Analysis: Depending on the nature of your data and analysis, you may need to balance both quantitative and qualitative interpretations. Qualitative insights can provide context and nuance to quantitative findings.
  • Comparative Visualization: Visual aids such as charts, graphs, and tables can help convey your conclusions effectively. Choose visual representations that align with the nature of your data and the key points you want to emphasize.
  • Outliers and Anomalies: Identify and explain any outliers or anomalies in your data. Understanding these exceptions can provide valuable insights into unusual cases or factors affecting your analysis.
  • Cross-Validation: Validate your conclusions by comparing them with external benchmarks, industry standards, or expert opinions. Cross-validation helps ensure the reliability of your findings.
  • Implications for Decision-Making: Discuss how your analysis informs decision-making. Clearly articulate the practical implications of your findings and their relevance to your initial objectives.
  • Actionable Insights: Emphasize actionable insights that can guide future strategies, policies, or actions. Make recommendations based on your analysis, highlighting the steps needed to capitalize on strengths or address weaknesses.
  • Continuous Improvement: Encourage a culture of continuous improvement by using your analysis as a feedback mechanism. Suggest ways to monitor and adapt strategies over time based on evolving circumstances.

Comparative Analysis Applications

Comparative analysis is a versatile methodology that finds application in various fields and scenarios. Let's explore some of the most common and impactful applications.

Business Decision-Making

Comparative analysis is widely employed in business to inform strategic decisions and drive success. Key applications include:

Market Research and Competitive Analysis

  • Objective: To assess market opportunities and evaluate competitors.
  • Methods: Analyzing market trends, customer preferences, competitor strengths and weaknesses, and market share.
  • Outcome: Informed product development, pricing strategies, and market entry decisions.

Product Comparison and Benchmarking

  • Objective: To compare the performance and features of products or services.
  • Methods: Evaluating product specifications, customer reviews, and pricing.
  • Outcome: Identifying strengths and weaknesses, improving product quality, and setting competitive pricing.

Financial Analysis

  • Objective: To evaluate financial performance and make investment decisions.
  • Methods: Comparing financial statements, ratios, and performance indicators of companies.
  • Outcome: Informed investment choices, risk assessment, and portfolio management.

Healthcare and Medical Research

In the healthcare and medical research fields, comparative analysis is instrumental in understanding diseases, treatment options, and healthcare systems.

Clinical Trials and Drug Development

  • Objective: To compare the effectiveness of different treatments or drugs.
  • Methods: Analyzing clinical trial data, patient outcomes, and side effects.
  • Outcome: Informed decisions about drug approvals, treatment protocols, and patient care.

Health Outcomes Research

  • Objective: To assess the impact of healthcare interventions.
  • Methods: Comparing patient health outcomes before and after treatment or between different treatment approaches.
  • Outcome: Improved healthcare guidelines, cost-effectiveness analysis, and patient care plans.

Healthcare Systems Evaluation

  • Objective: To assess the performance of healthcare systems.
  • Methods: Comparing healthcare delivery models, patient satisfaction, and healthcare costs.
  • Outcome: Informed healthcare policy decisions, resource allocation, and system improvements.

Social Sciences and Policy Analysis

Comparative analysis is a fundamental tool in social sciences and policy analysis, aiding in understanding complex societal issues.

Educational Research

  • Objective: To compare educational systems and practices.
  • Methods: Analyzing student performance, curriculum effectiveness, and teaching methods.
  • Outcome: Informed educational policies, curriculum development, and school improvement strategies.

Political Science

  • Objective: To study political systems, elections, and governance.
  • Methods: Comparing election outcomes, policy impacts, and government structures.
  • Outcome: Insights into political behavior, policy effectiveness, and governance reforms.

Social Welfare and Poverty Analysis

  • Objective: To evaluate the impact of social programs and policies.
  • Methods: Comparing the well-being of individuals or communities with and without access to social assistance.
  • Outcome: Informed policymaking, poverty reduction strategies, and social program improvements.

Environmental Science and Sustainability

Comparative analysis plays a pivotal role in understanding environmental issues and promoting sustainability.

Environmental Impact Assessment

  • Objective: To assess the environmental consequences of projects or policies.
  • Methods: Comparing ecological data, resource use, and pollution levels.
  • Outcome: Informed environmental mitigation strategies, sustainable development plans, and regulatory decisions.

Climate Change Analysis

  • Objective: To study climate patterns and their impacts.
  • Methods: Comparing historical climate data, temperature trends, and greenhouse gas emissions.
  • Outcome: Insights into climate change causes, adaptation strategies, and policy recommendations.

Ecosystem Health Assessment

  • Objective: To evaluate the health and resilience of ecosystems.
  • Methods: Comparing biodiversity, habitat conditions, and ecosystem services.
  • Outcome: Conservation efforts, restoration plans, and ecological sustainability measures.

Technology and Innovation

Comparative analysis is crucial in the fast-paced world of technology and innovation.

Product Development and Innovation

  • Objective: To assess the competitiveness and innovation potential of products or technologies.
  • Methods: Comparing research and development investments, technology features, and market demand.
  • Outcome: Informed innovation strategies, product roadmaps, and patent decisions.

User Experience and Usability Testing

  • Objective: To evaluate the user-friendliness of software applications or digital products.
  • Methods: Comparing user feedback, usability metrics, and user interface designs.
  • Outcome: Improved user experiences, interface redesigns, and product enhancements.

Technology Adoption and Market Entry

  • Objective: To analyze market readiness and risks for new technologies.
  • Methods: Comparing market conditions, regulatory landscapes, and potential barriers.
  • Outcome: Informed market entry strategies, risk assessments, and investment decisions.

These diverse applications of comparative analysis highlight its flexibility and importance in decision-making across various domains. Whether in business, healthcare, social sciences, environmental studies, or technology, comparative analysis empowers researchers and decision-makers to make informed choices and drive positive outcomes.

Comparative Analysis Best Practices

Successful comparative analysis relies on following best practices and avoiding common pitfalls. Implementing these practices enhances the effectiveness and reliability of your analysis.

  • Clearly Defined Objectives: Start with well-defined objectives that outline what you aim to achieve through the analysis. Clear objectives provide focus and direction.
  • Data Quality Assurance: Ensure data quality by validating, cleaning, and normalizing your data. Poor-quality data can lead to inaccurate conclusions.
  • Transparent Methodologies: Clearly explain the methodologies and techniques you've used for analysis. Transparency builds trust and allows others to assess the validity of your approach.
  • Consistent Criteria: Maintain consistency in your criteria and metrics across all options or variables. Inconsistent criteria can lead to biased results.
  • Sensitivity Analysis: Conduct sensitivity analysis by varying key parameters, such as weights or assumptions, to assess the robustness of your conclusions.
  • Stakeholder Involvement: Involve relevant stakeholders throughout the analysis process. Their input can provide valuable perspectives and ensure alignment with organizational goals.
  • Critical Evaluation of Assumptions: Identify and critically evaluate any assumptions made during the analysis. Assumptions should be explicit and justifiable.
  • Holistic View: Take a holistic view of the analysis by considering both short-term and long-term implications. Avoid focusing solely on immediate outcomes.
  • Documentation: Maintain thorough documentation of your analysis, including data sources, calculations, and decision criteria. Documentation supports transparency and facilitates reproducibility.
  • Continuous Learning: Stay updated with the latest analytical techniques, tools, and industry trends. Continuous learning helps you adapt your analysis to changing circumstances.
  • Peer Review: Seek peer review or expert feedback on your analysis. External perspectives can identify blind spots and enhance the quality of your work.
  • Ethical Considerations: Address ethical considerations, such as privacy and data protection, especially when dealing with sensitive or personal data.

By adhering to these best practices, you'll not only improve the rigor of your comparative analysis but also ensure that your findings are reliable, actionable, and aligned with your objectives.

Comparative Analysis Examples

To illustrate the practical application and benefits of comparative analysis, let's explore several real-world examples across different domains. These examples showcase how organizations and researchers leverage comparative analysis to make informed decisions, solve complex problems, and drive improvements:

Retail Industry - Price Competitiveness Analysis

Objective: A retail chain aims to assess its price competitiveness against competitors in the same market.

Methodology:

  • Collect pricing data for a range of products offered by the retail chain and its competitors.
  • Organize the data into a comparative framework, categorizing products by type and price range.
  • Calculate price differentials, averages, and percentiles for each product category.
  • Analyze the findings to identify areas where the retail chain's prices are higher or lower than competitors.

Outcome: The analysis reveals that the retail chain's prices are consistently lower in certain product categories but higher in others. This insight informs pricing strategies, allowing the retailer to adjust prices to remain competitive in the market.
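
A minimal version of this retail analysis might look like the following Python sketch (pandas). The categories, products, and prices are invented; the sketch computes a percentage price differential per product and averages it by category to show where the chain is priced above or below its competitor.

```python
import pandas as pd

# Hypothetical prices (USD) collected for the same products at our chain
# and at a competitor.
prices = pd.DataFrame({
    "category":   ["snacks", "snacks", "dairy", "dairy", "beverages", "beverages"],
    "product":    ["chips", "pretzels", "milk", "yogurt", "cola", "juice"],
    "our_price":  [2.49, 1.99, 3.29, 1.19, 1.79, 2.99],
    "competitor": [2.29, 2.09, 3.49, 1.09, 1.69, 3.19],
})

# Price differential in percent: positive means we are more expensive.
prices["diff_pct"] = (prices["our_price"] - prices["competitor"]) / prices["competitor"] * 100

# Average and median differential per category show where prices are
# consistently higher or lower than the competitor's.
summary = prices.groupby("category")["diff_pct"].agg(["mean", "median"]).round(1)
print(summary)
```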

Healthcare - Comparative Effectiveness Research

Objective: Researchers aim to compare the effectiveness of two different treatment methods for a specific medical condition.

  • Recruit patients with the medical condition and randomly assign them to two treatment groups.
  • Collect data on treatment outcomes, including symptom relief, side effects, and recovery times.
  • Analyze the data using statistical methods to compare the treatment groups.
  • Consider factors like patient demographics and baseline health status as potential confounding variables.

Outcome: The comparative analysis reveals that one treatment method is statistically more effective than the other in relieving symptoms and has fewer side effects. This information guides medical professionals in recommending the more effective treatment to patients.
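
A hedged sketch of the statistical comparison step might use a chi-square test of independence on the symptom-relief counts, as below (SciPy). The patient counts are invented, and a real study would also adjust for confounders such as demographics and baseline health status.

```python
import numpy as np
from scipy import stats

# Hypothetical trial results: patients with and without symptom relief
# in each treatment group.
#                        relief, no relief
treatment_a = np.array([42, 18])
treatment_b = np.array([29, 31])
contingency = np.array([treatment_a, treatment_b])

# Chi-square test of independence: is symptom relief associated with
# the treatment received?
chi2, p_value, dof, expected = stats.chi2_contingency(contingency)

rate_a = treatment_a[0] / treatment_a.sum()
rate_b = treatment_b[0] / treatment_b.sum()
print(f"Relief rate A: {rate_a:.0%}, Relief rate B: {rate_b:.0%}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```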

Environmental Science - Carbon Emission Analysis

Objective: An environmental organization seeks to compare carbon emissions from various transportation modes in a metropolitan area.

  • Collect data on the number of vehicles, their types (e.g., cars, buses, bicycles), and fuel consumption for each mode of transportation.
  • Calculate the total carbon emissions for each mode based on fuel consumption and emission factors.
  • Create visualizations such as bar charts and pie charts to represent the emissions from each transportation mode.
  • Consider factors like travel distance, occupancy rates, and the availability of alternative fuels.

Outcome: The comparative analysis reveals that public transportation generates significantly lower carbon emissions per passenger mile compared to individual car travel. This information supports advocacy for increased public transit usage to reduce carbon footprint.
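
The core calculation behind this example, emissions per passenger mile, can be sketched as follows in Python (pandas). The fuel-use figures, emission factors, and occupancy rates are rough, invented placeholders; a real analysis would substitute measured local data.

```python
import pandas as pd

# Hypothetical inputs: fuel use per vehicle-mile, an emission factor per
# unit of fuel, and average occupancy for each transportation mode.
modes = pd.DataFrame({
    "mode":              ["car", "bus", "bicycle"],
    "fuel_per_mile":     [0.040, 0.160, 0.0],  # gallons per vehicle-mile (assumed)
    "kg_co2_per_gallon": [8.9, 10.2, 0.0],     # emission factor (assumed)
    "avg_occupancy":     [1.5, 22.0, 1.0],     # passengers per vehicle (assumed)
})

# Emissions per passenger-mile = (fuel per mile x emission factor) / occupancy.
modes["kg_co2_per_passenger_mile"] = (
    modes["fuel_per_mile"] * modes["kg_co2_per_gallon"] / modes["avg_occupancy"]
)
print(modes[["mode", "kg_co2_per_passenger_mile"]].round(3))
```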

Technology Industry - Feature Comparison for Software Development Tools

Objective: A software development team needs to choose the most suitable development tool for an upcoming project.

  • Create a list of essential features and capabilities required for the project.
  • Research and compile information on available development tools in the market.
  • Develop a comparative matrix or scoring system to evaluate each tool's features against the project requirements.
  • Assign weights to features based on their importance to the project.

Outcome: The comparative analysis highlights that Tool A excels in essential features critical to the project, such as version control integration and debugging capabilities. The development team selects Tool A as the preferred choice for the project.

Educational Research - Comparative Study of Teaching Methods

Objective: A school district aims to improve student performance by comparing the effectiveness of traditional classroom teaching with online learning.

  • Randomly assign students to two groups: one taught using traditional methods and the other through online courses.
  • Administer pre- and post-course assessments to measure knowledge gain.
  • Collect feedback from students and teachers on the learning experiences.
  • Analyze assessment scores and feedback to compare the effectiveness and satisfaction levels of both teaching methods.

Outcome: The comparative analysis reveals that online learning leads to similar knowledge gains as traditional classroom teaching. However, students report higher satisfaction and flexibility with the online approach. The school district considers incorporating online elements into its curriculum.

These examples illustrate the diverse applications of comparative analysis across industries and research domains. Whether optimizing pricing strategies in retail, evaluating treatment effectiveness in healthcare, assessing environmental impacts, choosing the right software tool, or improving educational methods, comparative analysis empowers decision-makers with valuable insights for informed choices and positive outcomes.

Conclusion for Comparative Analysis

Comparative analysis is your compass in the world of decision-making. It helps you see the bigger picture, spot opportunities, and navigate challenges. By defining your objectives, gathering data, applying methodologies, and following best practices, you can harness the power of comparative analysis to make informed choices and drive positive outcomes.

Remember, comparative analysis is not just a tool; it's a mindset that empowers you to transform data into insights and uncertainty into clarity. So, whether you're steering a business, conducting research, or facing life's choices, embrace comparative analysis as your trusted guide on the journey to better decisions. With it, you can chart your course, make impactful choices, and set sail toward success.

A Comparative Analysis: Research Paper vs. Proposal

This article provides a comparative analysis of research paper writing and proposal writing. It explores the similarities and differences between these two forms of academic documents, aiming to explain how they are both distinct yet interrelated. Moreover, it will provide insight into the process by which each document is developed as well as their individual functions in relation to one another. Through this exploration, readers should gain an understanding of not only what makes up a successful research paper or proposal but also why it is necessary to be mindful of their respective distinctions when engaging in either type of work.

I. Introduction
II. The Nature of a Research Paper and Proposal
III. Differing Purposes in Writing Assignments
IV. Structural Differences Between the Two Genres
V. Strategies for Composing Each Piece
VI. Guidelines to Consider When Comparing Quality Levels of Papers and Proposals
VII. Conclusion

When beginning research, it is important to understand the difference between a research paper and a research proposal. Both require an in-depth study of different topics or aspects related to one’s field of expertise, but the two differ significantly in purpose as well as structure.

  • A research paper: A research paper is meant to present facts about an issue with supporting evidence from reliable sources such as peer-reviewed journals, books, magazines and more.
  • A research proposal: On the other hand, a research proposal outlines potential solutions for an identified problem or challenges experienced by others in the same field. It usually involves conducting experiments aimed at testing hypotheses that have been established beforehand.

Research papers and research proposals are both essential elements of scholarly work, yet they have some key differences. Both involve writing with an academic purpose but differ in scope and intent.

  • A research paper is a longer-form piece of writing that dives into the specifics of the topic you’ve chosen to explore. It typically consists of several sections, including a literature review, methodology, data analysis/discussion, and a results section.
  • Research papers take considerable time to create, as extensive reading on the subject is often required before beginning your own exploration or argument.

As students progress through their academic career, the types of writing assignments that they encounter differ significantly in purpose. In general terms, there are two main categories: research papers and research proposals.

Research papers are focused on finding answers to questions or solving problems related to a given subject matter. Students conduct extensive library and internet research while constructing an argument supported by evidence found during their investigations. The goal is for the student to demonstrate knowledge of what has been studied as well as offer new perspectives on potential solutions to identified issues or furthering current understanding of topics explored.

When it comes to writing, there are two main genres that serve different purposes: research papers and research proposals. Although similar in nature, they also have notable differences in structure.

  • Research Paper : The purpose of a research paper is to present an idea or argument. It should be written with clarity and rigor so the readers can understand the concept being presented. A typical research paper includes five sections – introduction, literature review, methodology, results/analysis & discussion – each having its own particular structural elements.
  • Research Proposal: By contrast, a research proposal’s primary objective is to win approval for further study; thus its focus tends to be more persuasive than explanatory in nature. Research proposals typically include four parts – statement of the problem/goal definition and importance; overview of existing literature on the topic; proposed solution (including objectives); and potential outcomes and contribution(s) – all intended to convince grant-makers or other authority figures as to why the project deserves their support.

As students compose each of the two major pieces required for a research project – the research paper and the research proposal – there are some strategies they can use to ensure an effective outcome.

  • Research Paper:

When writing their research paper, it is important that students understand how to properly format their document with regard to length, citation style and structure. Additionally, critical thinking skills should be employed, and an ample amount of proofreading time set aside in order to identify any errors that may have been missed along the way.

  • Research Proposal:

The process of creating a research proposal is quite different from that of composing a traditional essay or term paper; therefore special care must be taken when constructing this particular piece. As opposed to relying only on knowledge learned in class or from textbooks, researching existing literature related to one’s chosen topic is highly beneficial if not necessary when preparing such an assignment. After compiling relevant information via both primary and secondary sources, synthesizing these ideas into an original approach should also help elevate its quality.

Making Comparative Judgements: When it comes to making a comparison between two or more papers and proposals, there are certain factors that should be taken into consideration. Whether they involve research papers or research proposals, the following guidelines can help academics assess their quality levels with greater accuracy:

  • Readability – A paper or proposal may contain complex ideas but still remain accessible enough for any reader to understand.
  • Originality – Look out for those items which stand apart from the rest because of innovative approaches and new perspectives.

In conclusion, the differences between a research paper and a research proposal are numerous. A research paper is an academic document that presents independent research conducted by students or scholars in any particular field of study. It typically contains original findings, review of existing literature on the topic, and analyses based on collected data as well as evidence from other reliable sources.

On the other hand, a research proposal outlines an idea for future scientific investigation or experiment. Generally speaking, this document serves to get permission and/or funding from relevant governing bodies before researchers can begin their work. Research proposals outline plans for data collection methods such as interviews or surveys; they also discuss hypotheses related to expected outcomes in relation to current knowledge within their respective fields. Furthermore, these documents often include timeline projections with expected completion dates for each phase of work along with proposed budgets detailing expenses necessary for successful implementation of research projects.

  • Overall, while there are many similarities between them, each serves a unique purpose.
  • Research papers reflect upon past discoveries through analysis but proposals identify new areas where further exploration may be warranted.

In conclusion, this comparative analysis has illustrated the differences between a research paper and a proposal. It is evident that while they share similarities in terms of structure, format, style, topics covered and purpose, each one also holds distinct attributes from its counterpart. To ensure success when completing either type of document, it is imperative to consider their respective criteria as well as any additional requirements imposed by specific academic institutions or employers.



Comparative Genre Analysis

In the University Writing Seminar, the Comparative Genre Analysis (CGA) unit asks students to read writing from varying disciplines. The goal of the CGA is to prepare students for writing in their courses across the disciplines, as well as in their future careers. The CGA acts as an important introduction to the fact that, while elements of writing (e.g., evidence, motive) exist in all disciplines and genres, these elements often look different.

Over a 2-3 class sequence, students work independently and in groups to identify how writing across the disciplines varies and where it is similar in content, style, and organization. Instructors select four academic articles, typically one from the humanities, one from the sciences, and two from the social sciences (ideally one more humanistic social science, such as cultural anthropology or history, and one more quantitative social science, such as sociology or economics). These articles become the foundation for observing similarities and differences in writing. Class discussion on the final day of the CGA highlights not just HOW academic writing varies but WHY this variation exists.

At the end of the CGA class sequence, as well as at the end of the semester, students are asked to write reflections on what they have learned about writing across the disciplines and about what this might mean for them in future courses. Student reflections suggest that the CGA is effective in beginning the conversation about how writing is similar and different across the disciplines ( Sample Student Reflection below).

What does this mean for Writing Intensive (WI) classes?

  • Your students know that the writing in your discipline may be different in some/many ways from the writing they did in UWS or in previous courses.
  • Your students have the tools to start to predict how writing in a new discipline may be different. Because they understand, for example, why name/date citations are used in one discipline and name/page number citations are used in another, they can anticipate what a new discipline will require.
  • If you identify a writing convention in your discipline, students should be able to fit this into the larger conversation around writing similarities and differences that they participated in during UWS.
  • Students have discussed and reflected on what questions they might need to ask their professors / teaching assistants when writing in a new discipline (e.g., What citation style should be used? Is the first person allowed?).
  • WI instructors can facilitate the writing process for students by 1) identifying what element of writing they are discussing, using UWS language, and 2) explicitly describing the disciplinary-specific expectations for this element and reiterating why this convention is used.   
Sample Student Reflection

I believe the absolute most important thing a writer should consider is their intended audience. Whichever discipline the essay is for, they all have a distinct and diverse readership. This readership cannot be overlooked when creating an argument. In future writing, no matter the subject, I will try to build a firm foundation of the writing in a discipline. By reading academic articles from respected sources, it will allow me to grasp the different ways the argument is presented in that particular field. I would also open a discussion with my class, colleagues, or teacher about the different ways they are approaching this discipline.

When presenting their ideas in writing, I think the most important thing for scholars to consider is the audience who they are presenting it to. When writing, audience can affect multiple facets of any article or paper. For example, when presenting to a scientific audience, technical language can be used and an expectation for certain background knowledge can be considered. However, when presenting to a broader audience such as a public awareness piece, the writer may choose to use language that isn’t quite so technical and complex, making the paper more accessible to its desired audience.

The CGA exercise has made me more aware of the types of data used in different disciplines when writing. For example, scientific pieces do not usually use many quotes because it’s not so much what was said that was important, rather the actual conclusion that can be drawn from the data. In more humanities focused pieces, quotes can play a major role in the focus of the paper while numerical data and experiments may not. The CGA exercise also made me more aware of how different disciplines require varying levels of formality in the presentation of ideas or information and differences in determining when to use a thesis or a hypothesis. Although we only examined four disciplines within our analysis, as I approach other disciplines in the future, I feel that this exercise has given me a good base.

I know that in the future I will approach the writing by first determining what matters most to my audience that I am writing for and the relationship between my audience and the motive behind my writing. In future classes when I need to know the writing style of a new discipline, I will ask the professor the writing style I should use and what format I should do my works cited in. I could also ask what the structure should be, what the length should be, and if I should be concise or more flowery. I could also ask if the essay should be personal and opinionated or more impartial.

The CGA exercise has made me more aware that different disciplines have their own style of writing. It has made me realize that I cannot just carry over my writing style from a biology or physics paper and use that same style in an English essay. I am also aware that citation style is very important to writing papers. The differences in structure in the different disciplines is also very important when it comes to writing papers.

Elissa Jacobs and Paige Eggebrecht

Open access | Published: 10 May 2021

Comparative analysis of deep learning image detection algorithms

Shrey Srivastava, Amit Vishvas Divekar, Chandu Anilkumar, Ishika Naik, Ved Kulkarni and V. Pattabiraman

Journal of Big Data, volume 8, Article number: 66 (2021)


A computer views all kinds of visual media as an array of numerical values. As a consequence, computers require image processing algorithms to inspect the contents of images. This project compares three major image processing algorithms: Single Shot Detection (SSD), Faster Region based Convolutional Neural Networks (Faster R-CNN), and You Only Look Once (YOLO), to find the fastest and most efficient of the three. In this comparative analysis, using the Microsoft COCO (Common Objects in Context) dataset, the performance of these three algorithms is evaluated and their strengths and limitations are analysed based on parameters such as accuracy, precision and F1 score. From the results of the analysis, it can be concluded that the suitability of any of the algorithms over the other two is dictated to a great extent by the use cases they are applied in. In an identical testing environment, YOLO-v3 outperforms SSD and Faster R-CNN, making it the best of the three algorithms.

Introduction

In recent times, industry has come to rely on computer vision for much of its work. Automation, robotics, the medical field, and surveillance sectors make extensive use of deep learning [ 1 ]. Deep learning has become the most talked-about technology owing to its results in applications involving language processing, object detection and image classification. Market forecasts predict outstanding growth over the coming years. The main reasons cited for this are the accessibility of both powerful Graphics Processing Units (GPUs) and large datasets, both of which are now readily available [ 1 ].

Image classification and detection are the most important pillars of computer vision, and a plethora of datasets is available to support them. Microsoft COCO is one such widely used benchmark dataset for object detection: a large-scale dataset available for image detection and classification [ 2 ].

This review article aims to make a comparative analysis of SSD, Faster R-CNN, and YOLO. The first algorithm in the comparison is SSD, which adds several feature layers to the end of the network and facilitates ease of detection [ 3 ]. Faster R-CNN is a unified, fast, and accurate method of object detection that uses a convolutional neural network. YOLO, developed by Joseph Redmon, offers an end-to-end detection network [ 3 ].

In this paper, the Microsoft COCO dataset is used as the common basis of the analysis and the same metrics are measured across all the implementations mentioned, making the respective performances of the three algorithms, which use different architectures, comparable to each other. The results obtained by comparing the effectiveness of these algorithms on the same dataset can help gain insight into the unique attributes of each algorithm, understand how they differ from one another and determine which method of object recognition is most effective for any given scenario.
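
The evaluation in the paper rests on standard detection metrics such as precision, recall, and the F1 score. As a hedged aside (the counts below are invented and are not taken from the paper), these metrics reduce to a few lines of Python once detections have been matched to ground-truth boxes:

```python
# Invented detection counts for one object class on a validation set.
true_positives = 87    # detections that match a ground-truth box
false_positives = 13   # detections with no matching ground-truth box
false_negatives = 21   # ground-truth boxes the detector missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.3f}")  # share of detections that are correct
print(f"Recall:    {recall:.3f}")     # share of ground-truth objects that were found
print(f"F1 score:  {f1:.3f}")         # harmonic mean of precision and recall
```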

Literature survey

Object detection has been an important topic of research in recent times. With powerful learning tools available, deeper features can be detected and studied with ease. This work is an attempt to compile information on the various object detection tools and algorithms used by different researchers so that a comparative analysis can be carried out and meaningful conclusions drawn about their application to object detection. The literature survey provides the background and context for our work.

The work done by Ross Girshick introduced the Fast R-CNN model as a method of object detection [ 3 ]. It applies the CNN method to the target detection field. Instead of the conventional sliding-window extraction procedure, the R-CNN family uses a region (window) extraction algorithm; in R-CNN, the deep convolutional network used for feature extraction and the support vector machines used for categorization are trained separately [ 4 ]. The Fast R-CNN method combines feature extraction and classification into a single framework [ 3 ]. Training is nine times faster in Fast R-CNN than in R-CNN. In the Faster R-CNN method, region proposal generation is folded into the network itself through a region proposal network (RPN) that shares features with the Fast R-CNN detector. The accuracy of Fast R-CNN and Faster R-CNN is the same. The research concludes that the method is a unified, deep learning-based object detection system that works at 5–7 fps (frames per second) [ 4 ]. Basic knowledge about R-CNN, Fast R-CNN and Faster R-CNN was acquired from this paper, and the training of the respective models was also inspired by it.

Another relevant work is that of Kim et al., who used a CNN with background subtraction to build a framework that detects and recognizes moving objects in CCTV (closed-circuit television) footage, applying a background subtraction algorithm to each frame [5]. An architecture similar to theirs was used in our work.

Another detection network is YOLO. Joseph Redmon et al. proposed You Only Look Once (YOLO), a one-pass convolutional neural network that predicts bounding box positions and classes for multiple candidates simultaneously, achieving end-to-end target detection. It formulates object detection as a regression problem: a single end-to-end network maps the original image directly to categories and positions [6]. The bounding box prediction and feature extraction of the YOLO architecture in our work were inspired by the technique discussed in this paper.

Tanvir Ahmad et al. proposed a modified method based on an advanced YOLO v1 network that optimizes the YOLO v1 loss function, adds a new inception model structure and a specialized pooling pyramid layer, and achieves better performance. This advanced application of YOLO is also end-to-end and is evaluated through extensive experiments on the PASCAL VOC (Visual Object Classes) dataset; the improved network shows high effectiveness [7]. The training of our YOLO model on PASCAL VOC followed the technique proposed in this paper.

Wei Liu et al. came up with a new method of detecting objects in images using a single deep neural network, which they named the Single Shot MultiBox Detector (SSD). According to the authors, SSD is a simple method that does not require object proposals: it completely eliminates the proposal generation step and the subsequent pixel and feature resampling stages, combining all computation in a single network. SSD is therefore easy to train and straightforward to integrate into systems, which makes detection easier. Its primary feature is the use of multiscale convolutional bounding box outputs attached to several feature maps [8]. The training and model analysis of the SSD model in our work were inspired by this paper.

Another paper presents an advanced variant of SSD. In it, the authors introduce Tiny SSD, a single-shot detection deep convolutional neural network aimed at real-time embedded object detection. It comprises a highly optimized, non-uniform Fire subnetwork stack together with a non-uniform subnetwork of SSD-based auxiliary convolutional feature layers. The standout feature of Tiny SSD is its size of 2.3 MB, which is even smaller than Tiny YOLO. The results show that Tiny SSD is well suited to embedded detection [9]. A similar SSD model was used for the purpose of comparison.

The paper by Pathak et al. describes the role of deep learning, and of CNNs in particular, in object detection, and assesses several deep learning techniques for object detection systems. It states that deep CNNs work on the principle of weight sharing and provides information on several crucial aspects of CNNs.

The features of CNNs highlighted in this paper are as follows [1] (a minimal sketch of these operations follows the list):

Convolution is an integral that involves the multiplication of two overlapping functions.

Feature maps are abstracted (downsampled) to reduce their spatial complexity.

The process is repeated with different filters to produce multiple feature maps.

CNNs utilize different types of pooling layers.
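The following NumPy sketch illustrates the first three points (illustrative code, not taken from the paper): a filter slides over an image, the overlapping values are multiplied and summed, and the resulting feature map is downsampled by max pooling.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a filter over the image and sum the element-wise products (valid convolution)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Downsample the feature map by taking the maximum of each size x size block."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

image = np.random.rand(8, 8)
edge_filter = np.array([[1.0, 0.0, -1.0]] * 3)   # a simple vertical-edge filter
feature_map = conv2d(image, edge_filter)          # 6 x 6 feature map
pooled = max_pool(feature_map)                    # 3 x 3 after 2 x 2 max pooling
print(feature_map.shape, pooled.shape)
```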

This paper was used as the basis for understanding convolutional neural networks and their role in deep learning.

In a recent research work, Chen et al. used anchor boxes for face detection together with a more exact regression loss function. They proposed a face detector termed YOLO-face, based on YOLOv3, that aims to resolve the problem of detecting faces at varying scales. The authors concluded that their algorithm outperformed previous YOLO versions and their variants [10]. YOLOv3 was used in our work for comparison with the other models.

In the research work by Fan et al., an improved system for pedestrian detection based on the SSD object detection model is proposed. In this multi-layered system, they introduced the Squeeze-and-Excitation module as an additional layer on top of the SSD model. The improved model employs self-learning, which further enhances accuracy for small-scale pedestrian detection; experiments on the INRIA dataset showed high accuracy [11]. This paper was used for the purpose of understanding the SSD model.

In a recent survey, Mittal et al. discussed Faster RCNN, Cascade RCNN, R-FCN, YOLO and its variants, SSD, RetinaNet, CornerNet, and Objects as Points as advanced deep-learning-based detectors. The paper provides a comprehensive summary of low-altitude datasets and the algorithms used on them [12]. Our comparison was done using COCO metrics, similar to the comparison in that paper, which also discusses several other comparison techniques that we considered in our work.

Artificial Intelligence (AI): It is a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [ 13 ].

Machine Learning (ML): It is the study of algorithms that improve automatically through experience [14]. ML algorithms build a model from sample (training) data and use it to make predictions or decisions without being explicitly programmed to do so.

Deep Learning (DL): It is the most used and most preferred approach to machine learning. It is inspired by the working of the biological brain—how individual neurons firing on receiving input only see a very small part of the total input/processed data. It has multiple layers. Upper layers build on the outputs from lower layers. Thus, the higher the layer, the more complex is the data it processes [ 15 ].

Higher layers identify more complex patterns: animals, faces, objects, skies, etc. A CNN consists of alternating convolutional and pooling layers with at least one fully connected layer at the end.
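As an illustration of this layout (not one of the networks studied in this paper), a minimal PyTorch sketch with alternating convolution and pooling layers followed by a fully connected classifier might look as follows; the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

# Minimal CNN: alternating convolution/pooling blocks followed by a fully connected layer.
# Layer sizes are illustrative assumptions, not those of SSD, Faster R-CNN, or YOLO.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns (edges, colour patches)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # more complex patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),           # assumes 32 x 32 input images
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyCNN()
dummy = torch.randn(1, 3, 32, 32)
print(model(dummy).shape)  # torch.Size([1, 10])
```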

Evolution of CNNs

Convolutional Neural Network (CNN): It is a type of artificial neural network used mainly to analyse images. It was inspired by the neurological experiments conducted by Hubel and Wiesel on the visual cortex [17], the primary region of the brain that processes visual sensory information. A CNN extracts features from images and detects patterns and structures in order to detect objects. Its distinctive feature is the presence of hidden convolutional layers, which apply filters to extract patterns: each filter moves over the image to generate an output, and different filters recognize different patterns. Filters in the initial layers recognize simple patterns; they become more complex through the layers, as the following timeline shows:

Origin (Late 1980s–1990s): The first popular CNN was LeNet-5, developed by LeCun et al. in 1998 after almost a decade of development [18]. Its purpose was to recognize handwritten digits, and it is credited with sparking the research and development of efficient CNNs in deep learning. Banks started using it in ATMs.

Stagnation (Early 2000s): The internal workings of CNNs were not yet understood during this period, and there was no dataset with the variety of images of Google's Open Images or Microsoft's COCO, so most CNNs focused only on optical character recognition (OCR). CNNs also required a great deal of computation, increasing operating cost, and the Support Vector Machine (SVM), a machine learning model, was showing better results than CNNs.

Revival (2006–2011): Ranzato et al. demonstrated that using max-pooling for feature extraction instead of the sub-sampling used earlier yields a significant improvement [19]. Researchers had also started using GPUs to accelerate the training of CNNs, and around the same time NVIDIA introduced the CUDA platform, which facilitated parallel processing and sped up CNN training and validation [20]. This re-sparked research. By 2010, large image datasets such as the Pattern Analysis, Statistical Modelling and Computational Learning Visual Object Classes (PASCAL VOC) dataset were available, removing yet another hurdle.

Rise (2012–2013): AlexNet was a major breakthrough in CNN accuracy. It achieved an error rate of just 15.3% in the 2012 ILSVRC challenge, while the second-place network had an error rate of 26.2% [21], so AlexNet was better by a large margin of 10.8 percentage points than any other network known at the time. AlexNet achieved this accuracy with a total of 8 layers [21], truly realizing 'deep' learning. This required greater computational power, but advances in GPU technology made it possible. The AlexNet paper, like the LeNet paper, is one of the most influential ever published on CNNs.

Architectural Innovations (2014–2020): The well-known and widely used VGG architecture was developed in 2014 [22]. RCNN, based on VGG like many later detectors, introduced the idea that objects are located in certain regions of the image, hence the name region-based CNN [23]. Improved versions, Fast RCNN [24] and Faster RCNN [3], came out in the subsequent years; both reduced computation time while maintaining the accuracy RCNN is known for. The Single Shot MultiBox Detector (SSD), also based on VGG, was developed around 2016 [8]. Another algorithm, You Only Look Once (YOLO), based on an architecture called DarkNet, was first published in 2016 [6]. It is in active development; its third version was released in 2018 [25].

Existing methodologies

Other object detection models such as YOLO or Faster R-CNN perform their operations at a much lower speed than SSD, making SSD a much more favourable object detection method in many settings.

Before the development of SSD, several attempts had been made to design faster detectors by modifying individual stages of the detection pipeline. However, any significant increase in speed from such modifications came at the cost of detection accuracy, so researchers concluded that, rather than altering an existing model, a fundamentally different object detection model was needed; hence the creation of SSD [8].

SSD does not resample pixels or features for bounding box hypotheses, yet it is as accurate as models that do. It is also quite straightforward compared with methods that require object proposals, because it completely eliminates proposal generation and the subsequent pixel and feature resampling stages, encompassing all computation in a single network. SSD is therefore simple to train and can easily be integrated into systems that perform detection as one of their functions [8].

Its architecture depends heavily on the generation of bounding boxes, known as default bounding boxes, and the extraction of feature maps. The network calculates loss by comparing the offsets of the predicted classes and default bounding boxes against the ground truth values of the training samples, using different filters at every iteration. Using the back-propagation algorithm and the calculated loss value, all parameters are updated. In this way SSD learns the filter structures that best identify object features and generalize over the training samples so as to minimize the loss value, resulting in high accuracy during the evaluation phase [26].

Analysis of the functions

SSD is built on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and, for each occurrence of an object in those boxes, a corresponding score. After score generation, non-maximum suppression is applied to produce the final detections. The early network layers are based on a standard architecture used for high-quality image classification (truncated before any classification layers), namely a VGG-16 network. Auxiliary structure, such as the conv6 layer, is added to the truncated base network to produce detections.

Extracting feature maps: SSD uses the VGG-16 architecture to extract feature maps because it performs very well on high-quality image classification. Auxiliary layers are added because they allow features to be extracted at multiple scales while reducing the size of the input at each successive layer [8]. For each cell in the image, a layer makes a certain number of predictions. Each prediction consists of a boundary box and a set of scores for all the classes that might be present in that box, including a score for no object at all. The algorithm makes a 'guess' about what is in the boundary box by choosing the class with the highest score. These scores are called 'confidence scores', and making such predictions is called 'MultiBox'. Figure 1 depicts the SSD model with the extra feature layers.

Convolutional predictors for object detection: Every feature layer produces a fixed set of predictions using convolutional filters. For a feature layer of size x × y with n channels, the basic element for generating the prediction variables of a potential detection is a 3 × 3 × n small kernel that produces either a confidence score for each class or a shape offset relative to the default bounding box coordinates (provided here by the COCO dataset) at each of the x × y locations [8].

Default boxes and aspect ratios: Every feature map cell is associated with a set of default bounding boxes, for multiple feature maps in the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, the network predicts the offsets relative to the default box shapes in the cell, together with the per-class scores that indicate which class of object is present inside each box. In detail, for each of the b boxes at a given location, s class scores and 4 offsets relative to the original default box shape are computed. This results in (s + 4) · b filters applied around every location in the feature map, yielding (s + 4) × b × x × y outputs for an x × y feature map [8].
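A small sketch of this arithmetic; the feature map size, number of default boxes, and number of classes below are illustrative assumptions rather than the exact SSD configuration:

```python
# Number of SSD predictor filters and outputs for one feature map,
# following the (s + 4) * b * x * y expression above.
s = 80         # number of classes (e.g., COCO)
b = 6          # default boxes per feature map cell
x, y = 19, 19  # feature map resolution (assumed)

filters_per_location = (s + 4) * b
total_outputs = filters_per_location * x * y
print(filters_per_location)  # 504 filters applied around each location
print(total_outputs)         # 181944 outputs for this 19 x 19 feature map
```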

figure 1

Deep Learning Layers illustration [ 15 ]

SSD Training Process

Matching process: All SSD predictions are divided into two types, negative matches and positive matches. SSD uses only the positive matches to calculate the localization cost, i.e., the misalignment between the predicted boundary box and the default box. A match is positive only if the corresponding default boundary box has an IoU greater than 0.5 with the ground truth; in any other case it is negative. IoU stands for 'intersection over union', the ratio of the intersected area to the combined area of two regions; it is also referred to as the Jaccard index. Using this condition makes the learning process much easier [8].
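A minimal sketch of how the IoU (Jaccard index) of two boxes can be computed, assuming boxes are given as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A default box is a positive match only if its IoU with the ground truth exceeds 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~ 0.14 -> negative match
```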

Hard negative mining: After the matching step, almost all of the default boxes are negatives, especially when the number of possible default boxes is large. This causes a large imbalance between positive and negative training examples. Rather than using all of the negative examples, SSD sorts them by confidence loss and keeps only the highest-loss ones, so that at any point the ratio of negatives to positives is at most 3:1. This leads to faster optimization and more stable training [8].
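A NumPy sketch of this selection step, assuming per-box confidence losses and a positive-match mask are already available (illustrative, not SSD's actual implementation):

```python
import numpy as np

def hard_negative_mining(conf_loss, is_positive, neg_pos_ratio=3):
    """Keep all positives and only the highest-loss negatives, at most ratio x positives."""
    conf_loss = np.asarray(conf_loss, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    num_neg_keep = neg_pos_ratio * int(is_positive.sum())

    neg_idx = np.where(~is_positive)[0]
    # Sort negatives by confidence loss, highest first, and keep only the top ones.
    hardest_neg = neg_idx[np.argsort(-conf_loss[neg_idx])][:num_neg_keep]

    keep = np.zeros_like(is_positive)
    keep[is_positive] = True
    keep[hardest_neg] = True
    return keep

losses = [0.1, 2.3, 0.4, 1.7, 0.05, 0.9]
positives = [True, False, False, False, False, False]  # one positive -> keep 3 hardest negatives
print(hard_negative_mining(losses, positives))
```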

Data augmentation: This is crucial for increasing accuracy. Several data augmentation techniques may be employed, such as colour distortion, flipping, and cropping. To handle a variety of object sizes and shapes, each training image is randomly sampled using one of the options listed below (a sketch of the IoU-constrained sampling follows the list) [8]:

We use the original,

Sample a patch with IoU of 0.1, 0.3, 0.5, 0.7 or 0.9,

Sample a patch randomly.
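As referenced above, a rough sketch of the IoU-constrained option: a random crop is sampled until its overlap with a ground-truth box reaches a chosen minimum. The image size, box, and thresholds are illustrative assumptions, and the iou() helper from the earlier sketch is reused:

```python
import random

def sample_patch(img_w, img_h, gt_box, min_iou, max_tries=50):
    """Sample a random crop (x1, y1, x2, y2) whose IoU with gt_box is at least min_iou."""
    for _ in range(max_tries):
        w = random.uniform(0.3, 1.0) * img_w
        h = random.uniform(0.3, 1.0) * img_h
        x1 = random.uniform(0, img_w - w)
        y1 = random.uniform(0, img_h - h)
        patch = (x1, y1, x1 + w, y1 + h)
        if iou(patch, gt_box) >= min_iou:   # reuses the iou() helper sketched earlier
            return patch
    return (0, 0, img_w, img_h)             # fall back to the original image

print(sample_patch(300, 300, gt_box=(50, 50, 150, 150), min_iou=0.5))
```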

Final detection: The results are generated by performing NMS on the multi-scale refined bounding boxes. Using hard negative mining, data augmentation, and a number of other methods, SSD achieves considerably higher accuracy than Faster R-CNN on the PASCAL VOC and COCO datasets while being three times faster [26]. SSD300, where the input image size is 300 × 300, runs at 59 FPS and is more efficient and accurate than YOLO. However, SSD is less effective at detecting smaller objects; this can be mitigated by using a stronger feature extractor backbone (e.g., ResNet-101), adding deconvolution layers with skip connections to create additional large-scale context, and designing a better network structure [27].

Complexity analysis

For most algorithms, time complexity depends on the size of the input and can be expressed in big-O notation. For deep learning models, however, time complexity is evaluated in terms of the total time taken to train the model and the inference time when the model is run on specific hardware (Fig. 2).

figure 2

Evolution of CNNs from 1979 through 2018 [ 16 ]

Deep learning models carry out millions of calculations, which can be computationally expensive; however, most of these calculations are performed in parallel by the thousands of identical neurons in each layer of the artificial neural network. Owing to this parallel nature, training an SSD model on an Nvidia GeForce GTX 1070i GPU has been observed to reduce training time by a factor of ten [28].

When it comes to time complexity, matrix multiplication in the forward pass of the base CNN takes up the most time. The total number of multiplications depends on the number of layers in the CNN, together with more specific details such as the number of neurons per layer, the number of filters and their sizes, the size of the feature maps, and the resolution of the image. The ReLU activation function used at each layer adds a comparatively small per-neuron cost. Taking all these factors into account, we can determine the time complexity of the forward pass through the base CNN:
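In the notation defined below, and as a reconstruction consistent with those definitions rather than a verbatim copy of the original equation, this cost can be written as

O( Σ_{b=1..B} x_{b−1} · h² · x_b · s_b² )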

Here, b denotes the index of the CNN layer, B is the total number of CNN layers, x_b is the number of filters in the b-th layer, h is the filter width and height, x_c is the number of neurons, x_{b−1} is the number of input channels of the b-th layer, and s_b is the size of the output feature map.

It should be noted that around five to ten percent of the training time is taken up by operations such as dropout, regression, batch normalisation, and classification.

As for SSD's accuracy, it is measured by mean average precision (mAP), which is the average over all classes of the AP, i.e., the area under each class's precision-recall curve. A higher mAP indicates a more accurate model [28].

Faster R-CNN

R-CNN stands for Region-based Convolutional Neural Networks. This method combines region proposals for object segmentation and high capacity CNNs for object detection [ 28 ].

The algorithm of the original R-CNN technique is as follows: [ 29 ]

Using a Selective Search Algorithm, several candidate region proposals are extracted from the input image. In this algorithm, numerous candidate regions are generated in initial sub-segmentation. Then, regions which are similar are combined to form bigger regions using a greedy algorithm. These regions make up the final region proposals.

The CNN component warps the proposals and extracts distinct features as a vector output.

The features which are extracted are fed into an SVM (Support Vector Machine) for recognizing objects of interest in the proposal.

Figure 4 given below explains the features and working of R-CNN.

This technique was plagued by several drawbacks. The requirement to classify ~2000 region proposals per image makes training the CNN very time-consuming and makes real-time implementation impossible, as each test image takes close to 47 seconds to process.

Furthermore, no learning can take place in the proposal stage because the Selective Search Algorithm is a fixed algorithm, which can result in non-ideal candidate region proposals being generated [29].

Fast R-CNN is an object detection algorithm that solves some of the drawbacks of R-CNN. It uses an approach similar to its predecessor's, but instead of feeding region proposals to the CNN, the CNN processes the image itself to create a convolutional feature map, from which region proposals are then determined and warped. An RoI (Region of Interest) pooling layer reshapes the warped regions to a predefined size so that a fully connected layer can accept them, and the region class is then predicted from the RoI vector with the help of a SoftMax layer [24].

Fast R-CNN is faster than its predecessor because ~2000 proposals do not have to be fed to the CNN on every execution: the convolution operation that generates the feature map is done only once per image [24]. Fig. 3 below describes the features and working of Fast RCNN.

figure 3

SSD model [ 8 ]

This algorithm shows a significant reduction in the time required for both training and testing compared with R-CNN. However, it was observed that computing the region proposals significantly bottlenecks the algorithm, limiting its performance [3].

Both Fast R-CNN and its predecessor used Selective Search to determine the region proposals. Because this is a very time-consuming algorithm, Faster R-CNN eliminated it and instead let the proposals be learned by the network. As in Fast R-CNN, a convolutional feature map is obtained from the image, but a separate network replaces Selective Search to predict proposals. These proposals are then reshaped and classified using RoI (Region of Interest) pooling. Refer to Fig. 4 for the working of Faster R-CNN.

figure 4

R-CNN model [ 15 ]

Faster R-CNN offers an improvement over its predecessors so significant that it is now capable of being implemented for real-time object detection.

Architecture of faster R-CNN

The original implementation of the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm was experimented on two convolutional network architectures: the ZF (Zeiler and Fergus) model, with 5 convolutional layers shared with the Fast R-CNN network, and the VGG-16 (Simonyan and Zisserman) model, with 13 convolutional layers shared [3].

The ZF model is based on an earlier model of a Convolutional Network (made by Krizhevsky, Sutskever and Hinton) [ 30 ] . This model consisted of eight layers, of which five were convolutional and the remaining three were fully connected [ 21 ] .

This architecture exhibited a few problems. The first-layer filters captured mostly extremely high- and low-frequency information, with little coverage of the mid frequencies, and the large stride of 4 used in the first layer caused aliasing artifacts in the second layer. The ZF model fixed these issues by reducing the filter size of the first layer and reducing the convolution stride to 2, allowing it to retain more information in the first and second layers and improving classification performance [30].

Both the Region-based Convolutional Neural Network (RCNN) and Fast-RCNN use Selective Search, a greedy algorithm, and greedy algorithms do not always return the best result [31]. RCNN also runs the CNN separately on each of the roughly 2000 proposed regions of the image, whereas Fast-RCNN computes the convolutional features for the whole image once and then extracts all the regions, which reduces time complexity by a large factor [3]. Faster RCNN (FRCNN) removes the final bottleneck, Selective Search itself, by using a Region Proposal Network (RPN) instead. The RPN predicts proposals over an n × n grid of the feature map and needs to run far fewer times than Selective Search [3].

As shown in the diagram above, FRCNN consists of Deep Fully Convolutional Network (DFCN), Region Proposal Network, ROI pooling, Fully Connected (FC) networks, Bounding Box Regressor and Classifier.

We consider the DFCN to be ZF-5 for consistency of calculation [30]. First, a feature map M of dimensions 256 × n × n is extracted from the input image P [33]; it is then fed to the RPN and the ROI pooling layer.

RPN: There are k anchors for each point on M, so the total number of anchors is n × n × k. Anchors are ranked by score, and about 2000 are retained after non-maximum suppression [3]. The complexity of this pairwise suppression step comes out to be O(N²/2).
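A minimal NumPy sketch of this ranking-and-suppression step (the box format, scores, and IoU threshold are illustrative assumptions):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.7, top_k=2000):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes; returns kept indices."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    order = np.argsort(-scores)          # rank anchors by score, best first
    keep = []
    while order.size > 0 and len(keep) < top_k:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        overlap = inter / (area_i + areas - inter)
        order = order[1:][overlap <= iou_thresh]  # drop anchors overlapping the kept one too much
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 60, 60)]
print(nms(boxes, scores=[0.9, 0.8, 0.7]))  # [0, 2]: the second box is suppressed
```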

ROI: Anchors are divided into an H × W grid of sub-windows based on M, and the output grid is obtained by max-pooling the values in the corresponding sub-windows. ROI pooling is a special case of the spatial pyramid pooling layer used in SPP-net, with just one pyramid level [24]; hence the complexity becomes O(1).
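For illustration, torchvision ships an ROI pooling operator; the feature map size, spatial scale, and region below are assumptions:

```python
import torch
from torchvision.ops import roi_pool

# Assumed 256-channel feature map of spatial size 38 x 38 for one image.
feature_map = torch.randn(1, 256, 38, 38)

# One region of interest in image coordinates: (batch_index, x1, y1, x2, y2).
rois = torch.tensor([[0.0, 32.0, 32.0, 256.0, 256.0]])

# Pool each region into a fixed 7 x 7 grid; spatial_scale maps image coords to feature map coords.
pooled = roi_pool(feature_map, rois, output_size=(7, 7), spatial_scale=1.0 / 16)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```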

In modern times, YOLO (You Only Look Once) is one of the most precise and accurate object detection algorithms available. It is built on a customized architecture named Darknet [25]. The first version was inspired by GoogLeNet: it downsamples the image and predicts from a tensor generated by a procedure similar to the Region of Interest pooling used in the Faster R-CNN network, which reduces the number of individual computations and speeds up the analysis. The following generation used an architecture with just 30 convolutional layers, consisting of the 19 layers of DarkNet-19 plus an extra 11 layers for detecting objects in natural context, since the COCO dataset and metrics were used. It provided more precise detection at good speed, although it struggled with pictures of small objects and low resolution. Version 3 has been the most accurate version of YOLO and is widely used because of its high precision; its deeper, multi-layer architecture has made detection more precise [26].

YOLOv3 makes use of the latest Darknet features, with 53 layers, and has been trained on one of the most reliable datasets, ImageNet. The layers come from the convolutional architecture Darknet-53; for detection, these 53 layers replace the pre-existing 19, and the enhanced architecture was trained on PASCAL VOC. Even with the additional layers, the architecture maintains one of the best response times for the accuracy it offers. It is also very helpful for analysing live video feeds because of its fast sampling and detection pipeline. This version is one of the best enhancements of machine learning (ML) with neural networks. Previous versions did not work well with images of small objects, but the updates in v3 have made it useful even for analysing satellite imagery, including for the defence departments of some countries. The architecture makes predictions at three different scales, which makes it more precise; the process is slightly slower but state-of-the-art. For an understanding of the framework, refer to Fig. 5 below.

figure 5

Fast R-CNN [ 16 ]

Feature extraction and analysis [ 34 ]

1. Forecasting: This model uses anchor boxes of different widths and heights to produce the weights and frames that establish a strong foundation. The network determines objectness and box allocation independently in a single pass. YOLOv3 uses logistic regression to predict the objectness score, which is assigned to the prior box that best overlaps the ground-truth object in the picture established by pre-training models [35]. Each ground truth is assigned a single bounding box prior, and any error in this step causes mistakes both in the allocation and accuracy of these boxes and in the overall detection. The bounding box forecasting is depicted in the equations given below and in Fig. 6.
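For reference, the standard YOLOv3 parameterisation expresses the predicted box (b_x, b_y, b_w, b_h) in terms of the raw network outputs (t_x, t_y, t_w, t_h), the offset (c_x, c_y) of the grid cell, and the prior box dimensions (p_w, p_h):

b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)

where σ is the logistic (sigmoid) function.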

figure 6

Faster R-CNN [ 3 ]

Equations for bounding box forecasting [ 34 ]

2. Class prediction: Instead of a softmax, YOLOv3 uses independent logistic classifiers for each class, allowing multi-label classification with custom, non-exclusive tags (for example, 'woman' and 'person' are not mutually exclusive). During training it uses a binary cross-entropy loss for the class predictions rather than a softmax, which reduces complexity [36].

3. Predictions: Bounding boxes are predicted at three distinct scales and sizes, in combination with the DarkNet-53 feature extractor; the final layers perform detection and classification into object classes. Three boxes are taken at each scale, so for the COCO dataset the output tensor encodes, for each box, 4 coordinates, 1 objectness score, and 80 class predictions. These multi-scale features follow a classic encoder-decoder design similar to that introduced in the Single Shot Detector. K-means clustering is also used to find the best bounding box priors; for the COCO dataset, dimensions such as 10 × 13 and 62 × 45 are used, with 9 distinct prior dimensions in total, including the aforementioned.
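A quick sketch of the resulting output size for COCO; the 13 × 13 grid corresponds to a 416 × 416 input at the coarsest scale and is an illustrative assumption:

```python
# YOLOv3 output tensor size at one detection scale, for the COCO dataset.
num_classes = 80
boxes_per_cell = 3
grid = 13                                      # assumed: 416 x 416 input, stride 32

channels_per_box = 4 + 1 + num_classes         # 4 box coords + 1 objectness + class scores
channels = boxes_per_cell * channels_per_box   # 255 output channels at this scale
print(channels)                                # 255
print(grid * grid * boxes_per_cell)            # 507 boxes predicted at this scale
```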

4. DarkNet-53, the feature extractor: YOLOv2 used DarkNet-19, but the recently modified YOLO model uses Darknet-53, where 53 refers to its 53 convolutional layers. Both speed and accuracy are enhanced in Darknet-53, making it about 1.5 times quicker. When compared with ResNet-152, it achieves almost the same accuracy and precision but is twice as fast [37]. Fig. 7 shows the YOLO model.

figure 7

CNN of the Krizhevsky model [ 21 ]

The YOLO network is based on a systematic division of the given image into a grid; the grids come in three sizes, described later. Each grid cell is treated as a separate region by the algorithm and undergoes further analysis. YOLO utilizes boundaries called bounding boxes, which serve as anchors for the analysis of an image. Only a small number of these boxes are accepted as results; thousands are discarded because of low probability scores and treated as false positives. These boxes are the manifestation of the rigorous breaking down of an image into a grid of cells [38, 39, 40].

To determine suitable anchor box sizes, YOLO uses K-means clustering over the boxes in the training data. These prior boxes serve as guidelines for the algorithm: having received them, it looks for objects of similar shape and size. YOLO uses 3 anchor boxes, so each grid cell puts out 3 boxes, and further predictions and analysis are based on these 3 anchors. Some studies use 2 anchor boxes, leading to 2 boxes per grid cell [39].
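A rough sketch of this clustering step using scikit-learn on box widths and heights; the box data, and the use of Euclidean distance instead of the IoU-based distance of the original YOLO papers, are simplifying assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Widths and heights of ground-truth boxes from the training data (illustrative values).
wh = np.array([[12, 15], [60, 44], [110, 95], [10, 14], [58, 47], [115, 90],
               [30, 60], [33, 58], [150, 200]], dtype=float)

# Cluster into 3 anchor shapes (YOLOv3 uses 9 in total, 3 per detection scale).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(wh)
anchors = kmeans.cluster_centers_
print(np.round(anchors))  # each row is an anchor (width, height) prior
```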

In Fig. 8, the anchor box is shown as the dashed box and the forecast of the ground truth, i.e., the predicted bounding box, is the box with highlighted borders. Several input image sizes are in common use, each with its own grid sizes. For our model we have taken the standard 448 × 448 image size. Other sizes used for analysis are 416 × 416 and 608 × 608, with grid sizes of 13 × 13, 26 × 26 & 52 × 52 and 19 × 19, 38 × 38 & 76 × 76 respectively [40, 41].

figure 8

Bounding box forecasting [ 34 ]

In the first step, the image is resized to 448 × 448 and then divided into a 7 × 7 grid, which means each grid cell is 64 × 64 pixels. Every grid cell produces a certain number of bounding boxes; the number may vary from version to version (there are multiple configurations in YOLOv3). For our model we use 2 boxes per grid cell. Each bounding box has 4 coordinates, namely x_center, y_center, width, and height, along with a corresponding confidence value [32].

The K-means clustering used for the anchors has, in the worst case, exponential time complexity O(n^(kd)), where k is the number of clusters and d is the dimension of the data. After thorough optimisation, the creators have made YOLOv3 the fastest image detection algorithm among the ones discussed in this paper.

Microsoft COCO

In the search for the best combination of algorithm and dataset, researchers have used top-rated deep learning architectures and datasets to reach the best possible precision and accuracy. The most commonly used datasets are PASCAL VOC and Microsoft COCO; for this review, COCO is used both as the dataset and as the source of evaluation metrics. Different analyses tweak and calibrate the base networks and adjust the software, which leads not only to better precision but also to improved accuracy, speed, and localisation performance [26].

For object detection with computationally costly architectures and algorithms such as RCNN and SPP-NET (Spatial Pyramid Pooling Network), rich datasets containing varied objects and images of different dimensions have become a necessity, and in live video feed monitoring the cost of detection can become very high. Recent advances in deep learning architectures have allowed algorithms like YOLO and SSD to detect objects with a single neural network (NN), and the introduction of these architectures has increased the competition between techniques [26]. COCO has recently emerged as the most widely used dataset for training and classification, and further developments have made it extensible with additional classes [2].

Furthermore, according to some research papers [2], COCO compares favourably with other popular, widely used datasets, namely the Pattern Analysis, Statistical Modelling and Computational Learning Visual Object Classes (PASCAL VOC), ImageNet, and SUN (Scene Understanding). These datasets vary greatly in size, categories, and types. ImageNet targets a wide range of categories, many of them fine-grained; SUN takes a more modular approach in which the regions of interest are based on how frequently they occur in the dataset; and PASCAL VOC is similar in spirit to COCO but uses a range of images taken from the environment and nature. Microsoft Common Objects in Context is built for the detection and classification of objects in their classic natural context [2].

Annotation pipeline [ 2 ]

As seen in the following Fig. 9 an annotation pipeline explains the identification and categorization of a particular image.

figure 9

The ZF model [ 30 ]

This type of annotation pipeline gives object detection algorithms a better perspective, and training algorithms on such diverse images benefits from advanced concepts like crowd scheduling and visual segmentation. Fig. 10 gives the detailed categories available in MS COCO. The 11 super-categories are Person and Accessories, Animal, Vehicle, Outdoor Objects, Sports, Kitchenware, Food, Furniture, Appliance, Electronics, and Indoor Objects [42].

figure 10

FRCNN Architecture [ 32 ]

Pascal VOC (Visual Object Classes)

The challenge

The Pascal VOC (Visual Object Classes) Challenges were a series of challenges that ran from 2005 to 2012, consisting of two components: a public dataset containing images from the Flickr website, their annotations, and evaluation software; and a yearly event comprising a competition and a workshop. The main objectives of the challenge were classification, detection, and segmentation of the images, with two additional challenges of action classification and person layout [43].

The Datasets

The datasets used in the Pascal VOC Challenges consist of two subsets: a trainval dataset, further divided into separate training and validation sets, and a test dataset. For the classification and detection challenges, all images are fully annotated with bounding boxes for every instance of the object classes considered in the challenge (see Fig. 15) [43].

Along with these annotations, attributes such as viewpoint, truncation, and difficulty were specified, and the annotations aimed to be consistent, accurate, and exhaustive; some of these attributes were added in later editions of the challenge [44].

Experimental set up

The hardware comprised 8 GB of DDR5 RAM, a 1 TB hard disk drive, a 256 GB solid-state drive, and an 8th-generation Intel Core i5 processor clocked at 1.8 GHz (Figs. 11, 12, 13, 14, 15, 16, 17, 18, 19, and 20).

figure 11

YOLO architecture [ 26 ]

figure 12

YOLO model ConvNet [ 37 ]

figure 13

Categories of images [ 42 ]

figure 15

The classes of objects considered in the challenge [ 43 ]

figure 16

Statistics of the VOC2012 datasets [ 43 ]

figure 17

Graph for SSD [ 26 ]

figure 18

Graph for faster RCNN [ 26 ]

figure 19

Graph for YOLO [ 26 ]

figure 20

Compared with YOLOv3, the new version of AP (accuracy) and FPS (frame rate per second) are improved by 10% and 12%, respectively [ 46 ]

The software configuration used was Google Colab with the built-in Python 3 Google Compute Engine backend. It provides 12.72 GB of RAM, of which 3.54 GB was used on average, and 107.77 GB of disk space, of which 74.41 GB was used, including the training and validation datasets. The hardware accelerator used was the GPU offered by Google Colab (Tables 1 and 2).

Results and discussions

Two performance metrics are applied to the object detection models for testing: average precision and the F1 score. The predicted bounding boxes are compared with the ground truth bounding boxes according to IoU (intersection over union). True positives, false negatives, and false positives are then defined and used to calculate precision and recall, which in turn give the F1 score. The formulae are as follows [42]:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 score = 2 × Precision × Recall / (Precision + Recall)
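A small sketch of these formulae in code; the counts in the example are illustrative:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 score from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: 80 correct detections, 20 false detections, 10 missed objects.
print(precision_recall_f1(tp=80, fp=20, fn=10))  # (0.8, 0.888..., 0.842...)
```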

Apart from these two, the performance of the models is also measured using the metrics provided by the COCO metrics API [42].

Using all of these, the outcomes of the three algorithms were compared in order to assess their relative performance. The outcomes were as follows:

Results comparison

The following limitations were observed in the three models:

When it comes to smaller objects, SSD performs much worse than Faster R-CNN. The main reason for this drawback is that in SSD the higher-resolution layers are responsible for detecting small objects, but these layers contain lower-level features such as colour patches or edges and are therefore less useful for classification, reducing the overall performance of SSD [8].

Another limitation, which can be inferred from the complexity of SSD's data augmentation, is that SSD requires a large amount of training data. This can be quite expensive and time-consuming depending on the application [8].

Faster R-CNN's accuracy comes at the cost of time complexity: it is significantly slower than the likes of YOLO.

Despite its improvements over RCNN and Fast RCNN, it still requires multiple passes over a single image, unlike YOLO [3].

FRCNN has many components: the convolutional network, the Regions of Interest (ROI) pooling layer, and the Region Proposal Network (RPN). Any of these can become a bottleneck for the others [3].

YOLOv3, with the introduction of Darknet-53, was one of the best modifications made to an object detection system, and the update was received very well by critics and industry professionals. It nonetheless had shortcomings: although YOLOv3 is still considered a veteran, the complexity analysis revealed flaws and a lack of optimal solutions for the loss function. These were later rectified in an optimized model, which was then used and tested for functionality enhancements [45].

A newer version of a given piece of software is often the best lens for analysing the faults of the older one. The YOLOv4 paper shows that version 3 tended to fail when an image contained multiple features to be analysed that were not the main subject of the picture. Lack of accuracy was always an issue for smaller images; using version 3 to analyse small images was of little use because the accuracy was around 16% (as our data also shows). Another point is the use of Darknet-53: YOLOv4 introduces CSPDarknet-53, which uses only about 66% of the parameters that version 3 used yet gives better results, with enhanced speed and accuracy [46].

The precision-recall curves plotted using the COCO metrics API allowed us to draw proper conclusions about the efficiency with which these three models perform object detection. Graphs were plotted for each model for different object sizes.

The area shaded in orange indicates the precision-recall curve without any errors, the area shaded in violet indicates objects that were falsely detected, and the area shaded in blue indicates localisation errors (Loc). Lastly, the white areas under the precision-recall curve indicate an IoU value greater than 0.75 and the grey areas indicate an IoU value greater than 0.5.

From the graphs of the three models, it is evident that the region-based detectors, Faster R-CNN and SSD, both have lower accuracy, as shown by their relatively larger violet areas. Between the two, Faster R-CNN is more accurate than SSD, while SSD is more efficient for real-time processing applications owing to its higher speed. YOLO is clearly the most efficient of all, as is evident from its almost non-existent violet regions.

This review article compared the latest and most advanced CNN-based object detection algorithms. Without object detection, it would be impossible to analyse the hundreds of thousands of images uploaded to the internet every day [42], and technologies like self-driving vehicles that depend on real-time analysis would also be impossible to realize. All networks were trained with Microsoft's open-source COCO dataset to ensure a homogeneous baseline. YOLOv3 was found to be the fastest, with SSD following closely and Faster RCNN coming last. The use case, however, influences which algorithm to pick: for a relatively small dataset where real-time results are not needed, Faster RCNN is the best choice; YOLOv3 is the one to pick for analysing a live video feed; and SSD provides a good balance between speed and accuracy. Additionally, YOLOv3 is the most recently released of the three and is actively being contributed to by a vast open-source community. In conclusion, of the three object detection convolutional neural networks analysed, YOLOv3 shows the best overall performance, a result similar to what some previous reports have obtained.

A great deal of work can still be done in this field. Every year, new algorithms or updates to existing ones are published, and each application area, such as aviation, autonomous vehicles (aerial and terrestrial), and industrial machinery, is suited to different algorithms.

These subjects can be explored in detail in the future.

Availability of data and materials

The COCO dataset used in the paper is available from the website https://cocodataset.org/#explore .

Abbreviations

Faster R-CNN: Faster Region-based Convolutional Neural Network

SSD: Single Shot Detector

YOLOv3: You Only Look Once version 3

COCO: Common Objects in Context

VGG-16: Visual Geometry Group 16

Pathak AR, Pandey M, Rautaray S. Application of deep learning for object detection. Procedia Comput Sci. 2018;132:1706–17.


Palop JJ, Mucke L, Roberson ED. Quantifying biomarkers of cognitive dysfunction and neuronal network hyperexcitability in mouse models of Alzheimer’s disease: depletion of calcium-dependent proteins and inhibitory hippocampal remodeling. In: Alzheimer's Disease and Frontotemporal Dementia. Humana Press, Totowa, NJ; 2010, p. 245–262.

Ren S, He K, Girshick R, Sun J. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2016;39(6):1137–49.

Ding S, Zhao K. Research on daily objects detection based on deep neural network. IOP Conf Ser Mater Sci Eng. 2018;322(6):062024.

Kim C, Lee J, Han T, Kim YM. A hybrid framework combining background subtraction and deep neural networks for rapid person detection. J Big Data. 2018;5(1):22.

Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016, pp. 779–788.

Ahmad T, Ma Y, Yahya M, Ahmad B, Nazir S. Object detection through modified YOLO neural network. Scientific Programming, 2020.

Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. Ssd: single shot multibox detector. In: European conference on computer vision. Cham: Springer; 2016, p. 21–37.

Womg A, Shafiee MJ, Li F, Chwyl B. Tiny SSD: a tiny single-shot detection deep convolutional neural network for real-time embedded object detection. In: 2018 15th conference on computer and robot vision (CRV). IEEE; 2018, p. 95–101.

Chen W, Huang H, Peng S, Zhou C, Zhang C. YOLO-face: a real-time face detector. The Visual Computer 2020:1–9.

Fan D, Liu D, Chi W, Liu X, Li Y. Improved SSD-based multi-scale pedestrian detection algorithm. In: Advances in 3D image and graphics representation, analysis, computing and information technology. Springer, Singapore; 2020, p. 109–118.

Mittal P, Sharma A, Singh R. Deep learning-based object detection in low-altitude UAV datasets: a survey. Image and Vision Computing 2020:104046.

Kaplan A, Haenlein M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz. 2019;62(1):15–25.

Mitchell T. Machine learning. New York: McGraw Hill; 1997.


Schulz H, Behnke S. Deep learning. KI-Künstliche Intelligenz. 2012;26(4):357–63.

Khan A, Sohail A, Zahoora U, Qureshi AS. A survey of the recent architectures of deep convolutional neural networks. Artif Intell Rev. 2020;53(8):5455–516.

Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol. 1962;160(1):106–54.

LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–324.

Ranzato MA, Huang FJ, Boureau YL, LeCun Y. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In: 2007 IEEE conference on computer vision and pattern recognition. IEEE; 2007, p. 1–8.

Nickolls J, Buck I, Garland M, Skadron K. Scalable parallel programming with cuda: Is cuda the parallel programming model that application developers have been waiting for? Queue. 2008;6(2):40–53.

Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012;25:1097–105.


Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556; 2014.

Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2014, p. 580–7.

Girshick R. Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision; 2015, p. 1440–8.

Redmon J, Farhadi A. Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767; 2018.

Alganci U, Soydas M, Sertel E. Comparative research on deep learning approaches for airplane detection from very high-resolution satellite images. Remote Sensing. 2020;12(3):458.

Zhao ZQ, Zheng P, Xu ST, Wu X. Object detection with deep learning: a review. IEEE Trans Neural Netw Learn Syst. 2019;30(11):3212–32.

Reza ZN. Real-time automated weld quality analysis from ultrasonic B-scan using deep learning. Doctoral dissertation, University of Windsor, Canada; 2019.

Shen X, Wu Y. A unified approach to salient object detection via low rank matrix recovery. In: 2012 IEEE conference on computer vision and pattern recognition. IEEE; 2012, p. 853–60.

Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: European conference on computer vision. Cham: Springer, 2014, p. 818–33.

Uijlings JR, Van De Sande KE, Gevers T, Smeulders AW. Selective search for object recognition. Int J Comput Vision. 2013;104(2):154–71. https://doi.org/10.1007/s11263-013-0620-5 .

Wu J. Complexity and accuracy analysis of common artificial neural networks on pedestrian detection. In MATEC Web of Conferences, Vol. 232. EDP Science; 2018, p. 01003.

He K, Zhang X, Ren S, Sun J. Identity mappings in deep residual networks. In: European conference on computer vision. Cham: Springer; 2016, p. 630–45.

Xu D, Wu Y. Improved YOLO-V3 with DenseNet for multi-scale remote sensing target detection. Sensors. 2020;20(15):4276.

Butt UA, Mehmood M, Shah SBH, Amin R, Shaukat MW, Raza SM, Piran M. A review of machine learning algorithms for cloud computing security. Electronics. 2020;9(9):1379.

Ketkar N, Santana E. Deep learning with Python, vol. 1. Berkeley: Apress; 2017.


Jiang R, Lin Q, Qu S. Let blind people see: real-time visual recognition with results converted to 3D audio. Report No. 218, Stanford University, Stanford, USA; 2016.

Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Rabinovich A. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015, p. 1–9.

Zhao L, Li S. Object detection algorithm based on improved YOLOv3. Electronics. 2020;9(3):537.

Syed NR. A PyTorch implementation of YOLOv3 for real time object detection (Part 1). [Internet] [Updated Jun 30 2020]. https://nrsyed.com/2020/04/28/a-pytorch-implementation-of-yolov3-for-real-time-object-detection-part-1/ . Accessed 02 Feb 2021.

Ethan Yanjia Li. Dive really deep into YOLOv3: a beginner’s guide. [Internet][Posted on December 30 2019] Available at https://yanjia.li/dive-really-deep-into-yolo-v3-a-beginners-guide/ . Accessed 31 Jan 2021.

COCO. [Internet]. https://cocodataset.org/#explore . Accessed 28 Oct 2020.

Everingham M, Eslami SA, Van Gool L, Williams CK, Winn J, Zisserman A. The pascal visual object classes challenge: a retrospective. Int J Comput Vision. 2015;111(1):98–136.

Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A. The pascal visual object classes (voc) challenge. Int J Comput Vision. 2010;88(2):303–38.

Huang YQ, Zheng JC, Sun SD, Yang CF, Liu J. Optimized YOLOv3 algorithm and its application in traffic flow detections. Appl Sci. 2020;10(9):3079.

Bochkovskiy A, Wang CY, Liao HYM. Yolov4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020.


Acknowledgements

Not applicable.

Author information

Authors and affiliations.

Vellore Institute of Technology (Chennai Campus), Kelambakkam - Vandalur Rd, Rajan Nagar, Chennai, Tamil Nadu, 600127, India

Shrey Srivastava, Amit Vishvas Divekar, Chandu Anilkumar, Ishika Naik, Ved Kulkarni & V. Pattabiraman


Contributions

SS: Research and Implementation of YOLO Algorithm. Comparative Analysis. AVD: Research and Implementation of Faster RCNN Algorithm. Comparative Analysis. CA: Research and Implementation on Faster RCNN Algorithm. Comparative Analysis. IN: Research and Implementation of SSD Algorithm. Comparative Analysis. VK: Research and Implementation on SSD Algorithm. Comparative Analysis. VP: Verification of results obtained through implementations. Approval of final manuscript.

Corresponding author

Correspondence to Shrey Srivastava.

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Srivastava, S., Divekar, A.V., Anilkumar, C. et al. Comparative analysis of deep learning image detection algorithms. J Big Data 8, 66 (2021). https://doi.org/10.1186/s40537-021-00434-w


Received : 12 December 2020

Accepted : 22 February 2021

Published : 10 May 2021

DOI : https://doi.org/10.1186/s40537-021-00434-w


  • Object detection
  • COCO dataset

  • Open access
  • Published: 23 September 2023

Educational interventions targeting pregnant women to optimise the use of caesarean section: What are the essential elements? A qualitative comparative analysis

  • Rana Islamiah Zahroh (ORCID: 0000-0001-7831-2336),
  • Katy Sutcliffe (ORCID: 0000-0002-5469-8649),
  • Dylan Kneale (ORCID: 0000-0002-7016-978X),
  • Martha Vazquez Corona (ORCID: 0000-0003-2061-9540),
  • Ana Pilar Betrán (ORCID: 0000-0002-5631-5883),
  • Newton Opiyo (ORCID: 0000-0003-2709-3609),
  • Caroline S. E. Homer (ORCID: 0000-0002-7454-3011) &
  • Meghan A. Bohren (ORCID: 0000-0002-4179-4682)

BMC Public Health, volume 23, Article number: 1851 (2023)


Caesarean section (CS) rates are increasing globally, posing risks to women and babies. To reduce CS, educational interventions targeting pregnant women have been implemented globally; however, their effectiveness varies. To optimise the benefits of these interventions, it is important to understand which intervention components influence success. In this study, we aimed to identify the essential intervention components that lead to successful implementation of interventions focusing on pregnant women to optimise CS use.

We re-analysed existing systematic reviews that were used to develop and update WHO guidelines on non-clinical interventions to optimise CS. To identify whether certain combinations of intervention components (e.g., how the intervention was delivered, and contextual characteristics) are associated with successful implementation, we conducted a Qualitative Comparative Analysis (QCA). We defined successful interventions as interventions that were able to reduce CS rates. We included 36 papers, comprising 17 CS intervention studies and an additional 19 sibling studies (e.g., secondary analyses, process evaluations) reporting on these interventions, to identify intervention components. We conducted the QCA in six stages: 1) identifying conditions and calibrating the data; 2) constructing truth tables; 3) checking the quality of truth tables; 4) identifying parsimonious configurations through Boolean minimisation; 5) checking the quality of the solution; and 6) interpreting solutions. We used existing published qualitative evidence syntheses to develop potential theories driving intervention success.

We found successful interventions were those that leveraged social or peer support through group-based intervention delivery, provided communication materials to women, encouraged emotional support by partner or family participation, and gave women opportunities to interact with health providers. Unsuccessful interventions were characterised by the absence of at least two of these components.

We identified four essential intervention components that can lead to successful interventions targeting women to reduce CS. These four components are 1) group-based delivery, 2) provision of information, education, and communication (IEC) materials, 3) partner or family member involvement, and 4) opportunity for women to interact with health providers. Maternal health services and hospitals aiming to better prepare women for vaginal birth and reduce CS can consider including the identified components to optimise health and well-being benefits for the woman and baby.


Introduction

In recent years, caesarean section (CS) rates have increased globally [1, 2, 3, 4]. CS can be a life-saving procedure when vaginal birth is not possible; however, it comes with higher risks both in the short- and long-term for women and babies [1, 5]. Women with CS have increased risks of surgical complications, complications in future pregnancies, subfertility, bowel obstruction, and chronic pain [5, 6, 7, 8]. Similarly, babies born through CS have increased risks of hypoglycaemia, respiratory problems, allergies and altered immunity [9, 10, 11]. At a population level, CS rates exceeding 15% are unlikely to reduce mortality rates [1, 12]. Despite these risks, an analysis across 154 countries reported a global average CS rate of 21.1% in 2018, projected to increase to 28.5% by 2030 [3].

There are many reasons for the increasing CS rates, and these vary between and within countries. Increasingly, non-clinical factors across different societal dimensions and stakeholders (e.g. women and communities, health providers, and health systems) are contributing to this increase [13, 14, 15, 16, 17]. Women may prefer CS over vaginal birth due to fear of labour or vaginal birth, previous negative experience of childbirth, perceived increased risks of vaginal birth, beliefs about an auspicious or convenient day of birth, or beliefs that caesarean section is safer, quick, and painless compared to vaginal birth [13, 14, 15].

Interventions targeting pregnant women to reduce CS have been implemented globally. A Cochrane intervention review synthesized evidence from non-clinical interventions targeting pregnant women and family, providers, and health systems to reduce unnecessary CS, and identified 15 interventions targeting women [18]. Interventions targeting women primarily focused on improving women's knowledge around birth, improving women's ability to cope during labour, and decreasing women's stress related to labour, through childbirth education and decision aids for women with previous CS [18]. These types of interventions aim to reduce the concerns of pregnant women and their partners around childbirth, and to prepare them for vaginal birth.

The effectiveness of interventions targeting women in reducing CS is mixed [18, 19]. Plausible explanations for this limited success include the multifactorial nature of the factors driving increases in CS, as well as the contextual characteristics of the interventions, such as the study environment, participant characteristics, intensity of exposure to the intervention, and method of implementation. Understanding which intervention components are essential to intervention success is key to optimising benefits. This study used a Qualitative Comparative Analysis (QCA) approach to re-analyse evidence from existing systematic reviews to identify essential intervention components that lead to the successful implementation of non-clinical interventions focusing on pregnant women to optimise the use of CS. Updating and re-analysing existing systematic reviews using new analytical frameworks may help to explore the heterogeneity in effects and ascertain why some studies appear to be effective while others are not.

Data sources, case selection, and defining outcomes

Developing a logic model

We developed a logic model to guide our understanding of different pathways and intervention components potentially leading to successful implementation (Additional file 1). The logic model was developed based on published qualitative evidence syntheses and systematic reviews [18, 20, 21, 22, 23, 24]. The logic model depicts the desired outcome of reduced CS rates in low-risk women (at the time of admission for birth, these women are typically represented by Robson groups 1–4 [25] and are women with term, cephalic, singleton pregnancies without a previous CS) and works backwards to understand what inputs and processes are needed to achieve the desired outcome. Our logic model shows multiple pathways to success and highlights the interactions between different levels of factors (women, providers, societal, health system) (Additional file 1). Based on the logic model, we separated our QCA into two clusters of interventions: 1) interventions targeting women, and 2) interventions targeting health providers. The results of the analysis of interventions targeting health providers have been published elsewhere [26]. The logic model was also used to identify potentially important components that influence success.

Identifying data sources and selecting cases

We re-analysed the systematic reviews which were used to inform the development and update of World Health Organization (WHO) guidelines. In 2018, WHO issued global guidance on non-clinical interventions to reduce unnecessary CS, with interventions designed to target three different levels or stakeholders: women, health providers, and health systems [27]. As part of the guideline recommendations, a series of systematic reviews about CS interventions was conducted: 1) a Cochrane intervention review of effectiveness by Chen et al. (2018) [18] and 2) three qualitative evidence syntheses exploring key stakeholder perspectives and experiences of interventions focusing on women and communities, health professionals, and health organisations, facilities and systems by Kingdon et al. (2018) [20, 21, 22]. Later, Opiyo and colleagues (2020) published a scoping review of financial and regulatory interventions to optimise the use of CS [23].

Therefore, the primary data sources for this QCA are the intervention studies included in Chen et al. (2018) [18] and Opiyo et al. (2020) [23]. We used these two systematic reviews because they are not only comprehensive but were also used to inform the development of the WHO guidelines. A single intervention study is referred to as a "case". Eligible cases were intervention studies that focused on pregnant women and aimed to reduce or optimise the use of CS. No restrictions on study design were imposed in the QCA. Therefore, we also assessed the eligibility of intervention studies excluded from Chen et al. (2018) [18] and Opiyo et al. (2020) [23] due to ineligible study designs (such as cohort studies, uncontrolled before-and-after studies, and interrupted time series with fewer than three data points), as these studies could potentially show other pathways to successful implementation. We complemented these intervention studies with additional intervention studies published since the last review updates in 2018 and 2020, to include intervention studies likely to meet the review inclusion criteria for future review updates. No further search was conducted, as QCA is suited to a medium number of cases (approximately 10–50), and including more studies may threaten study rigour [28].

Once eligible studies were selected, we searched for their 'sibling studies'. Sibling studies are studies linked to the included intervention studies, such as formative research or process evaluations which may have been published separately. Sibling studies can provide valuable additional information about study context, intervention components, and implementation outcomes (e.g. acceptability, fidelity, adherence, dosage), which may not be well described in a single article about intervention effectiveness. We searched for sibling studies using the following steps: 1) reference list search of the intervention studies included in Chen et al. (2018) [18] and Opiyo et al. (2020) [23]; 2) reference list search of the qualitative studies included in the Kingdon et al. (2018) reviews [20, 21, 22]; and 3) forward reference search of the intervention studies (through the "Cited by" function) in Scopus and Web of Science. Sibling studies were included if they reported any information on intervention components or implementation outcomes, regardless of the methodology used. One author (RIZ) conducted the study screening independently, and 10% of the screening was double-checked by a second author (MAB). Disagreements during screening were discussed until consensus was reached, involving the rest of the author team if needed.

Defining outcomes

We assessed all outcomes related to the mode of birth in the studies included in the Chen et al. (2018) [18] and Opiyo et al. (2020) [23] reviews. Based on the consistency of outcome reporting across studies, we selected "overall CS rate" as the primary outcome of interest. We had planned to rank the rate ratios across studies to select the 10 most successful and unsuccessful intervention studies. However, due to heterogeneity in how CS outcomes were reported across studies (e.g. odds ratios, rate ratios, percentages across different intervention stages), the final categorisation was based on whether the CS rate decreased, judged using the precision of the confidence interval or the p-value (successful, coded as 1), or increased or did not change (unsuccessful, coded as 0).

Assessing risk of bias in intervention studies

All intervention studies eligible for inclusion were assessed for risk of bias. All studies included in Chen et al. (2018) and Opiyo et al. (2020) already had risk of bias assessed and reported [18, 23], and we used these assessments. Additional intervention studies not included in these reviews were assessed using the same tools according to the type of evidence (two randomized controlled trials and one uncontrolled before-and-after study); details of the risk of bias assessment results can be found in Additional file 2. We excluded studies with a high risk of bias, both to ensure that the analysis was based on high-quality studies and to enhance researchers' ability to develop deep case knowledge by limiting the overall number of studies.

Qualitative comparative analysis (QCA)

QCA was first developed and used in political science and has since been extended to systematic reviews of complex health interventions [24, 29, 30, 31]. Despite the term "qualitative", QCA is not a typical qualitative analysis, and is often conceptualised as a methodology that bridges qualitative and quantitative methodologies based on its process, the data used, and its theoretical standpoint [24]. Here, QCA is used to identify whether certain configurations or combinations of intervention components (e.g. participants, types of interventions, contextual characteristics, and intervention delivery) are associated with the desired outcome [31]. These intervention components are referred to as "conditions" in the QCA methodology. Whilst statistical synthesis methods, such as meta-regression, may be used to examine intervention heterogeneity in systematic reviews, QCA is a particularly suitable method for understanding complex interventions like those aiming to optimise CS, as it allows for multiple overlapping pathways to causality [31]. Moreover, QCA allows the exploration of different combinations of conditions, rather than relying on a single condition leading to intervention effectiveness [31]. Although meta-regression allows for the assessment of multiple conditions, a sufficient number of studies may not be available to conduct the analysis. In complex interventions, such as interventions aiming to optimise the use of CS, single-condition or standard meta-analysis may be less likely to yield usable and nuanced information about which intervention components are more or less likely to yield success [31].

QCA uses 'set theory' to systematically compare characteristics of the cases (e.g. interventions, in the case of systematic reviews) in relation to the outcomes [31, 32]. This means QCA compares the characteristics of the successful 'cases' (e.g. interventions that are effective) with those of the unsuccessful 'cases' (e.g. interventions that are not effective). The comparison is conducted using a scoring system based on 'set membership' [31, 32]. In this scoring, conditions and outcomes are coded based on the extent to which a certain feature is present or absent to form set membership scores [31, 32]. There are two scoring systems in QCA: 1) crisp set QCA (csQCA) and 2) fuzzy set QCA (fsQCA). csQCA assigns binary scores of 0 ("fully out" of set membership for cases with certain conditions) and 1 ("fully in"), while fsQCA assigns ordinal scores to conditions and outcomes, permitting partial membership scores between 0 and 1 [31, 32]. For example, using fsQCA we may assign a five-level scoring system (0, 0.33, 0.5, 0.67, 1), where 0.33 indicates "more out" than "in" to the set membership, 0.67 indicates "more in" than "out", and 0.5 indicates ambiguity (i.e. a lack of information about whether a case was "in" or "out") [31, 32]. We used a combination of csQCA and fsQCA to calibrate our data: some conditions were best suited to binary coding using csQCA, while others were more complex, depending on the distribution of cases, and required fsQCA to capture the necessary information. In the final analysis, however, all conditions were scored using the csQCA system.
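To make the two scoring systems concrete, the short sketch below shows how a crisp condition and a hypothetical fuzzy "intensity" condition might be calibrated. It is an illustrative Python sketch with made-up thresholds, not the study's code (the analysis itself was conducted in R).

```python
# Minimal sketch of crisp-set vs fuzzy-set calibration (hypothetical thresholds;
# the study itself calibrated conditions in R).

def calibrate_crisp(present: bool) -> int:
    """csQCA: binary membership, 1 = 'fully in', 0 = 'fully out'."""
    return 1 if present else 0

def calibrate_fuzzy_five_level(sessions: int) -> float:
    """fsQCA: five-level membership for a hypothetical 'intervention intensity'
    condition, scored from the number of education sessions delivered."""
    if sessions >= 8:
        return 1.0    # fully in
    if sessions >= 5:
        return 0.67   # more in than out
    if sessions == 4:
        return 0.5    # maximum ambiguity
    if sessions >= 2:
        return 0.33   # more out than in
    return 0.0        # fully out

print(calibrate_crisp(True))            # 1
print(calibrate_fuzzy_five_level(6))    # 0.67
```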

Two relationships can be investigated using QCA [24, 31]. First, if all instances of successful interventions share the same condition(s), this suggests these conditions are 'necessary' to trigger successful outcomes [24, 31]. Second, if all instances of a particular condition are associated with successful interventions, this suggests the condition is 'sufficient' for triggering successful outcomes [24, 31]. In this QCA, we were interested in exploring the relationship of sufficiency: that is, assessing the various combinations of intervention components that can trigger successful outcomes. We focused on sufficiency because our logic model (explained further below) highlighted the multiple pathways that can lead to a CS and the different interventions that may optimise the use of CS along those pathways, suggesting it would be unlikely for all successful interventions to share the same conditions. We calculated the degree of sufficiency using consistency measures, which evaluate the frequency with which conditions are present when the desired outcome is achieved [31, 32]. Conditions with a consistency score of at least 0.8 were considered sufficient for triggering successful interventions [31, 32]. At present, there is no reporting guideline for the re-analysis of systematic reviews using QCA; however, CARU-QCA is currently being developed for this purpose [33]. QCA was conducted in R, using the package developed by Thiem & Duşa (2013) and the QCA with R guidebook [32]. QCA was conducted in six stages based on Thomas et al. (2014) [31], as explained below.
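As an illustration of the sufficiency test, the sketch below computes a consistency score using the standard set-theoretic formula, sum(min(X, Y)) / sum(X), and applies the 0.8 threshold described above. The case data and condition names are hypothetical; this is not the authors' R code.

```python
# Illustrative computation of a sufficiency consistency score
# (standard set-theoretic formula: sum(min(X, Y)) / sum(X));
# a condition or configuration is treated as sufficient when consistency >= 0.8,
# mirroring the threshold used in the paper. Data values are made up.

def consistency(condition: list[float], outcome: list[float]) -> float:
    """Degree to which cases with the condition/configuration also show the outcome."""
    numerator = sum(min(x, y) for x, y in zip(condition, outcome))
    denominator = sum(condition)
    return numerator / denominator if denominator else 0.0

# Hypothetical crisp memberships for five cases (1 = present, 0 = absent)
group_based = [1, 1, 1, 0, 1]
cs_reduced  = [1, 1, 0, 0, 1]

score = consistency(group_based, cs_reduced)
print(f"consistency = {score:.2f}, sufficient = {score >= 0.8}")  # 0.75, False
```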

QCA stage 1: Identifying conditions, building data tables and calibration

We used a deductive and inductive process to determine the potential conditions (intervention components) that may trigger successful implementation. Conditions were first derived deductively using the developed logic model (Additional file 1). We then added conditions inductively, using Intervention Component Analysis of the intervention studies [34] and a qualitative evidence ("view") synthesis [22], following Melendez-Torres's (2018) approach [35]. Intervention Component Analysis is a methodological approach that examines factors affecting implementation through reflections from the trialists, typically presented in the discussion section of a published trial [34]. Examples of conditions identified through the Intervention Component Analysis include using an individualised approach, interaction with health providers, policies that encourage CS, and acknowledgement of women's previous birth experiences. After consolidating or merging similar conditions, a total of 52 conditions were selected, extracted from each included intervention, and analysed in this QCA (details of the conditions and definitions generated for this study can be found in Additional files 3 and 4). We adapted the coding framework from Harris et al. (2019) [24], including its coding rules and six domains, to organise the 52 conditions and make better sense of the data. These six domains are broadly classified as 1) context and participants, 2) intervention design, 3) program content, 4) method of engagement, 5) health system factors, and 6) process outcomes.

One author (RIZ) extracted data relevant to the conditions for each included study into a data table, which was then double-reviewed by two other authors (MVC, MAB). The data table is a matrix in which each case is represented in a row, and columns represent the conditions. Following data extraction, calibration rules using either csQCA or fsQCA (e.g. for the group-based intervention delivery condition: yes = 1 (present), no = 0 (absent)) were developed through consultation with all authors. We developed a table listing the conditions and the rules for coding them, by either direct or transformational assignment of quantitative and qualitative data [24, 32] (Additional file 3 depicts the calibration rules). The data tables were then calibrated by applying scores, to explore the extent to which interventions have 'set membership' with the outcome or conditions of interest. During this iterative process, the calibration criteria were explicitly defined, emerging from the literature and the cases themselves. It is important to note that maximum ambiguity is typically scored as 0.5 in QCA; however, we decided it was more appropriate to assume that a condition that was not reported was unlikely to be a feature of the intervention, so we treated "not reported" as absent and coded it 0.
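A minimal sketch of this calibration step is shown below, assuming hypothetical study names and raw values; it applies the rule that a condition not reported in a study is coded as absent (0). It is illustrative only, not the data table used in the study.

```python
# Sketch of building a calibrated data table (cases x conditions), applying the
# rule that a condition not reported in a study is coded as absent (0).
# Case names and raw values are hypothetical.

RAW_CASES = {
    "Study_A": {"group_based": "yes", "iec_materials": "yes", "partner_involved": None},
    "Study_B": {"group_based": "no",  "iec_materials": None,  "partner_involved": "yes"},
}

def calibrate(value) -> int:
    """csQCA coding: 'yes' -> 1; 'no' or not reported (None) -> 0."""
    return 1 if value == "yes" else 0

data_table = {
    case: {condition: calibrate(value) for condition, value in conditions.items()}
    for case, conditions in RAW_CASES.items()
}
print(data_table)
# {'Study_A': {'group_based': 1, 'iec_materials': 1, 'partner_involved': 0}, ...}
```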

QCA stage 2: Constructing truth tables

Truth tables are an analytical tool used in QCA to analyse associations between configurations of conditions and outcomes. Whereas the data table represents individual cases (rows) and individual conditions (columns), the truth table synthesises these data to examine configurations, with each row representing a different configuration of conditions. The columns indicate a) which conditions feature in the configuration in that row, b) how many cases are represented by that configuration, and c) their association with the outcome.
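The sketch below illustrates this synthesis step on hypothetical calibrated data: cases are grouped by their configuration of conditions, and each truth-table row records how many cases share that configuration and how consistently those cases achieved the outcome. It is an illustration of the idea only, not the software used in the study.

```python
# Sketch of constructing a truth table from a calibrated (crisp) data table:
# each row is a unique configuration of conditions, with the number of supporting
# cases and the proportion of those cases that achieved the outcome. Hypothetical data.

from collections import defaultdict

CONDITIONS = ["group_based", "iec_materials", "partner_involved"]

cases = {
    "Study_A": {"group_based": 1, "iec_materials": 1, "partner_involved": 1, "outcome": 1},
    "Study_B": {"group_based": 1, "iec_materials": 1, "partner_involved": 0, "outcome": 1},
    "Study_C": {"group_based": 0, "iec_materials": 1, "partner_involved": 0, "outcome": 0},
    "Study_D": {"group_based": 1, "iec_materials": 1, "partner_involved": 1, "outcome": 1},
}

rows = defaultdict(lambda: {"n": 0, "successes": 0})
for name, case in cases.items():
    config = tuple(case[c] for c in CONDITIONS)   # one truth-table row per configuration
    rows[config]["n"] += 1
    rows[config]["successes"] += case["outcome"]

for config, stats in sorted(rows.items(), reverse=True):
    inclusion = stats["successes"] / stats["n"]   # consistency of the row with success
    print(dict(zip(CONDITIONS, config)), "n =", stats["n"], "inclusion =", round(inclusion, 2))
```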

We first constructed truth tables based on context and participants, intervention design, program content, and method of engagement; however, no configurations triggering successful interventions were observed. Instead, we observed limited diversity, meaning there were many instances in which configurations were unsupported by cases, likely due to the presence of too many conditions in the truth tables. We used the learning from these truth tables to return to the literature and explore potential explanatory theories about which conditions are important, from the perspectives of participants and trialists, in triggering successful interventions (adhering to the 'utilisation of view' perspective [35]). Through this process, we found that women and communities liked to learn new information about childbirth, and desired emotional support from partners and health providers while learning [22]. They also appreciated educational interventions that provide opportunities for discussion and dialogue with health providers and that align with current clinical practice and advice from health providers [22]. Therefore, three models of truth tables were iteratively constructed and developed based on three hypothesised theories about how the interventions should be delivered: 1) how birth information was provided to women, 2) how emotional support was provided to women (including interactions between women and providers), and 3) a consolidated model examining the interactions of important conditions identified from models 1 and 2. We also conducted a sub-analysis of interventions targeting both women and health providers or systems ('multi-target interventions'). This sub-analysis explored whether, among the components targeting women, similar conditions triggered successful multi-target interventions. Table 1 presents the list of truth tables that were iteratively constructed and refined.

QCA stage 3: Checking quality of truth tables

We iteratively developed and improved the quality of the truth tables by checking the configurations of successful and unsuccessful interventions, as recommended by Thomas et al. (2014) [31]. This included assessing the number of studies clustered in each configuration and exploring any contradictory results between successful and unsuccessful interventions. We found contradictory configurations across the five truth tables, which were resolved by considering the theoretical perspectives and iteratively refining the truth tables.

QCA stage 4: Identifying parsimonious configurations through Boolean minimization

Once we determined that the truth tables were suitable for further analysis, we used Boolean minimisation to explore pathways resulting in successful intervention through the configurations of different conditions [31]. We simplified the "complex solution" of the pathways to a "parsimonious solution" and an "intermediate solution" by incorporating logical remainders (configurations where no cases were observed) [36].
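To illustrate the core of Boolean minimisation, the sketch below merges successful configurations that differ on exactly one condition, marking the dropped condition as a "don't care". It is a simplified, hypothetical illustration of the Quine-McCluskey-style reduction performed by QCA software, not the procedure used to generate the solutions reported here.

```python
# Sketch of Boolean minimisation over successful configurations: two rows that
# differ on exactly one condition are merged, with that condition dropped
# (marked '-' as a "don't care"); the pass repeats until nothing more merges.
# Simplified for illustration; configurations are hypothetical.

def merge(a, b):
    """Return the merged configuration if a and b differ in exactly one position."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1:
        merged = list(a)
        merged[diffs[0]] = "-"          # condition is redundant for the outcome
        return tuple(merged)
    return None

def minimise(configs):
    configs = set(configs)
    while True:
        merged_any = False
        next_configs, used = set(), set()
        for a in configs:
            for b in configs:
                m = merge(a, b)
                if m:
                    next_configs.add(m)
                    used.update({a, b})
                    merged_any = True
        next_configs |= configs - used   # keep rows that could not be merged
        if not merged_any:
            return configs
        configs = next_configs

# Successful rows for hypothetical conditions (IEC, group_based, partner_involved)
successful = [(1, 1, 1), (1, 1, 0)]
print(minimise(successful))   # {(1, 1, '-')}: IEC * GROUP_BASED, regardless of partner involvement
```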

QCA stage 5: Checking the quality of the solution

We presented the intermediate solution as the final solution instead of the most parsimonious solution, as it is most closely aligned with the underlying theory. We checked consistency and coverage scores to assess if the pathways identified were sufficient to trigger success. We also checked the intermediate solution by negating the outcome to see if it predicts the observed solutions.
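For completeness, the sketch below shows how a coverage score can be computed for a pathway, using the standard set-theoretic formula sum(min(X, Y)) / sum(Y); it complements the consistency example above. The memberships are hypothetical crisp scores, not data from the included studies.

```python
# Sketch of a solution-quality check: coverage measures how much of the successful
# outcome is explained by a pathway (sum(min(X, Y)) / sum(Y)). Hypothetical data.

def coverage(pathway: list[float], outcome: list[float]) -> float:
    numerator = sum(min(x, y) for x, y in zip(pathway, outcome))
    denominator = sum(outcome)
    return numerator / denominator if denominator else 0.0

pathway_membership = [1, 1, 0, 0, 1]   # cases covered by e.g. IEC * GROUP_BASED
cs_reduced         = [1, 1, 0, 1, 1]   # cases where the intervention reduced CS

print(round(coverage(pathway_membership, cs_reduced), 2))  # 0.75: pathway explains 3 of 4 successes
```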

QCA stage 6: Interpretation of solutions

We iteratively interpreted the results through discussions among the QCA team. This reflexive approach ensured that the analysis considered perspectives from the literature and our methodological approach, and that the results were coherent with the current understanding of the phenomenon.

Overview of included studies

Out of 79 intervention studies assessed by Chen et al. (2018) [18] and Opiyo et al. (2020) [23], 17 intervention studies targeted women and are included, comprising 11 interventions targeting only women [37, 38, 39, 40, 41, 42, 43] and six interventions targeting both women and health providers or systems [44, 45, 46, 47, 48, 49]. From 17 included studies, 19 sibling studies were identified [43, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67]. Thus, a total of 36 papers from 17 intervention studies are included in this QCA (see Fig. 1: PRISMA flowchart).

Figure 1. PRISMA flowchart. *Sibling studies: studies that were conducted in the same settings, participants, and timeframe; **Intervention components: information on intervention input, activities, and outputs, including intervention context and other characteristics

The 11 interventions targeting women comprised five successful interventions [37, 68, 69, 70, 71] and six unsuccessful interventions [37, 38, 39, 40, 41, 42, 43] in reducing CS. Sixteen sibling studies were identified, from five of the 11 included interventions [37, 41, 43, 70, 71]. Included studies were conducted in six countries across North America (2 from Canada [38], 1 from the United States of America [71]), Asia–Pacific (1 from Australia [41], 5 from Iran [39, 40, 68, 69, 70]), and Europe (2 from Finland [37, 42], 1 from the United Kingdom [43]). Six studies were conducted in high-income countries, while five studies were conducted in upper-middle-income countries (all from Iran). All 11 studies targeted women, with three studies also explicitly targeting women's partners [68, 69, 71]. One study delivering psychoeducation allowed women to bring any family member to accompany them during the intervention but did not specifically target partners [37]. All 11 studies delivered childbirth education, with four delivering general antenatal education [38, 40, 68, 69], six delivering psychoeducation [37, 39, 41, 42, 70, 71], and one implementing decision aids [43]. All studies were included in Chen et al. (2018), and some risks of bias were identified [18] (Additional file 2).

The multi-target interventions comprised five successful interventions [44, 45, 46, 47, 48] and one unsuccessful intervention [49]. Sibling studies were identified from only one study [48]. The interventions were delivered in five countries: South America (1 from Brazil [46]), Asia–Pacific (4 from China [44, 45, 47, 49]), and Europe (1 from Italy [48], 1 from Ireland [48], and 1 from Germany [48]). Three studies were conducted in high-income countries and five studies in upper-middle-income countries. The multi-target interventions targeted women, health providers, and health organisations. For this analysis, however, we only consider the components of the interventions that targeted women, which was typically childbirth education. One study came from Chen et al. (2018) [18] and was graded as having some concerns [47], two studies from Opiyo et al. (2020) [23] were graded as having no serious concerns [45, 46], and three newly published studies were assessed as having a low risk of bias [44] or some concerns about risk of bias [48, 49]. Tables 2 and 3 show the characteristics of the included studies.

The childbirth education interventions included information about mode of birth, the birth process, mental health and coping strategies, pain relief methods, and partners' roles in birth. Most interventions were delivered in group settings; only three studies delivered them on a one-to-one basis [38, 41, 42]. Only one study explicitly stated that the intervention was individualised to a woman's unique needs and experiences [38].

Overall, limited theory was used to design the interventions among the included studies: fewer than half of the interventions (7/17) explicitly used theory in their design. Among the seven interventions that used theory in intervention development, the theories included the health promotion-disease prevention framework [38], a midwifery counselling framework [41], cognitive behavioural therapy [42], Ost's applied relaxation [70], a conceptual model of parenting [71], attachment and social cognitive theories [37], and a healthcare improvement scale-up framework [46]. The remaining 10 studies relied only on previously published studies to design the interventions. We identified very limited process evaluation or implementation outcome evidence related to the included interventions, which is a limitation of the field of CS and clinical interventions more broadly.

Qualitative comparative analysis

Model 1 – How birth information was provided to women

Model 1 is constructed based on the finding from Kingdon et al. (2018) [22] that women and communities enjoy learning new birth information, as it opens up new ways of thinking about vaginal birth and CS. Learning new information allows them to better understand the benefits and risks of CS and vaginal births, as well as to increase their knowledge about CS [22].

We used four conditions in constructing the model 1 truth table: 1) provision of information, education, and communication (IEC) materials on what to expect during labour and birth, 2) delivery of antenatal education, 3) delivery of psychoeducation, and 4) group-based intervention delivery. We explored this model considering other conditions, such as the type of information provided (e.g. information about mode of birth, including the birth process, mental health and coping strategies, and pain relief), delivery technique (e.g. didactic, practical), and frequency and duration of intervention delivery; however, these additional conditions did not result in configurations.

Of 16 possible configurations, we identified seven (Table 4). The first two rows show perfect consistency (inclusion = 1) across five studies [37, 68, 69, 70, 71], in which all conditions are present except either antenatal education or psychoeducation. The remaining configurations are unsuccessful interventions. Interestingly, when either IEC materials or group-based intervention delivery is present (but not both), implementation is likely to be unsuccessful (rows 3–7).

Boolean minimisation identified two intermediate pathways to successful interventions (Fig. 2). The two pathways are similar, except for one condition: the type of education, whose content is tailored to the women it targets. The two pathways therefore show that successful interventions combine the distribution of IEC materials on birth information with group-based delivery of either antenatal education to the general population of women (i.e. not groups of women with specific risks or conditions) or psychoeducation to women with fear of birth. From this solution, we can see that the successful interventions are consistently characterised by the presence of both IEC materials and group-based intervention delivery.

Figure 2. Intermediate pathways from model 1 that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an 'AND' relationship; the inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficient relation between the configuration and the outcome; proportional reduction in inconsistency (PRI) refers to the extent to which a configuration is sufficient in triggering the successful outcome as well as the negation of the outcome; the coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Model 2 – Emotional support was provided to women

Model 2 was constructed based on the theory that women desire emotional support alongside the communication of information about childbirth [22]. This includes emotional support from husbands or partners, health professionals, or doulas [22]. Furthermore, Kingdon et al. (2018) describe the importance of two-way conversation and dialogue between women and providers during pregnancy care, particularly to ensure the opportunity for discussion [22]. Interventions may generate more questions than they answer, creating the need and desire of women to have more dialogue with health professionals [22]. Women considered intervention content to be most useful when it complements clinical care, is consistent with advice from health professionals, and provides a basis for more informed, meaningful dialogue between women and care providers [22].

Based on this underlying theory, we constructed the model 2 truth table by considering three conditions representative of providing emotional support to women: partner or family member involvement, group-based intervention delivery (which provides social or peer support to women), and the opportunity for women to interact with health providers. Of 8 possible configurations, we identified six (Table 5). The first three rows represent successful interventions with perfect consistency (inclusion = 1). The first row shows successful interventions with all conditions present. The second and third rows show successful interventions with all conditions present except either partner or family member involvement or interaction with health providers. The remaining rows represent unsuccessful interventions, where at least two conditions are absent.

Boolean minimisation identified two intermediate pathways to successful interventions (Fig. 3). In the first pathway, partner or family member involvement together with group-based intervention delivery enables successful interventions. In the second pathway, when partners or family members are not involved, successful interventions occur only when interaction with health providers is included alongside group-based intervention delivery. From these two pathways, we can see that group-based intervention delivery, involvement of partners or family members, and the opportunity for women to interact with providers appear to be important in driving intervention success.

Figure 3. Intermediate pathways from model 2 that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an 'AND' relationship; the inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficient relation between the configuration and the outcome; proportional reduction in inconsistency (PRI) refers to the extent to which a configuration is sufficient in triggering the successful outcome as well as the negation of the outcome; the coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Consolidated model – Essential conditions to prompt successful interventions focusing on women

Using the identified important conditions observed in models 1 and 2, we constructed a consolidated model to examine the final essential conditions which could prompt successful educational interventions targeting women. We merged and tested four conditions: the provision of IEC materials on what to expect during labour and birth, group-based intervention delivery, partner or family member involvement, and opportunity for interaction between women and health providers.

Of the 16 possible configurations, we identified six configurations (Table 6 ). The first three rows show configurations resulting in successful interventions with perfect consistency (inclusion = 1). The first row shows successful interventions with all conditions present; the second and third rows show successful interventions with all conditions present except interaction with health providers or partner or family member involvement. The remaining three rows are configurations of unsuccessful interventions, missing at least two conditions, including the consistent absence of partner or family member involvement.

Boolean minimisation identified two intermediate pathways to successful intervention (Fig. 4). The first pathway shows that the opportunity for women to interact with health providers, provision of IEC materials, and group-based intervention delivery prompt successful interventions. The second pathway shows that when there is no opportunity for women to interact with health providers, it is important to have partner or family member involvement alongside group-based intervention delivery and provision of IEC materials. These two pathways suggest that delivering educational interventions accompanied by IEC materials and by emotional support for women during the intervention is important to trigger successful interventions. They also emphasise that emotional support for women during the intervention can come from a partner, family member, or health provider. For the consolidated model, we did not simplify the solution further, as the intermediate solution is more theoretically sound than the most parsimonious solution.

Figure 4. Intermediate pathways from the consolidated model that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an 'AND' relationship; the inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficient relation between the configuration and the outcome; proportional reduction in inconsistency (PRI) refers to the extent to which a configuration is sufficient in triggering the successful outcome as well as the negation of the outcome; the coverage score (CovS) refers to the percentage of cases in which the configuration is valid

Sub-analysis – Interventions targeting both women and health providers or systems

In this sub-analysis, we took the important conditions identified in the consolidated model, added a condition for multi-target interventions, and applied them to 17 interventions: 11 interventions targeting women only and six interventions targeting both women and health providers or systems (multi-target interventions).

Of 32 possible configurations, we identified eight (Table 7). The first four rows show configurations of successful interventions with perfect consistency (inclusion = 1). The first row is where all the multi-target interventions cluster, except the unsuccessful intervention by Zhang (2020) [49], and where all conditions are present. In the second to fourth rows, all conditions are present except the multi-target condition (all three rows), interaction with health providers (third row), and partner or family member involvement (fourth row). The remaining rows are all configurations of unsuccessful interventions, where at least three conditions are missing, except row 8, which is a single-case row. This case is the only multi-target intervention that was unsuccessful and in which partners or family members were not involved.

Boolean minimisation identified two intermediate pathways (Fig. 5). The first pathway shows that partner or family member involvement, provision of IEC materials, and group-based intervention delivery prompt successful interventions. This pathway comprises all five successful multi-target interventions [44, 45, 46, 47, 48] and four of the 11 interventions targeting only women [37, 68, 69, 71]. The second pathway shows that when the multi-target condition is absent but interaction with health providers is present, alongside provision of IEC materials and group-based intervention delivery, successful interventions are prompted (3 of the 11 interventions targeting women only [37, 69, 70]). The first pathway shows that there are successful configurations both with and without multi-target interventions. Therefore, similar to interventions targeting women only, when implementing multi-target interventions, the components targeting women are more likely to be successful when partners or family members are involved, interventions are delivered in a group-based format, IEC materials are provided, and there is an opportunity for women to interact with health providers.

Figure 5. Intermediate pathways from the multi-target interventions sub-analysis that trigger successful interventions targeting pregnant women to optimise CS. In QCA, an asterisk (*) denotes an 'AND' relationship; the inclusion score (InclS), also known as consistency, indicates the degree to which the evidence is consistent with the hypothesis that there is a sufficient relation between the configuration and the outcome; proportional reduction in inconsistency (PRI) refers to the extent to which a configuration is sufficient in triggering the successful outcome as well as the negation of the outcome; the coverage score (CovS) refers to the percentage of cases in which the configuration is valid

To summarise, four essential intervention components trigger successful educational interventions focusing on pregnant women to reduce CS: 1) group-based intervention delivery, 2) provision of IEC materials on what to expect during labour and birth, 3) partner or family member involvement in the intervention, and 4) the opportunity for women to interact with health providers. These conditions do not work in silos or independently; instead, they work jointly as parts of configurations that enable successful interventions.

Our QCA identified configurations of essential intervention components which are sufficient to trigger successful interventions to optimise CS. Educational interventions focusing on women were successful by: 1) leveraging social or peer support through group-based intervention delivery, 2) improving women's knowledge and awareness of what to expect during labour and birth, 3) ensuring women have emotional support through partner or family participation in the intervention, and 4) providing opportunities for women to interact with health providers. We found that the absence of two or more of these characteristics in an intervention results in unsuccessful interventions. Unlike our logic model, which predicted engagement strategies (i.e. intensity, frequency, technique, recruitment, incentives) to be essential to intervention success, we found that "support" seems to be central to maximising the benefits of interventions targeting women.

Group-based intervention delivery is present across all four truth tables and all eight pathways leading to successful intervention implementation, suggesting that it is an essential component of interventions targeting women. Despite this, we cannot conclude that group-based intervention delivery is a necessary condition, as there may be other pathways not captured in this QCA. The importance of group-based delivery may be due to the group setting providing women with a sense of confidence through peer support and engagement. In group-based interventions, women may feel more confident when learning with others, and peer support may motivate them. Furthermore, all group-based interventions in our included studies were conducted at health facilities, which may give women more confidence that the information is aligned with clinical recommendations. The benefits of group-based interventions for pregnant women have been demonstrated previously [72, 73]. Women have reported that group-based interventions reduce their feelings of isolation, provide access to group support, and allow opportunities to share their experiences [72, 74, 75, 76]. This is aligned with social support theory, in which social support through a group or social environment may provide women with reassurance and compassion, reduce feelings of uncertainty, increase their sense of control, give access to new contacts to solve problems, and provide instrumental support, all of which eventually influence positive health behaviours [72, 77]. Women may resolve their uncertainties around mode of birth by sharing their concerns with others and, at the same time, learning how others cope. These findings are consistent with the benefits associated with group-based antenatal care, which is recommended by WHO [78, 79].

Kingdon et al. (2018) reported that women and communities liked learning new birth information, as it opens up new ways of thinking about vaginal birth and CS and educates them about the benefits of different modes of birth, including the risks of CS. Our QCA aligns with this finding: provision of birth information through educational delivery leads to successful interventions, but with certain caveats. That is, provision of birth information should be accompanied by IEC materials and delivered in a group-based format. There is not enough information to distinguish which types of IEC materials lead to successful interventions; however, it is important to note that the format of the IEC materials (such as paper-based or mobile application) may affect success. More work is needed to understand how women and families respond to the format of IEC materials; for example, will paper-based IEC materials be supplanted by more modern methods of reaching women with information through digital applications? The QUALI-DEC (Quality decision-making by women and healthcare providers for appropriate use of caesarean section) study is currently implementing a decision-analysis tool to help women make an informed decision about their preferred mode of birth using both a paper-based format and a mobile application, which may shed some light on this [80].

Previous research has shown that women who participated in interventions aiming to reduce CS desired emotional support (from partners, doulas, or health providers) alongside the communication about childbirth [22]. Our QCA aligns with this finding: emotional support from partners or family members is highly influential in leading to successful interventions. Partner involvement in maternity care has been extensively studied and has been shown to improve maternal health care utilisation and outcomes [81]. Both women and their partners perceive partner involvement as crucial, as it allows men to learn directly from providers, thus promoting shared decision-making between women and partners and enabling partners to reinforce adherence to any beneficial suggestions [82, 83, 84, 85, 86]. Partners provide psychosocial support to women, for example by being present during pregnancy and the childbirth process, as well as instrumental support, which includes supporting women financially [82, 83, 84]. Despite these benefits, partner participation in maternity care remains low [82], as reflected in this study, where only four of the 11 included interventions involved partners or family members. The reasons for this low participation, which include unequal gender norms and limited health system capability [82, 84, 85, 86], should be explored and addressed to ensure the benefits of the interventions.

Furthermore, our QCA demonstrates the importance of interaction with health providers in triggering successful interventions. The interaction of women with providers in CS decision-making, however, sits at a "nexus of power, trust, and risk", where it may be beneficial but can also reinforce the structural oppression of women [13]. A recent study on patient-provider interaction in CS decision-making concluded that, within the health system, interaction between risk-averse providers and women who are cautious about their pregnancies results in the discouragement of vaginal births [87]. However, this outcome can be averted through meaningful communication between women and providers, in which CS risks and benefits are communicated in an environment where vaginal birth is encouraged [87]. Furthermore, the reasons women desire interaction with providers can come from opposite directions. Some women see providers as the most trusted and knowledgeable source, whose judgement they can rely on to ensure that the information they learn is reliable and evidence-based [22]. Other women may be sceptical of providers, understanding that providers' preferences may negatively influence their own preferred mode of birth [22]. Therefore, adequate, two-way interaction is important for women to build good rapport with providers.

It is also important to note that we have limited evidence (3/17 intervention studies) involving women with a previous CS. Vaginal birth after previous CS (VBAC) can be a safe and positive experience for some women, but there are also potential risks depending on their obstetric history [88, 89, 90]. Davis (2020) found that women were motivated to have a VBAC due to negative experiences of CS, such as a difficult recovery, and that health providers' roles served as pivotal drivers in motivating women towards VBAC [91]. In addition, VBAC requires giving birth in a suitably staffed and equipped maternity unit, with staff trained in VBAC, equipment for labour monitoring, and resources for emergency CS if needed [89, 90]. There is comparatively less research on VBAC and trial of labour after CS [88]. Therefore, more work is needed to explore whether there are different pathways that lead to successful intervention implementation for women with a previous CS. Interventions targeting multiple stakeholders may be particularly important for this group of women. For example, both education for women and partners or families, as well as training to upskill health providers, might be needed to support VBAC.

Strengths and limitations

We found that many included studies had poor reporting of the interventions, including general intervention components (e.g. the presence of policies that may support interventions) and process evaluation components, which is reflective of the historical approach to reporting trial data. This poor reporting means we could not engage more deeply with the interventions and thus may have missed important conditions that were not reported. However, we attempted to compensate for limited process evaluation components by identifying all relevant sibling studies that could contribute to a better understanding of context. Furthermore, there were no studies conducted in low-income countries, despite rapidly increasing CS rates in these settings. Lastly, we were not able to conduct more nuanced analyses of CS, such as exploring how interventions affected emergency versus elective CS, VBAC, or instrumental birth, due to an insufficient number of studies and heterogeneity in outcome measurements. Therefore, it is important to note that we are not necessarily measuring the optimal outcome of interest, namely reducing unnecessary CS. However, it is unlikely that these non-clinical interventions will interfere with a decision for CS based on clinical indications.

Despite these limitations, this is the first study aiming to understand how interventions targeting women can succeed in optimising CS use. We used the QCA approach and new analytical frameworks to re-analyse existing systematic review evidence and generate new knowledge. We ensured robustness through the use of a logic model, working backwards to understand which aspects of the interventions differed across outcomes. The use of QCA and qualitative evidence synthesis ensured that the results were theory-driven, incorporated participants' perspectives into the analysis, and were explored iteratively to find the appropriate configurations, reducing the risk of data fishing. Lastly, this QCA extends the effectiveness review conducted by Chen et al. (2018) [18] by explaining the potential intervention components that may underlie its heterogeneity.

Implications for practice and research

To aid researchers and health providers in reducing CS in their contexts and in designing educational interventions targeting women during pregnancy, we have developed a checklist of key components and questions to consider when designing interventions, which may help lead to successful implementation:

Is the intervention delivered in a group setting?

Are IEC materials on what to expect during labour and birth disseminated to women?

Are women’s partners or families involved in the intervention?

Do women have opportunities to interact with health providers?

We have used this checklist to explore the extent to which the included interventions in our QCA include these components using a matrix model (Fig.  6 ).

Figure 6. Matrix model assessing the extent to which the included intervention studies have the essential intervention components identified in the QCA
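For teams who want a quick self-audit against this checklist, the sketch below scores hypothetical interventions against the four components. It is an illustration in the spirit of the matrix model in Fig. 6, not the matrix itself; the intervention names and values are made up.

```python
# Sketch of the checklist applied as a simple matrix: each (hypothetical)
# intervention is scored against the four essential components identified in the QCA.

CHECKLIST = [
    "group_based_delivery",
    "iec_materials_provided",
    "partner_or_family_involved",
    "interaction_with_providers",
]

interventions = {
    "Intervention_X": {"group_based_delivery": True, "iec_materials_provided": True,
                       "partner_or_family_involved": False, "interaction_with_providers": True},
    "Intervention_Y": {"group_based_delivery": False, "iec_materials_provided": True,
                       "partner_or_family_involved": False, "interaction_with_providers": False},
}

for name, components in interventions.items():
    present = [c for c in CHECKLIST if components.get(c)]
    print(f"{name}: {len(present)}/4 components -> {present}")
```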

Additionally, future research on interventions to optimise the use of CS should report the intervention components implemented, including process outcomes such as fidelity and attrition, contextual factors (e.g. policies, details of how the intervention is delivered), and stakeholder factors (e.g. women's perceptions and satisfaction). These factors are important not just for evaluating whether an intervention is successful, but also for exploring why similar interventions work in one context but not another. There is also a need for more intervention studies implementing VBAC to reduce CS, to understand how involving women with a previous CS may result in successful interventions. Furthermore, more studies examining the impact of interventions targeting women in LMICs are needed.

This QCA illustrates crucial intervention components and potential pathways that can trigger successful educational interventions to optimise CS, focusing on pregnant women. The following intervention components were found to be sufficient to trigger successful outcomes: 1) group-based delivery, 2) provision of IEC materials, 3) partner or family member involvement, and 4) the opportunity for women to interact with health providers. These intervention components do not work in silos or independently; instead, they work jointly as parts of configurations that enable successful interventions. Researchers, trialists, hospitals, and other institutions and stakeholders planning interventions focusing on pregnant women can consider including these components to ensure benefits. More studies from LMICs examining the impact of interventions targeting women to optimise CS are needed. Researchers should clearly describe and report intervention components in trials, and consider how process evaluations can help explain why trials were or were not successful. More robust trial reporting and process evaluations can help to better understand mechanisms of action and why interventions may work in one context but not another.

Availability of data and materials

Additional information files have been provided and more data may be provided upon request to [email protected].

Abbreviations

CovS: Coverage score

CS: Caesarean section

csQCA: Crisp set qualitative comparative analysis

fsQCA: Fuzzy set qualitative comparative analysis

IEC: Information, education, and communication

InclS: Inclusion score

LMICs: Low- and middle-income countries

PRI: Proportional reduction in inconsistency

QUALI-DEC: Quality decision-making by women and healthcare providers for appropriate use of caesarean section

VBAC: Vaginal birth after previous caesarean section

WHO: World Health Organization

World Health Organization. WHO statement on caesarean section rates. Available from: https://www.who.int/publications/i/item/WHO-RHR-15.02 . Cited 20 Sept 2023.

Zahroh RI, Disney G, Betrán AP, Bohren MA. Trends and sociodemographic inequalities in the use of caesarean section in Indonesia, 1987–2017. BMJ Global Health. 2020;5:e003844. https://doi.org/10.1136/bmjgh-2020-003844 .

Betran AP, Ye J, Moller A-B, Souza JP, Zhang J. Trends and projections of caesarean section rates: global and regional estimates. BMJ Global Health. 2021;6:e005671. https://doi.org/10.1136/bmjgh-2021-005671 .

Boerma T, Ronsmans C, Melesse DY, Barros AJD, Barros FC, Juan L, et al. Global epidemiology of use of and disparities in caesarean sections. The Lancet. 2018;392:1341–8. https://doi.org/10.1016/S0140-6736(18)31928-7 .

Sandall J, Tribe RM, Avery L, Mola G, Visser GH, Homer CS, et al. Short-term and long-term effects of caesarean section on the health of women and children. Lancet. 2018;392:1349–57. https://doi.org/10.1016/S0140-6736(18)31930-5 .

Abenhaim HA, Tulandi T, Wilchesky M, Platt R, Spence AR, Czuzoj-Shulman N, et al. Effect of Cesarean Delivery on Long-term Risk of Small Bowel Obstruction. Obstet Gynecol. 2018;131:354–9. https://doi.org/10.1097/AOG.0000000000002440 .

Gurol-Urganci I, Bou-Antoun S, Lim CP, Cromwell DA, Mahmood TA, Templeton A, et al. Impact of Caesarean section on subsequent fertility: a systematic review and meta-analysis. Hum Reprod. 2013;28:1943–52. https://doi.org/10.1093/humrep/det130 .

Hesselman S, Högberg U, Råssjö E-B, Schytt E, Löfgren M, Jonsson M. Abdominal adhesions in gynaecologic surgery after caesarean section: a longitudinal population-based register study. BJOG. 2018;125:597–603. https://doi.org/10.1111/1471-0528.14708.

Tita ATN, Landon MB, Spong CY, Lai Y, Leveno KJ, Varner MW, et al. Timing of elective repeat cesarean delivery at term and neonatal outcomes. N Engl J Med. 2009;360:111–20. https://doi.org/10.1056/NEJMoa0803267 .

Wilmink FA, Hukkelhoven CWPM, Lunshof S, Mol BWJ, van der Post JAM, Papatsonis DNM. Neonatal outcome following elective cesarean section beyond 37 weeks of gestation: a 7-year retrospective analysis of a national registry. Am J Obstet Gynecol. 2010;202(250):e1-8. https://doi.org/10.1016/j.ajog.2010.01.052 .

Keag OE, Norman JE, Stock SJ. Long-term risks and benefits associated with cesarean delivery for mother, baby, and subsequent pregnancies: Systematic review and meta-analysis. PLoS Med. 2018;15:e1002494. https://doi.org/10.1371/journal.pmed.1002494 .

Ye J, Betrán AP, Guerrero Vela M, Souza JP, Zhang J. Searching for the optimal rate of medically necessary cesarean delivery. Birth. 2014;41:237–44. https://doi.org/10.1111/birt.12104 .

Eide KT, Morken N-H, Bærøe K. Maternal reasons for requesting planned cesarean section in Norway: a qualitative study. BMC Pregnancy Childbirth. 2019;19:102. https://doi.org/10.1186/s12884-019-2250-6 .

Long Q, Kingdon C, Yang F, Renecle MD, Jahanfar S, Bohren MA, et al. Prevalence of and reasons for women’s, family members’, and health professionals’ preferences for cesarean section in China: A mixed-methods systematic review. PLoS Med. 2018;15. https://doi.org/10.1371/journal.pmed.1002672 .

McAra-Couper J, Jones M, Smythe L. Caesarean-section, my body, my choice: The construction of ‘informed choice’ in relation to intervention in childbirth. Fem Psychol. 2012;22:81–97. https://doi.org/10.1177/0959353511424369 .

Panda S, Begley C, Daly D. Clinicians’ views of factors influencing decision-making for caesarean section: A systematic review and metasynthesis of qualitative, quantitative and mixed methods studies. PLoS One 2018;13. https://doi.org/10.1371/journal.pone.0200941 .

Takegata M, Smith C, Nguyen HAT, Thi HH, Thi Minh TN, Day LT, et al. Reasons for increased Caesarean section rate in Vietnam: a qualitative study among Vietnamese mothers and health care professionals. Healthcare. 2020;8:41. https://doi.org/10.3390/healthcare8010041 .

Chen I, Opiyo N, Tavender E, Mortazhejri S, Rader T, Petkovic J, et al. Non-clinical interventions for reducing unnecessary caesarean section. Cochrane Database Syst Rev. 2018. https://doi.org/10.1002/14651858.CD005528.pub3 .

Catling-Paull C, Johnston R, Ryan C, Foureur MJ, Homer CSE. Non-clinical interventions that increase the uptake and success of vaginal birth after caesarean section: a systematic review. J Adv Nurs. 2011;67:1662–76. https://doi.org/10.1111/j.1365-2648.2011.05662.x .

Kingdon C, Downe S, Betran AP. Non-clinical interventions to reduce unnecessary caesarean section targeted at organisations, facilities and systems: Systematic review of qualitative studies. PLOS ONE. 2018;13:e0203274. https://doi.org/10.1371/journal.pone.0203274 .

Kingdon C, Downe S, Betran AP. Interventions targeted at health professionals to reduce unnecessary caesarean sections: a qualitative evidence synthesis. BMJ Open. 2018;8:e025073. https://doi.org/10.1136/bmjopen-2018-025073 .

Kingdon C, Downe S, Betran AP. Women’s and communities’ views of targeted educational interventions to reduce unnecessary caesarean section: a qualitative evidence synthesis. Reprod Health. 2018;15:130. https://doi.org/10.1186/s12978-018-0570-z .

Opiyo N, Young C, Requejo JH, Erdman J, Bales S, Betrán AP. Reducing unnecessary caesarean sections: scoping review of financial and regulatory interventions. Reprod Health. 2020;17:133. https://doi.org/10.1186/s12978-020-00983-y .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

World Health Organization. Robson Classifcation: Implementation Manual. 2017. Available from: https://www.who.int/publications/i/item/9789241513197 . Cited 20 Sept 2023.

Zahroh RI, Kneale D, Sutcliffe K, Vazquez Corona M, Opiyo N, Homer CSE, et al. Interventions targeting healthcare providers to optimise use of caesarean section: a qualitative comparative analysis to identify important intervention features. BMC Health Serv Res. 2022;22:1526. https://doi.org/10.1186/s12913-022-08783-9 .

World Health Organization. WHO recommendations: non-clinical interventions to reduce unnecessary caesarean sections. 2018. Available from: https://www.who.int/publications/i/item/9789241550338 . Cited 20 Sept 2023.

Hanckel B, Petticrew M, Thomas J, Green J. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health. 2021;21:877. https://doi.org/10.1186/s12889-021-10926-2 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: Re-analysis of a systematic review to identify pathways to effectiveness. Health Expect. 2018;21:574–84. https://doi.org/10.1111/hex.12667 .

Chatterley C, Javernick-Will A, Linden KG, Alam K, Bottinelli L, Venkatesh M. A qualitative comparative analysis of well-managed school sanitation in Bangladesh. BMC Public Health. 2014;14:6. https://doi.org/10.1186/1471-2458-14-6 .

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3:67. https://doi.org/10.1186/2046-4053-3-67 .

Dușa A. QCA with R: A Comprehensive Resource. 2021. Available from: https://bookdown.org/dusadrian/QCAbook/ . Cited 20 Sept 2023.

Kneale D, Sutcliffe K, Thomas J. Critical Appraisal of Reviews Using Qualitative Comparative Analyses (CARU-QCA): a tool to critically appraise systematic reviews that use qualitative comparative analysis. In: Abstracts of the 26th Cochrane Colloquium, Santiago, Chile. Cochrane Database of Systematic Reviews 2020;(1 Suppl 1). https://doi.org/10.1002/14651858.CD201901 .

Sutcliffe K, Thomas J, Stokes G, Hinds K, Bangpan M. Intervention Component Analysis (ICA): a pragmatic approach for identifying the critical features of complex interventions. Syst Rev. 2015;4:140. https://doi.org/10.1186/s13643-015-0126-z .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Thomas J. Developing and testing intervention theory by incorporating a views synthesis into a qualitative comparative analysis of intervention effectiveness. Res Synth Methods. 2019;10:389–97. https://doi.org/10.1002/jrsm.1341 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45. https://doi.org/10.1186/1471-2288-8-45 .

Rouhe H, Salmela-Aro K, Toivanen R, Tokola M, Halmesmäki E, Saisto T. Obstetric outcome after intervention for severe fear of childbirth in nulliparous women – randomised trial. BJOG. 2013;120:75–84. https://doi.org/10.1111/1471-0528.12011.

Fraser W, Maunsell E, Hodnett E, Moutquin JM. Randomized controlled trial of a prenatal vaginal birth after cesarean section education and support program. Childbirth Alternatives Post-Cesarean Study Group. Am J Obstet Gynecol. 1997;176:419–25. https://doi.org/10.1016/s0002-9378(97)70509-x.

Masoumi SZ, Kazemi F, Oshvandi K, Jalali M, Esmaeili-Vardanjani A, Rafiei H. Effect of training preparation for childbirth on fear of normal vaginal delivery and choosing the type of delivery among pregnant women in Hamadan, Iran: a randomized controlled trial. J Family Reprod Health. 2016;10:115–21.

Navaee M, Abedian Z. Effect of role play education on primiparous women’s fear of natural delivery and their decision on the mode of delivery. Iran J Nurs Midwifery Res. 2015;20:40–6.

Fenwick J, Toohill J, Gamble J, Creedy DK, Buist A, Turkstra E, et al. Effects of a midwife psycho-education intervention to reduce childbirth fear on women’s birth outcomes and postpartum psychological wellbeing. BMC Pregnancy Childbirth. 2015;15:284. https://doi.org/10.1186/s12884-015-0721-y .

Saisto T, Salmela-Aro K, Nurmi J-E, Könönen T, Halmesmäki E. A randomized controlled trial of intervention in fear of childbirth. Obstet Gynecol. 2001;98:820–6. https://doi.org/10.1016/S0029-7844(01)01552-6 .

Montgomery AA, Emmett CL, Fahey T, Jones C, Ricketts I, Patel RR, et al. Two decision aids for mode of delivery among women with previous Caesarean section: randomised controlled trial. BMJ. 2007;334:1305–9.

Xia X, Zhou Z, Shen S, Lu J, Zhang L, Huang P, et al. Effect of a two-stage intervention package on the cesarean section rate in Guangzhou, China: A before-and-after study. PLOS Medicine. 2019;16:e1002846. https://doi.org/10.1371/journal.pmed.1002846 .

Yu Y, Zhang X, Sun C, Zhou H, Zhang Q, Chen C. Reducing the rate of cesarean delivery on maternal request through institutional and policy interventions in Wenzhou, China. PLoS ONE. 2017;12:1–12. https://doi.org/10.1371/journal.pone.0186304.

Borem P, de Cássia SR, Torres J, Delgado P, Petenate AJ, Peres D, et al. A quality improvement initiative to increase the frequency of Vaginal delivery in Brazilian hospitals. Obstet Gynecol. 2020;135:415–25. https://doi.org/10.1097/AOG.0000000000003619 .

Ma R, Lao Terence T, Sun Y, Xiao H, Tian Y, Li B, et al. Practice audits to reduce caesareans in a tertiary referral hospital in south-western China. Bull World Health Organ. 2012;90:488–94. https://doi.org/10.2471/BLT.11.093369.

Clarke M, Devane D, Gross MM, Morano S, Lundgren I, Sinclair M, et al. OptiBIRTH: a cluster randomised trial of a complex intervention to increase vaginal birth after caesarean section. BMC Pregnancy Childbirth. 2020;20:143. https://doi.org/10.1186/s12884-020-2829-y .

Zhang L, Zhang L, Li M, Xi J, Zhang X, Meng Z, et al. A cluster-randomized field trial to reduce cesarean section rates with a multifaceted intervention in Shanghai, China. BMC Medicine. 2020;18:27. https://doi.org/10.1186/s12916-020-1491-6.

Fenwick J, Gamble J, Creedy DK, Buist A, Turkstra E, Sneddon A, et al. Study protocol for reducing childbirth fear: a midwife-led psycho-education intervention. BMC Pregnancy Childbirth. 2013;13:190. https://doi.org/10.1186/1471-2393-13-190 .

Toohill J, Fenwick J, Gamble J, Creedy DK, Buist A, Turkstra E, et al. A randomized controlled trial of a psycho-education intervention by midwives in reducing childbirth fear in pregnant women. Birth. 2014;41:384–94. https://doi.org/10.1111/birt.12136 .

Toohill J, Callander E, Gamble J, Creedy D, Fenwick J. A cost effectiveness analysis of midwife psycho-education for fearful pregnant women – a health system perspective for the antenatal period. BMC Pregnancy Childbirth. 2017;17:217. https://doi.org/10.1186/s12884-017-1404-7 .

Turkstra E, Mihala G, Scuffham PA, Creedy DK, Gamble J, Toohill J, et al. An economic evaluation alongside a randomised controlled trial on psycho-education counselling intervention offered by midwives to address women’s fear of childbirth in Australia. Sex Reprod Healthc. 2017;11:1–6. https://doi.org/10.1016/j.srhc.2016.08.003 .

Emmett CL, Shaw ARG, Montgomery AA, Murphy DJ, DiAMOND study group. Women’s experience of decision making about mode of delivery after a previous caesarean section: the role of health professionals and information about health risks. BJOG 2006;113:1438–45. https://doi.org/10.1111/j.1471-0528.2006.01112.x .

Emmett CL, Murphy DJ, Patel RR, Fahey T, Jones C, Ricketts IW, et al. Decision-making about mode of delivery after previous caesarean section: development and piloting of two computer-based decision aids. Health Expect. 2007;10:161–72. https://doi.org/10.1111/j.1369-7625.2006.00429.x .

Hollinghurst S, Emmett C, Peters TJ, Watson H, Fahey T, Murphy DJ, et al. Economic evaluation of the DiAMOND randomized trial: cost and outcomes of 2 decision aids for mode of delivery among women with a previous cesarean section. Med Decis Making. 2010;30:453–63. https://doi.org/10.1177/0272989X09353195 .

Frost J, Shaw A, Montgomery A, Murphy D. Women’s views on the use of decision aids for decision making about the method of delivery following a previous caesarean section: qualitative interview study. BJOG. 2009;116:896–905. https://doi.org/10.1111/j.1471-0528.2009.02120.x.

Rees KM, Shaw ARG, Bennert K, Emmett CL, Montgomery AA. Healthcare professionals’ views on two computer-based decision aids for women choosing mode of delivery after previous caesarean section: a qualitative study. BJOG. 2009;116:906–14. https://doi.org/10.1111/j.1471-0528.2009.02121.x .

Emmett CL, Montgomery AA, Murphy DJ. Preferences for mode of delivery after previous caesarean section: what do women want, what do they get and how do they value outcomes? Health Expect. 2011;14:397–404. https://doi.org/10.1111/j.1369-7625.2010.00635.x .

Bastani F, Hidarnia A, Montgomery KS, Aguilar-Vafaei ME, Kazemnejad A. Does relaxation education in anxious primigravid Iranian women influence adverse pregnancy outcomes?: a randomized controlled trial. J Perinat Neonatal Nurs. 2006;20:138–46. https://doi.org/10.1097/00005237-200604000-00007 .

Feinberg ME, Kan ML. Establishing Family Foundations: Intervention Effects on Coparenting, Parent/Infant Well-Being, and Parent-Child Relations. J Fam Psychol. 2008;22:253–63. https://doi.org/10.1037/0893-3200.22.2.253 .

Feinberg ME, Kan ML, Goslin MC. Enhancing coparenting, parenting, and child self-regulation: effects of family foundations 1 year after birth. Prev Sci. 2009;10. https://doi.org/10.1007/s11121-009-0130-4.

Rouhe H, Salmela-Aro K, Toivanen R, Tokola M, Halmesmäki E, Saisto T. Life satisfaction, general well-being and costs of treatment for severe fear of childbirth in nulliparous women by psychoeducative group or conventional care attendance. Acta Obstet Gynecol Scand. 2015;94:527–33. https://doi.org/10.1111/aogs.12594 .

Rouhe H, Salmela-Aro K, Toivanen R, Tokola M, Halmesmäki E, Ryding E-L, et al. Group psychoeducation with relaxation for severe fear of childbirth improves maternal adjustment and childbirth experience–a randomised controlled trial. J Psychosom Obstet Gynaecol. 2015;36:1–9. https://doi.org/10.3109/0167482X.2014.980722 .

Healy P, Smith V, Savage G, Clarke M, Devane D, Gross MM, et al. Process evaluation for OptiBIRTH, a randomised controlled trial of a complex intervention designed to increase rates of vaginal birth after caesarean section. Trials. 2018;19:9. https://doi.org/10.1186/s13063-017-2401-x .

Clarke M, Savage G, Smith V, Daly D, Devane D, Gross MM, et al. Improving the organisation of maternal health service delivery and optimising childbirth by increasing vaginal birth after caesarean section through enhanced women-centred care (OptiBIRTH trial): study protocol for a randomised controlled trial (ISRCTN10612254). Trials. 2015;16:542. https://doi.org/10.1186/s13063-015-1061-y .

Lundgren I, Healy P, Carroll M, Begley C, Matterne A, Gross MM, et al. Clinicians’ views of factors of importance for improving the rate of VBAC (vaginal birth after caesarean section): a study from countries with low VBAC rates. BMC Pregnancy Childbirth. 2016;16:350. https://doi.org/10.1186/s12884-016-1144-0 .

Sharifirad G, Rezaeian M, Soltani R, Javaheri S, Mazaheri MA. A survey on the effects of husbands’ education of pregnant women on knowledge, attitude, and reducing elective cesarean section. J Educ Health Promotion. 2013;2:50. https://doi.org/10.4103/2277-9531.119036 .

Valiani M, Haghighatdana Z, Ehsanpour S. Comparison of childbirth training workshop effects on knowledge, attitude, and delivery method between mothers and couples groups referring to Isfahan health centers in Iran. Iran J Nurs Midwifery Res. 2014;19:653–8.

Bastani F, Hidarnia A, Kazemnejad A, Vafaei M, Kashanian M. A randomized controlled trial of the effects of applied relaxation training on reducing anxiety and perceived stress in pregnant women. J Midwifery Womens Health. 2005;50:e36-40. https://doi.org/10.1016/j.jmwh.2004.11.008 .

Feinberg ME, Roettger ME, Jones DE, Paul IM, Kan ML. Effects of a psychosocial couple-based prevention program on adverse birth outcomes. Matern Child Health J. 2015;19:102–11. https://doi.org/10.1007/s10995-014-1500-5 .

Evans K, Spiby H, Morrell CJ. Developing a complex intervention to support pregnant women with mild to moderate anxiety: application of the medical research council framework. BMC Pregnancy Childbirth. 2020;20:777. https://doi.org/10.1186/s12884-020-03469-8 .

Rising SS. Centering pregnancy. An interdisciplinary model of empowerment. J Nurse Midwifery. 1998;43:46–54. https://doi.org/10.1016/s0091-2182(97)00117-1 .

Breustedt S, Puckering C. A qualitative evaluation of women’s experiences of the Mellow Bumps antenatal intervention. British J Midwife. 2013;21:187–94. https://doi.org/10.12968/bjom.2013.21.3.187 .

Evans K, Spiby H, Morrell JC. Non-pharmacological interventions to reduce the symptoms of mild to moderate anxiety in pregnant women a systematic review and narrative synthesis of women’s views on the acceptability of and satisfaction with interventions. Arch Womens Ment Health. 2020;23:11–28. https://doi.org/10.1007/s00737-018-0936-9 .

Hoddinott P, Chalmers M, Pill R. One-to-one or group-based peer support for breastfeeding? Women’s perceptions of a breastfeeding peer coaching intervention. Birth. 2006;33:139–46. https://doi.org/10.1111/j.0730-7659.2006.00092.x .

Heaney CA, Israel BA. Social networks and social support. In Glanz K, Rimer BK, Viswanath K (Eds.), Health behavior and health education: Theory, research, and practice. Jossey-Bass; 2008. pp. 189–210. https://psycnet.apa.org/record/2008-17146-009 .

World Health Organization. WHO recommendations on antenatal care for a positive pregnancy experience. 2016. Available from: https://www.who.int/publications/i/item/9789241549912 . Cited 20 Sept 2023.

World Health Organization. WHO recommendation on group antenatal care. WHO - RHL. 2021. Available from: https://srhr.org/rhl/article/who-recommendation-on-group-antenatal-care . Cited 20 Sept 2023.

Dumont A, Betrán AP, Kabore C, de Loenzien M, Lumbiganon P, Bohren MA, et al. Implementation and evaluation of nonclinical interventions for appropriate use of cesarean section in low- and middle-income countries: protocol for a multisite hybrid effectiveness-implementation type III trial. Implementation Science 2020. https://doi.org/10.21203/rs.3.rs-35564/v2 .

Tokhi M, Comrie-Thomson L, Davis J, Portela A, Chersich M, Luchters S. Involving men to improve maternal and newborn health: A systematic review of the effectiveness of interventions. PLOS ONE. 2018;13:e0191620. https://doi.org/10.1371/journal.pone.0191620 .

Gibore NS, Bali TAL. Community perspectives: An exploration of potential barriers to men’s involvement in maternity care in a central Tanzanian community. PLOS ONE. 2020;15:e0232939. https://doi.org/10.1371/journal.pone.0232939 .

Galle A, Plaieser G, Steenstraeten TV, Griffin S, Osman NB, Roelens K, et al. Systematic review of the concept ‘male involvement in maternal health’ by natural language processing and descriptive analysis. BMJ Global Health. 2021;6:e004909. https://doi.org/10.1136/bmjgh-2020-004909 .

Ladur AN, van Teijlingen E, Hundley V. Male involvement in promotion of safe motherhood in low- and middle-income countries: a scoping review. Midwifery. 2021;103:103089. https://doi.org/10.1016/j.midw.2021.103089 .

Comrie-Thomson L, Tokhi M, Ampt F, Portela A, Chersich M, Khanna R, et al. Challenging gender inequity through male involvement in maternal and newborn health: critical assessment of an emerging evidence base. Cult Health Sex. 2015;17:177–89. https://doi.org/10.1080/13691058.2015.1053412 .

Comrie-Thomson L, Gopal P, Eddy K, Baguiya A, Gerlach N, Sauvé C, et al. How do women, men, and health providers perceive interventions to influence men’s engagement in maternal and newborn health? A qualitative evidence synthesis. Soc Sci Med. 2021;291:114475. https://doi.org/10.1016/j.socscimed.2021.114475.

Doraiswamy S, Billah SM, Karim F, Siraj MS, Buckingham A, Kingdon C. Physician–patient communication in decision-making about Caesarean sections in eight district hospitals in Bangladesh: a mixed-method study. Reprod Health. 2021;18:34. https://doi.org/10.1186/s12978-021-01098-8 .

Dodd JM, Crowther CA, Huertas E, Guise J-M, Horey D. Planned elective repeat caesarean section versus planned vaginal birth for women with a previous caesarean birth. Cochrane Database Syst Rev. 2013. https://doi.org/10.1002/14651858.CD004224.pub3 .

Royal College of Obstetricians and Gynaecologists. Birth After Previous Caesarean Birth:Green-top Guideline No. 45. 2015. Available from: https://www.rcog.org.uk/globalassets/documents/guidelines/gtg_45.pdf . Cited 20 Sept 2023.

Royal Australian and New Zealand College of Obstetricians and Gynaecologists. Birth after previous caesarean section. 2019. Available from: https://ranzcog.edu.au/RANZCOG_SITE/media/RANZCOG-MEDIA/Women%27s%20Health/Statement%20and%20guidelines/Clinical-Obstetrics/Birth-after-previous-Caesarean-Section-(C-Obs-38)Review-March-2019.pdf?ext=.pdf . Cited 20 Sept 2023.

Davis D, Homer CS, Clack D, Turkmani S, Foureur M. Choosing vaginal birth after caesarean section: Motivating factors. Midwifery. 2020;88:102766. https://doi.org/10.1016/j.midw.2020.102766 .

Acknowledgements

We extend our thanks to Jim Berryman (Brownless Medical Library, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne) for his help in refining the search strategy for sibling studies.

Funding

This research was made possible with the support of the UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), a co-sponsored programme executed by the World Health Organization (WHO). RIZ is supported by a Melbourne Research Scholarship and a Human Rights Scholarship from The University of Melbourne. CSEH is supported by a National Health and Medical Research Council (NHMRC) Principal Research Fellowship. MAB’s time is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200100264) and a Dame Kate Campbell Fellowship (University of Melbourne Faculty of Medicine, Dentistry, and Health Sciences). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The contents of this publication are the responsibility of the authors and do not reflect the views of HRP or the World Health Organization.

Author information

Authors and affiliations

Gender and Women’s Health Unit, Nossal Institute for Global Health, School of Population and Global Health, University of Melbourne, Melbourne, VIC, Australia

Rana Islamiah Zahroh, Martha Vazquez Corona & Meghan A. Bohren

EPPI Centre, UCL Social Research Institute, University College London, London, UK

Katy Sutcliffe & Dylan Kneale

Department of Sexual and Reproductive Health and Research, UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland

Ana Pilar Betrán & Newton Opiyo

Maternal, Child, and Adolescent Health Programme, Burnet Institute, Melbourne, VIC, Australia

Caroline S. E. Homer

Contributions

- Conceptualisation and study design: MAB, APB, RIZ

- Funding acquisition: MAB, APB

- Data curation: RIZ, MAB, MVC

- Investigation, methodology and formal analysis: all authors

- Visualisation: RIZ, MAB

- Writing – original draft preparation: RIZ, MAB

- Writing – review and editing: all authors

Corresponding author

Correspondence to Rana Islamiah Zahroh.

Ethics declarations

Ethics approval and consent to participate

This study utilised published and openly available data, and thus ethics approval was not required.

Consent for publication

No direct individual contact was involved in this study; therefore, consent for publication was not needed.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Logic model in optimizing CS use.

Additional file 2.

Risk of bias assessments.

Additional file 3.

Coding framework and calibration rules.

Additional file 4.

Coding framework as applied to each intervention (data table).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Zahroh, R.I., Sutcliffe, K., Kneale, D. et al. Educational interventions targeting pregnant women to optimise the use of caesarean section: What are the essential elements? A qualitative comparative analysis. BMC Public Health 23 , 1851 (2023). https://doi.org/10.1186/s12889-023-16718-0

Received: 07 March 2022

Accepted: 07 September 2023

Published: 23 September 2023

DOI: https://doi.org/10.1186/s12889-023-16718-0

Keywords: Maternal health, Complex intervention, Intervention implementation

