Evaluating Research in Academic Journals: A Practical Guide to Realistic Evaluation

  • November 2018
  • Edition: 7th
  • Publisher: Routledge (Taylor & Francis)
  • ISBN: 978-0815365662

Maria Tcherni-Buzzeo, University of New Haven


Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
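One common way to judge whether a sample size is adequate is a power analysis. Below is a minimal, hypothetical sketch using statsmodels; the two-group t-test design, the medium effect size (Cohen's d = 0.5), and the 80% power target are illustrative assumptions, not universal requirements.

```python
# Minimal power-analysis sketch: how many participants per group a
# two-sided, two-sample t-test needs to detect a medium effect.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed effect size (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # desired statistical power
)
print(f"Required participants per group: {n_per_group:.0f}")  # about 64
```

If a study's groups fall well below the number such a calculation yields, its null results in particular should be read with caution.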

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measurement instruments and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research include the following:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study against specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (see the sketch after this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify potential conflicts of interest or biases that may have influenced the study design or the interpretation of results.
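To make the meta-analysis idea concrete, here is a minimal sketch of fixed-effect pooling via inverse-variance weighting. The three effect sizes and standard errors are made-up values for hypothetical studies; a real meta-analysis would also assess heterogeneity and consider a random-effects model.

```python
# Fixed-effect meta-analysis sketch: pool per-study effect sizes by
# weighting each study with the inverse of its variance.
import numpy as np

effects = np.array([0.30, 0.45, 0.25])  # hypothetical per-study effect sizes
se = np.array([0.10, 0.15, 0.08])       # hypothetical standard errors

weights = 1.0 / se**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```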

Example of Evaluating Research

A sample evaluation for students follows:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample of 500 college students is moderately large and could be considered broadly representative of the college student population. However, the study would be more convincing if the sample were larger or if a random sampling technique were used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis examines the relationship between two or more variables. However, the study did not report the statistical significance of the results or the effect sizes (a minimal sketch of such reporting follows this list).
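Where a study omits significance and effect-size reporting, a reviewer can at least check what such reporting would look like. Below is a minimal, hypothetical sketch in Python using statsmodels; the 500 rows are simulated stand-ins for the survey variables, and the column names are illustrative assumptions, not the study's own.

```python
# Hypothetical sketch: the kind of significance and effect-size
# reporting the example study omitted. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({"social_media_hours": rng.uniform(0, 8, 500)})
df["mental_health_score"] = 60 - 1.5 * df["social_media_hours"] + rng.normal(0, 10, 500)

# Regress mental health on social media use.
X = sm.add_constant(df["social_media_hours"])
model = sm.OLS(df["mental_health_score"], X).fit()

print(model.summary())  # coefficients, p-values, 95% confidence intervals
print(f"R-squared (variance explained): {model.rsquared:.3f}")
```

Reporting the coefficient, its p-value and confidence interval, and R-squared would let readers judge both whether the association is statistically reliable and whether it is large enough to matter.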

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.

Note: The example above is only a sample for students. Do not copy and paste it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics to Evaluate in Research

When evaluating research, consider the following characteristics:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest, and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

The limitations of evaluating research include the following:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies (the sketch after this list shows one way to measure agreement between evaluators).
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.
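One way to probe the subjectivity concern above is to have two evaluators rate the same studies and quantify their agreement. The sketch below uses Cohen's kappa via scikit-learn; the ten quality ratings are hypothetical.

```python
# Inter-rater agreement sketch: Cohen's kappa for two reviewers who
# rated the same ten studies as low/medium/high quality.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["high", "low", "medium", "high", "low",
              "medium", "high", "low", "high", "medium"]
reviewer_b = ["high", "low", "medium", "medium", "low",
              "medium", "high", "low", "high", "high"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0.0 = chance level
```

A low kappa signals that the evaluation criteria are being interpreted inconsistently and need sharper definitions.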

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Ann Fam Med, 6(4), July 2008

Evaluative Criteria for Qualitative Research in Health Care: Controversies and Recommendations

Deborah J. Cohen

Department of Family Medicine, Research Division, University of Medicine and Dentistry, Robert Wood Johnson Medical School, Somerset, New Jersey

Benjamin F. Crabtree

PURPOSE We wanted to review and synthesize published criteria for good qualitative research and develop a cogent set of evaluative criteria.

METHODS We identified published journal articles discussing criteria for rigorous research using standard search strategies, then examined reference sections of relevant journal articles to identify books and book chapters on this topic. A cross-publication content analysis allowed us to identify criteria and understand the beliefs that shape them.

RESULTS Seven criteria for good qualitative research emerged: (1) carrying out ethical research; (2) importance of the research; (3) clarity and coherence of the research report; (4) use of appropriate and rigorous methods; (5) importance of reflexivity or attending to researcher bias; (6) importance of establishing validity or credibility; and (7) importance of verification or reliability. General agreement was observed across publications on the first 4 quality dimensions. On the last 3, important divergent perspectives were observed in how these criteria should be applied to qualitative research, with differences based on the paradigm embraced by the authors.

CONCLUSION Qualitative research is not a unified field. Most manuscript and grant reviewers are not qualitative experts and are likely to embrace a generic set of criteria rather than those relevant to the particular qualitative approach proposed or reported. Reviewers and researchers need to be aware of this tendency and educate health care researchers about the criteria appropriate for evaluating qualitative research from within the theoretical and methodological framework from which it emerges.

INTRODUCTION

Until the 1960s, the scientific method, which involves hypothesis testing through controlled experimentation, was the predominant approach to research in the natural, physical, and social sciences. In the social sciences, proponents of qualitative research argued that the scientific method was not an appropriate model for studying people (eg, Cicourel, 1 Schutz, 2, 3 and Garfinkel 4), and that such methods as observation and interviewing would lead to a better understanding of social life in its naturally occurring, uncontrolled form. Biomedical and clinical research, with deep historical roots in quantitative methods, particularly observational epidemiology 5 and clinical trials, 6 was on the periphery of this debate. It was not until the late 1960s and 1970s that anthropologists and sociologists began introducing qualitative research methods into the health care field. 4, 7–14

Since that time, qualitative research methods have been increasingly used in clinical and health care research. Today, both journals (eg, Qualitative Health Research) and books are dedicated to qualitative methods in health care, 15–17 and a vast literature describes basic approaches of qualitative research, 18, 19 as well as specific information on focus groups, 20–23 qualitative content analysis, 24 observation and ethnography, 25–27 interviewing, 28–32 studying stories 33, 34 and conversation, 35–37 doing case study, 38, 39 and action research. 40, 41 Publications describe strategies for sampling, 42–45 analyzing, reporting, 45–49 and combining qualitative and quantitative methods 50; and a growing body of health care research reports findings from studies using in-depth interviews, 51–54 focus groups, 55–57 observation, 58–60 and a range of mixed-methods designs. 61–63

As part of a project to evaluate health care improvements, we identified a need to help health care researchers, particularly those with limited experience in qualitative research, evaluate and understand qualitative methodologies. Our goals were to review and synthesize published criteria for “good” qualitative research and develop a cogent set of evaluative criteria that would be helpful to researchers, reviewers, editors, and funding agencies. In what follows, we identify the standards of good qualitative research articulated in the health care literature and describe the lessons we learned as part of this process.

METHODS

A series of database searches was conducted to identify published journal articles, books, and book chapters offering criteria for evaluating and identifying rigorous qualitative research.

Data Collection and Management

With the assistance of a librarian, a search was conducted in December 2005 using the Institute for Scientific Information (ISI) Web of Science database, which indexes a wide range of journals and publications from 1980 to the present. Supplemental Appendix 1, available online-only at http://www.annfammed.org/cgi/content/full/6/4/331/DC1, describes our search strategy. This search yielded a preliminary database of 4,499 publications. Citation information, abstracts, and the number of times each article was cited by other authors were exported to a Microsoft Excel file and an Endnote database.

After manually reviewing the Excel database, we found and removed a large number of irrelevant publications in the physical and environmental sciences (eg, forestry, observational studies of crystals), and further sorted the remaining publications to identify publications in health care. Among this subset, we read abstracts and further sorted publications into (1) publications about qualitative methods, and (2) original research using qualitative methods. For the purposes of this analysis, we reviewed in detail only publications in the first category. We read each publication in this group and further subdivided the group into publications that (1) articulated criteria for evaluating qualitative research, (2) addressed techniques for doing a particular qualitative method (eg, interviewing, focus groups), or (3) described a qualitative research strategy (eg, sampling, analysis). Subsequent analyses focused on the first category; however, among publications in the second category, a number of articles addressed the issue of quality in, for example, case study, 39 interviewing, 28 focus groups, 22, 64, 65 discourse, 66 and narrative 67, 68 research, which we excluded as outside the scope of our analysis.

Books and book chapters could not be searched in the same way because a database cataloging these materials did not exist. Additionally, few books on qualitative methods are written specifically for health care researchers, so we would not be able to determine whether a book was or was not contributing to the discourse in this field. To overcome these challenges, we used a snowball technique, identifying and examining books and book chapters cited in the journal articles retrieved. Through this process, a number of additional relevant journal articles were identified as frequently cited but published in non–health care or nonindexed journals (eg, online journals). These articles were included in our analysis.

We read journal articles and book chapters and prepared notes recording the evaluative criteria that author(s) posited and the world view or belief system in which criteria were embedded, if available. When criteria were attributed to another work, this information was noted. Books were reviewed and analyzed differently. We read an introductory chapter or two to understand the authors’ beliefs about research and prepared summary notes. Because most books contained a section discussing evaluative criteria, we identified and read this section, and prepared notes in the manner described above for journal articles and book chapters.

An early observation was that not all publications offered explicit criteria. Publications offering explicit evaluative criteria were treated as a group. Publications by the same author were analyzed and determined to be sufficiently similar to cluster. We examined evaluative criteria across publications, listing similar criteria in thematic clusters (eg, importance of research, conducting ethically sound research), identifying the central principle or theme of the cluster, and reviewing and refining clusters. Publications that discussed evaluative criteria for qualitative research but did not offer explicit criteria were analyzed separately.

Preliminary findings were synthesized into a Web site for the Robert Wood Johnson Foundation ( http://www.qualres.org ). This Web site was reviewed by Mary Dixon-Woods, PhD, a health care researcher with extensive expertise in qualitative research, whose feedback regarding the implications of endorsing or positing a unified set of evaluative criteria encouraged our reflection and influenced this report.

RESULTS

We identified 29 journal articles 19, 26, 45, 69–94 and 16 books or book chapters 95–110 that offered explicit criteria for evaluating the quality of qualitative research. Supplemental Appendix 2, available online-only at http://www.annfammed.org/cgi/content/full/6/4/331/DC1, contains a table listing citation information and criteria posited in these works. An additional 29 publications were identified that did not offer explicit criteria but informed discourse on this topic and our analysis. 111–139

Seven evaluative criteria were identified: (1) carrying out ethical research; (2) importance of the research; (3) clarity and coherence of the research report; (4) use of appropriate and rigorous methods; (5) importance of reflexivity or attending to researcher bias; (6) importance of establishing validity or credibility; and (7) importance of verification or reliability. General agreement was observed across publications on the first 4 quality dimensions; however, on the last 3 criteria, disagreement was observed in how the concepts of researcher bias, validity, and reliability should be applied to qualitative research. Differences in perspectives were grounded in paradigm debates regarding the nature of knowledge and reality, with some arguing from an interpretivist perspective and others from a more pragmatic realist perspective. Three major paradigms and their implications are described in Table 1.

Table 1. Common Paradigms in Health Care Research

Positivism

  • There is a real world of objects apart from people.
  • Researchers can know this reality and use symbols to accurately describe, represent, and explain it.
  • Researchers can compare their claims against this objective reality, which allows for prediction, control, and empirical verification.

Realism

  • There are real-world objects apart from people.
  • Researchers can only know reality from their perspective of it.
  • We cannot separate ourselves from what we know; however, objectivity is an ideal researchers strive for through careful sampling and specific techniques.
  • It is possible to evaluate the extent to which objectivity or truth is attained; this can be evaluated by a community of scholars and those who are studied.

Interpretivism

  • Reality as we know it is constructed intersubjectively; meaning and understanding are developed socially and experientially.
  • We cannot separate ourselves from what we know; who we are and how we understand the world are linked.
  • Researchers’ values are inherent in all phases of research; truth is negotiated through dialogue.
  • Findings or knowledge claims are created as an investigation proceeds and emerge through dialogue and negotiations of meanings among community members (both scholars and the community at large).
  • All interpretations are located in a particular context, setting, and moment.

Fundamental Criteria

It was widely agreed that qualitative research should be ethical, be important, be clearly and coherently articulated, and use appropriate and rigorous methods. Conducting ethically sound research involved carrying out research in a way that was respectful, 69 humane, 95 and honest, 77 and that embodied the values of empathy, collaboration, and service. 77, 84 Research was considered important when it was pragmatically and theoretically useful and advanced the current knowledge base.* Clarity and coherence of the research report were criteria emphasizing that the report itself should be concise and provide a clear and adequate description of the research question, background and contextual material, study design (eg, study participants, how they were chosen, how data were collected and analyzed), and rationale for methodological choices. Description of the data should be unexaggerated, and the relationship between data and interpretation should be understandable.†

Researcher Bias

The majority of publications discussed issues of researcher bias, recognizing that researchers’ preconceptions, motivations, and ways of seeing shape the qualitative research process. (It should be noted that there is ample evidence to suggest researcher motivations and preconceptions shape all research. 140) One perspective (interpretivist) viewed researcher subjectivity as “something used actively and creatively through the research process” rather than as a problem of bias. 72 A hallmark of good research was understanding and reporting relevant preconceptions through reflexive processing (ie, reflective journal-keeping).‡ A second perspective (realist) viewed researcher bias as a problem affecting the trustworthiness, truthfulness, or validity of the account. In addition to understanding researchers’ motivations and preconceptions, value and rigor were enhanced by controlling bias through techniques to verify and confirm findings, as discussed in more detail below.* Thus, whereas all publications agreed that researcher bias was an important consideration, the approach for managing bias was quite different depending on the paradigm grounding the work.

Validity

A number of publications framed the concept of validity in the context of quantitative research, where it typically refers to the “best available approximation to the truth or falsity of propositions.” 142(p37) Internal validity refers to truth about claims made regarding the relationship between 2 variables. External validity refers to the extent to which we can generalize findings. Across publications, different ideas emerged.

Understanding the concept of validity requires understanding beliefs about the nature of reality. One may believe that there can be multiple ways of understanding social life and reality, even multiple realities. This view of reality emerges from an interpretivist perspective. Hallmarks of high-quality qualitative research from this perspective include producing a rich, substantive account with strong evidence for inferences and conclusions, and reporting the lived experiences of those observed and their perspectives on social reality, while recognizing that these could be multiple and complex and that the researcher is intertwined in the portrayal of this experience. The goal is understanding and providing a meaningful account of the complex perspectives and realities studied.†

In contrast, research may be based on the belief that there is one reality that can be observed, and this reality is knowable through the process of research, albeit sometimes imperfectly. This perspective is typically associated with a positivist paradigm that underlies quantitative research, but also with the realist paradigm found in some qualitative research. Qualitative research based on this view tends to use alternative terms for validity (eg, adequacy, trustworthiness, accuracy, credibility) and emphasizes striving for truth through the qualitative research process, for example, by having outside auditors or research participants validate findings. An important dimension of good qualitative research, therefore, is plausibility and accuracy.‡

Verification or Reliability

Divergent perspectives were observed on the appropriateness of applying the concept of verifiability or reliability when evaluating qualitative research. Like validity, this concept is rooted in quantitative and experimental methods and refers to the extent to which measures and experimental treatments are standardized and controlled to reduce error and decrease the chance of obtaining differences. 142 Two distinct approaches to evaluating the reliability of qualitative research were articulated. In the first, verification was a process negotiated between researchers and readers, where researchers were responsible for reporting information (eg, data excerpts, how the researcher dealt with tacit knowledge, information about the interpretive process) so readers could discern for themselves the patterns identified and verify the data, its analysis, and its interpretation.§ This interpretivist perspective contrasts with the second, realist, perspective. Rather than leaving the auditing and confirming role to the reader, steps to establish dependability should be built into the research process to repeat and affirm researchers’ observations. In some cases, special techniques, such as member checking, peer review, debriefing, and external audits to achieve reliability, are recommended and posited as hallmarks of quality in qualitative research.|| In Table 2 we provide a brief description of these techniques.

Table 2. Verification Techniques Used in Qualitative Research

  • Triangulation: Using multiple data sources in an investigation to produce understanding.
  • Peer review/debriefing: The “process of exposing oneself to a disinterested peer in a manner paralleling an analytical session and for the purpose of exploring aspects of the inquiry that might otherwise remain only implicit within the inquirer’s mind.”
  • External audits/auditing: Having a researcher not involved in the research process examine both the process and product of the research study. The purpose is to evaluate the accuracy of the study and whether the findings, interpretations, and conclusions are supported by the data.
  • Member checking: Data, analytic categories, interpretations, and conclusions are tested with members of those groups from whom the data were originally obtained. This can be done both formally and informally, as opportunities for member checks may arise during the normal course of observation and conversation.

Perspectives on the Value of Criteria

Health care researchers also discuss the usefulness of evaluative criteria. We observed 3 perspectives on the utility of having unified criteria for assessing qualitative research.

One perspective recognized the importance of validity and reliability as criteria for evaluating qualitative research. 132, 133 Morse et al make the case that without validity and reliability, qualitative research risks being seen as nonscientific and lacking rigor. 88, 125 Their argument is compelling and suggests reliability and validity should not be evaluated at the end of the project, but should be goals that shape the entire research process, influencing study design, data collection, and analysis choices. A second approach is to view the criteria of validity and reliability as inappropriate for qualitative research, and to argue for the development of alternative criteria relevant for assessing qualitative research.*

This position is commonly based on the premise that the theoretical and methodological beliefs informing quantitative research (from whence the criteria of reliability and validity come) are not the same as the methodological and theoretical beliefs informing qualitative research and are, therefore, inappropriate. 136 Cogent criteria for evaluating qualitative research are needed. Without well-defined, agreed-upon, and appropriate standards, qualitative research risks being evaluated by quantitative standards, which can lead to assimilation, preferences for qualitative research that are most compatible with quantitative standards, and rejection of more radical methods that do not conform to quantitative criteria. 94 From this perspective emerged a number of alternative criteria for evaluating qualitative research.

Alternative criteria have been open to criticism. We observed such criticism in publications challenging the recommendation that qualitative research using such techniques as member checking, multiple coding, external audits, and triangulation is more reliable, valid, and of better quality. 72, 82, 90, 91, 112, 127, 143 Authors challenging this recommendation show how techniques such as member checking can be problematic. For example, it does not make sense to ask study participants to check or verify audio-recorded transcribed data. In other situations, study participants asked to check or verify data may not recall what they said or did. Even when study participants recall their responses, there are a number of factors that may account for discrepancies between what participants recall and the researcher’s data and preliminary findings. For instance, the purpose of data analysis is to organize individual statements into themes that produce new, higher-order insights. Individual contributions may not be recognizable to participants, and higher-order insights might not make sense. 82 Similar issues have been articulated about the peer-review and auditing processes 127, 143 and some uses of triangulation. 130 Thus, alternative criteria for evaluating qualitative research have been posited and criticized on the grounds that such criteria (1) cannot be applied in a formulaic manner; (2) do not necessarily lead to higher-quality research, particularly if these techniques are poorly implemented; and (3) foster the false expectation among evaluators of research that use of one or more of these techniques in a study is a mark of higher quality. 72, 81, 90, 91, 112, 123, 127

A third approach suggests the search for a cogent set of evaluative criteria for qualitative research is misguided. The field of qualitative research is broad and diverse, not lending itself to evaluation by one set of criteria. Instead, researchers need to recognize each study is unique in its theoretical positioning and approach, and different evaluative criteria are needed. To fully understand the scientific quality of qualitative research sometimes requires a deep understanding of the theoretical foundation and the science of the approach. Thus, evaluating the scientific rigor of qualitative research requires learning, understanding, and using appropriate evaluative criteria. 123 , 124 , 135 , 137

DISCUSSION

There are a number of limitations of this analysis to be acknowledged. First, although we conducted a comprehensive literature review, it is always possible for publications to be missed, particularly with our identification of books and book chapters, which relied on a snowball technique. In addition, relying on publications and works cited within publications to understand the dialogue about rigor in qualitative methods is imperfect. Although these discussions manifest in the literature, they also arise at conferences, grant review sessions, and hallway conversations. One’s views are open to revision (cf Lincoln’s 103, 144), and relationships with editors and others shape our ideas and whom we cite. In this analysis, we cannot begin to understand these influences.

Our perspectives affect this report. Both authors received doctoral training in qualitative methods in social science disciplines (sociology/communication and anthropology) and have assimilated these values into health care as reviewers, editors, and active participants in qualitative health care studies. Our training shapes our beliefs, so we feel most aligned with interpretivism. This grounding influences how we see qualitative research, as well as the perspectives and voices we examine in this analysis. We have been exposed to a wide range of theoretical and methodological approaches for doing qualitative research, which may make us more inclined to notice the generic character of evaluative criteria emerging in the health care community and take note of the potential costs of this approach.

In addition, we use 3 common paradigms—interpretivism, realism, and positivism—in our analysis. It is important to understand that paradigms and debates about paradigms are political and used to argue for credibility and resources in the research community. In this process, underlying views about the nature of knowledge and reality have been simplified, sometimes even dichotomized (interpretivism vs positivism). We recognize our use of these paradigms as an oversimplification and limitation of our work, but one that is appropriate if only because these categories are so widely used in the works we analyze.

Our analysis reveals some common ground has been negotiated with regard to establishing criteria for rigorous qualitative research. It is important to notice that the criteria that have been widely accepted—carrying out ethical research and important research, preparing a clear and coherent research report, and using appropriate and rigorous methods—are applicable to all research. Divergent perspectives were observed in the field with regard to 3 criteria: researcher bias, validity, and verification or reliability. These criteria are more heavily influenced by quantitative and experimental approaches 142 and, not surprisingly, have met with resistance. To understand the implications of these influences, our analysis suggests the utility of examining how these criteria are embedded in beliefs about the nature of knowledge and reality.

Central to the interpretivist paradigm, which historically grounds most qualitative traditions, is the assumption that realities are multiple, fluid, and co-constructed, and knowledge is taken to be negotiated between the observer and participants. From this framework emerge evaluative criteria valuing research that illuminates subjective meanings and understands and articulates multiple ways of seeing a phenomenon. Rich substance and content, clear delineation of the research process, evidence of immersion and self-reflection, and demonstration of the researcher’s way of knowing, particularly with regard to tacit knowledge, are essential features of high-quality research.

In contrast, fundamental to a positivist paradigm, which historically grounds most quantitative approaches, is the assumption that there is a single objective reality and the presumption that this reality is knowable. The realist paradigm softens this belief by suggesting knowledge of reality is always imperfect. Within the realist framework, the goal of qualitative research is to strive for attaining truth, and good research is credible, confirmable, dependable, and transferable. Thus, rigorous qualitative research requires more than prolonged engagement, persistent observation, thick description, and negative case analysis; it should also use such techniques as triangulation, external auditing, and member checking to promote attainment of truth or validity through the process of verifying findings.

One reason for the centrality of the realist paradigm in health care research may be its ability to assimilate the values, beliefs, and criteria for rigorous research that emerge from the positivist paradigm. In a community that values biomedical bench research, sees the randomized controlled trial as a reference standard, holds a belief in an objective reality, and values research that is reliable, valid, and generalizable (typically positivist ideals), it is not surprising that realist views with regard to qualitative research have found favor. Unlike interpretivism, realism adopts a philosophy of science not at odds with the commonly held ideals of positivism. By maintaining a belief in an objective reality and positing truth as an ideal qualitative researchers should strive for, realists have succeeded at positioning the qualitative research enterprise as one that can produce research which is valid, reliable, and generalizable, and therefore, of value and import equal to quantitative biomedical research.

Although qualitative research emerging from a realist paradigm may have successfully assimilated into the clinical research community (as it has in other disciplines), it may be at a cost. Qualitative approaches most compatible with traditional values of quantitative research may be most likely to be accepted (published and funded). More radical methods (eg, feminist standpoint research, critical postmodern research), which can make innovative contributions to the field, may be marginalized because they do not fit the evaluative criteria that have emerged in the health care community. 94, 115 In addition, doing rigorous qualitative research in the way realists prescribe involves using a number of techniques that may foster the appearance of validity and reliability, but can be problematic if inappropriately applied.*

The search for a single set of criteria for good qualitative research is grounded in the assumption that qualitative research is a unified field. 124, 135, 137, 145 Qualitative research is grounded in a range of theoretical frameworks and uses a variety of methodological approaches to guide data collection and analysis. Because most manuscript and grant reviewers are not qualitative experts, they are likely to embrace a generic set of criteria. Reviewers and researchers need to be aware of the 7 criteria for good qualitative research, but they also need to be aware that applying the same standards across all qualitative research is inappropriate. Helping reviewers understand how an unfamiliar qualitative approach should be executed, and what standards are appropriate for evaluating its quality, is essential, because reviewers, even qualitative experts, might not be well-versed in the particular qualitative method being used or proposed. Panel organizers and editors need to recognize that a qualitative expert may have only a very narrow range of expertise. Moreover, some researchers may be so entrenched in the dogma of their own approach that they are unable to value qualitative methods dissimilar from their own. This type of ax grinding harms not only the efforts of qualitative researchers, but the field more generally.

Future work needs to focus on educating health care researchers about the criteria for evaluating qualitative research from within the appropriate theoretical and methodological framework. Although the ideas posited here suggest there may be a connection between how quality is defined and the kind of work published or funded, this assumption is worthy of empirical examination. In addition, the field needs to reflect on the value of qualitative health care research and consider whether we have the space and models for adequately reporting interpretive research in our medical journals.

Acknowledgments

We are indebted to Mary Dixon-Woods, PhD, for her insightful comments on earlier versions of this work.

Conflicts of interest: none reported

Funding support: Preparation of this report was supported by a grant from the Robert Wood Johnson Foundation (#053512).

* References 26, 69, 70, 73, 77, 80, 94, 95, 98, 106.

† References 19, 26, 69, 70, 73, 75, 77, 84, 85, 87, 95, 107.

‡ References 19, 69, 70, 72, 73, 77, 80–82, 87, 94, 103, 105.

* References 19, 45, 71, 74, 78, 79, 83, 87, 96, 101–106, 108, 141.

† References 69, 72, 76, 77, 80–82, 89, 95, 96.

‡ References 45, 70, 71, 73, 74, 78, 79, 83, 86, 87, 90, 91, 93, 96, 98, 100–108, 141.

§ References 69, 70, 72, 81, 82, 89, 95, 109, 110.

|| References 19, 45, 71, 73, 74, 76, 78, 80, 83, 84, 86, 87, 93, 96, 100–106, 108, 141.

* References 72, 81, 82, 85, 94, 114, 118, 129, 136.

* References 72, 81, 90, 91, 112, 123, 127, 145.


Doing Research in Counselling and Psychotherapy

Student Resources

Is It Any Good? Criteria for Evaluating the Quality of a Research Study

When undertaking any research, it is important to be aware of the criteria by which various stakeholders will evaluate it in terms of the credibility of its contribution to knowledge. The links and articles in this section build on the material in Chapter 6 by allowing you to access some key standard-setting sources.

Journal article reporting standards of the American Psychological Association: what APA journal editors and reviewers expect an article to look like (includes separate criteria for qualitative, quantitative, and mixed methods studies).

CONSORT Guidelines for reporting clinical trials: criteria that have been accepted as defining standards for controlled quantitative outcome studies.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.

Highly influential set of criteria for assessing the credibility and practical utility of qualitative studies.

Morse, J. M. (2015). Critical analysis of strategies for determining rigor in qualitative inquiry. Qualitative Health Research, 25(9), 1212–1222.

Review and discussion, by a leading figure in qualitative research, of the procedures qualitative researchers use to achieve credible and trustworthy findings.

Birt, L., Scott, S., Cavers, D., Campbell, C., & Walter, F. (2016). Member checking: A tool to enhance trustworthiness or merely a nod to validation? Qualitative Health Research, 26(13), 1802–1811.

Member checking is a widely used credibility procedure in qualitative research. This paper provides an authoritative discussion of how to use the technique effectively.

Hannes, K., Heyvaert, M., Slegers, K., Vandenbrande, S., & Van Nuland, M. (2015). Exploring the potential for a consolidated standard for reporting guidelines for qualitative research: An argument Delphi approach. International Journal of Qualitative Methods, 14(4), 1609406915611528.

Experienced qualitative researchers were asked for their views on credibility criteria.

Truijens, F. L., Cornelis, S., Desmet, M., De Smet, M. M., & Meganck, R. (2019). Validity beyond measurement: Why psychometric validity is insufficient for valid psychotherapy research. Frontiers in Psychology, 10, 532.

Discussion of the importance of criteria beyond those currently used by therapy researchers.

Research Evaluation

  • First Online: 23 June 2020

Carlo Ghezzi

This chapter is about research evaluation. Evaluation is quintessential to research. It is traditionally performed through qualitative expert judgement. The chapter presents the main evaluation activities in which researchers can be engaged. It also introduces the current efforts towards devising quantitative research evaluation based on bibliometric indicators and critically discusses their limitations, along with their possible (limited and careful) use.
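As a concrete example of the bibliometric indicators the chapter critiques, here is a minimal sketch of the h-index, one of the most widely used: a researcher has index h if h of their papers each have at least h citations. The citation counts below are hypothetical.

```python
# h-index sketch: largest h such that h papers have >= h citations each.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Such single-number summaries are exactly the kind of indicator the chapter cautions should be used only in a limited and careful way.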





Author information

Carlo Ghezzi, Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Ghezzi, C. (2020). Research Evaluation. In: Being a Researcher. Springer, Cham. https://doi.org/10.1007/978-3-030-45157-8_5

DOI: https://doi.org/10.1007/978-3-030-45157-8_5

Published: 23 June 2020

Publisher: Springer, Cham

Print ISBN: 978-3-030-45156-1

Online ISBN: 978-3-030-45157-8


