• USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

Design Flaws to Avoid


The research design establishes the decision-making processes, conceptual structure of investigation, and methods of analysis used to address the study's research problem. Taking the time to develop a thorough research design helps you organize your thoughts, set the boundaries of your study, maximize the reliability of your findings, and avoid misleading or incomplete conclusions. Therefore, if any aspect of your research design is flawed or under-developed, the quality and reliability of your final results, as well as the overall value of your study, will be diminished.

In no particular order, here are some common problems to avoid when designing a research study. Some are general issues you should think about as you organize your thoughts [e.g., developing a good research problem], while other issues must be explicitly addressed in your paper [e.g., describing study limitations].

  • Lack of Specificity -- do not describe the investigative aspects of your study in overly broad generalities. Avoid vague qualifiers such as extremely, very, entirely, or completely. It's important that you design a study that describes the process of investigation in clear and concise terms. Otherwise, the reader cannot be certain about what you intend to do.
  • Poorly Defined Research Problem -- the starting point of most new research in the social and behavioral sciences is to formulate a research problem and begin the process of developing questions that address it. Your paper should outline and explicitly delimit the problem and state what you intend to investigate, because this will determine what research design you will use [identifying the research problem always precedes choice of design].
  • Lack of Theoretical Framework -- the theoretical framework represents the conceptual foundation of your study. Therefore, your research design should include an explicit set of logically derived hypotheses, basic postulates, or assumptions that can be tested in relation to the research problem. More information about developing a theoretical framework can be found here.
  • Significance -- this refers to describing what value your study has in contributing to understanding the research problem. In the social and behavioral sciences, arguing why a study is significant is framed in the context of clearly answering the "So What?" question [e.g., "This study compares key areas of economic relations among three Central American countries." So what?]. In describing the research design, state why your study is important and how it contributes to the larger body of studies about the topic being investigated.
  • Relationship between Past Research and Your Paper -- do not simply offer a summary description of prior research. Your literature review should include an explicit statement linking the results of prior research to the research you are about to undertake. This can be done, for example, by identifying basic weaknesses in previous studies, filling specific gaps in knowledge, or describing how your study contributes a unique or different perspective or approach to the problem.
  • Provincialism -- this refers to designing a narrowly applied scope, geographical area, sampling, or method of analysis that restricts your ability to create meaningful outcomes and, by extension, to obtain results that are relevant and possibly transferable to understanding phenomena in other settings. The scope of your research should be clearly defined, but not so narrowly that you cannot extrapolate the findings in a meaningful way toward better understanding the research problem.
  • Objectives, Hypotheses, or Questions -- your research design should include one or more questions or hypotheses that you are attempting to answer about the research problem. These should be clearly articulated and closely tied to the overall aims of your paper. Although there is no rule regarding the number of questions or hypotheses associated with a research problem, most studies in the social and behavioral sciences address between two and five key questions.
  • Poor Methodological Approach -- the design must include a well-developed and transparent plan for how you intend to collect or generate data and how it will be analyzed. Ensure that the method used to gather information for analysis is aligned with the topic of inquiry and the underlying research questions to be addressed.
  • Proximity Sampling -- this refers to using a sample that is based not on the purpose of your study, but rather, is based on the proximity of a particular group of subjects. The units of analysis, whether they be persons, places, events, or things, should not be based solely on ease of access and convenience. Note that this does not mean you should not use units of analysis that are easy to access. The point is that this closeness to data or information cannot be the sole factor that determines the purpose of your study.
  • Techniques or Instruments -- be clear in describing the techniques [e.g., semi-structured interviews; Linear Regression Analysis] or instruments [e.g., questionnaire; online survey] used to gather data. Your research design should note how the technique or instrument will provide reasonably reliable data to answer the questions associated with the research problem.
  • Statistical Treatment -- in quantitative studies, you must give a complete description of how you will organize the raw data for analysis. In most cases, this involves describing the data through measures of central tendency like mean, median, and mode that help the researcher explain how the data are concentrated and, thus, lead to meaningful interpretations of key trends or patterns found within that data [a minimal computational example follows this list].
  • Vocabulary -- research often contains jargon and specialized language that the reader is presumably familiar with. However, avoid overusing technical or pseudo-technical terminology when describing your research design. Problems with vocabulary also include the use of popular terms, clichés, or culture-specific language that is inappropriate for academic writing. More information about academic writing can be found here.
  • Ethical Dilemmas -- in the methods section of qualitative research studies, your design must document how you intend to minimize risk for participants [a.k.a., "respondents", "human subjects"] during stages of data gathering while, at the same time, still being able to adequately address the research problem. Failure to do so can lead the reader to question the validity and objectivity of your entire study.
  • Limitations of Study -- all studies have limitations. Your research design should anticipate and explain the reasons why these limitations exist and clearly describe the extent of missing data. It is important to include a statement concerning what impact these limitations may have on the validity of your results and how you helped to ameliorate the significance of these limitations. For more details about study limitations, go here.
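
Since the statistical treatment item above mentions describing raw data through measures of central tendency, here is a minimal sketch of computing them with Python's standard library. The scores are hypothetical and stand in for whatever raw quantitative data a study collects.

```python
# Describing a raw dataset through measures of central tendency
# (mean, median, mode) using only the standard library.
import statistics

scores = [72, 85, 85, 90, 68, 77, 85, 91, 73, 88]  # hypothetical survey scores

mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # middle value of the sorted data
mode = statistics.mode(scores)      # most frequently occurring value

print(f"mean={mean:.1f}, median={median}, mode={mode}")
# mean=81.4, median=85.0, mode=85
```

Reporting all three values, rather than the mean alone, helps show whether the data are symmetric or skewed before deeper analysis.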

Butin, Dan W. The Education Dissertation: A Guide for Practitioner Scholars. Thousand Oaks, CA: Corwin, 2010; Carter, Susan. Structuring Your Research Thesis. New York: Palgrave Macmillan, 2012; Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 5th edition. Thousand Oaks, CA: Sage, 2018; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences. Thousand Oaks, CA: Corwin Press, 2008; Lunsford, Andrea and Robert Connors. The St. Martin’s Handbook. 3rd ed. New York: St. Martin’s Press, 1995.


How to Write Limitations of the Study (with examples)

This blog emphasizes the importance of recognizing and effectively writing about limitations in research. It discusses the types of limitations, their significance, and provides guidelines for writing about them, highlighting their role in advancing scholarly research.

Updated on August 24, 2023


No matter how well thought out, every research endeavor encounters challenges. There is simply no way to predict all possible variances throughout the process.

These uncharted boundaries and abrupt constraints are known as limitations in research. Identifying and acknowledging limitations is crucial for conducting rigorous studies. Limitations provide context and shed light on gaps in the prevailing inquiry and literature.

This article explores the importance of recognizing limitations and discusses how to write them effectively. By interpreting limitations in research and considering prevalent examples, we aim to reframe the perception from shameful mistakes to respectable revelations.

What are limitations in research?

In the clearest terms, research limitations are the practical or theoretical shortcomings of a study that are often outside of the researcher’s control. While these weaknesses limit the generalizability of a study’s conclusions, they also present a foundation for future research.

Sometimes limitations arise from tangible circumstances like time and funding constraints, or equipment and participant availability. Other times the rationale is more obscure and buried within the research design. Common types of limitations and their ramifications include:

  • Theoretical: limits the scope, depth, or applicability of a study.
  • Methodological: limits the quality, quantity, or diversity of the data.
  • Empirical: limits the representativeness, validity, or reliability of the data.
  • Analytical: limits the accuracy, completeness, or significance of the findings.
  • Ethical: limits the access, consent, or confidentiality of the data.

Regardless of how, when, or why they arise, limitations are a natural part of the research process and should never be ignored. Like every other aspect of a study, they serve a vital purpose.

Why is identifying limitations important?

Whether to seek acceptance or avoid struggle, humans often instinctively hide flaws and mistakes. Merging this thought process into research by attempting to hide limitations, however, is a bad idea. It has the potential to negate the validity of outcomes and damage the reputation of scholars.

By identifying and addressing limitations throughout a project, researchers strengthen their arguments and curtail the chance of peer censure based on overlooked mistakes. Pointing out these flaws shows an understanding of variable limits and a scrupulous research process.

Showing awareness of and taking responsibility for a project’s boundaries and challenges validates the integrity and transparency of a researcher. It further demonstrates that the researchers understand the applicable literature and have thoroughly evaluated their chosen research methods.

Presenting limitations also benefits the readers by providing context for research findings. It guides them to interpret the project’s conclusions only within the scope of very specific conditions. By allowing for an appropriate generalization of the findings that is accurately confined by research boundaries and is not too broad, limitations boost a study’s credibility.

Limitations are true assets to the research process. They highlight opportunities for future research. When researchers identify the limitations of their particular approach to a study question, they enable precise transferability and improve chances for reproducibility. 

Simply stating a project’s limitations is not adequate for spurring further research, though. To spark the interest of other researchers, these acknowledgements must come with thorough explanations regarding how the limitations affected the current study and how they can potentially be overcome with amended methods.

How to write limitations

Typically, the information about a study’s limitations is situated either at the beginning of the discussion section to provide context for readers or at the conclusion of the discussion section to acknowledge the need for further research. However, it varies depending upon the target journal or publication guidelines. 

Don’t hide your limitations

It is also important to not bury a limitation in the body of the paper unless it has a unique connection to a topic in that section. If so, it needs to be reiterated with the other limitations or at the conclusion of the discussion section. Wherever it is included in the manuscript, ensure that the limitations section is prominently positioned and clearly introduced.

While maintaining transparency by disclosing limitations means taking a comprehensive approach, it is not necessary to discuss everything that could have potentially gone wrong during the research study. If the introduction makes no commitment to investigating a particular issue, there is no need to treat that issue as a limitation of the research. Consider the term ‘limitations’ fully and ask, “Did it significantly change or limit the possible outcomes?” Then, qualify the occurrence as either a limitation to include in the current manuscript or as an idea to note for other projects.

Writing limitations

Once the limitations are concretely identified and it is decided where they will be included in the paper, researchers are ready for the writing task. Including only what is pertinent, keeping explanations detailed but concise, and employing the following guidelines is key for crafting valuable limitations:

1) Identify and describe the limitations: Clearly introduce the limitation by classifying its form and specifying its origin. For example:

  • An unintentional bias encountered during data collection
  • An intentional use of unplanned post-hoc data analysis

2) Explain the implications: Describe how the limitation potentially influences the study’s findings and how the validity and generalizability are subsequently impacted. Provide examples and evidence to support claims of the limitations’ effects without making excuses or exaggerating their impact. Overall, be transparent and objective in presenting the limitations, without undermining the significance of the research.

3) Provide alternative approaches for future studies: Offer specific suggestions for potential improvements or avenues for further investigation. Demonstrate a proactive approach by encouraging future research that addresses the identified gaps and, therefore, expands the knowledge base.

Whether presenting limitations as an individual section within the manuscript or as a subtopic in the discussion area, authors should use clear headings and straightforward language to facilitate readability. There is no need to complicate limitations with jargon, computations, or complex datasets.

Examples of common limitations

Limitations are generally grouped into two categories: methodology and research process.

Methodology limitations

Methodology may include limitations due to:

  • Sample size
  • Lack of available or reliable data
  • Lack of prior research studies on the topic
  • Measure used to collect the data
  • Self-reported data

Example of a methodology limitation: the researcher addresses how the large sample size requires a reassessment of the measures used to collect and analyze the data.

Research process limitations

Limitations during the research process may arise from:

  • Access to information
  • Longitudinal effects
  • Cultural and other biases
  • Language fluency
  • Time constraints

Example of a research process limitation: the author points out that the model’s estimates are based on potentially biased observational studies.

Final thoughts

Successfully proving theories and touting great achievements are only two very narrow goals of scholarly research. The true passion and greatest efforts of researchers come more in the form of confronting assumptions and exploring the obscure.

In many ways, recognizing and sharing the limitations of a research study both allows for and encourages this type of discovery that continuously pushes research forward. By using limitations to provide a transparent account of the project's boundaries and to contextualize the findings, researchers pave the way for even more robust and impactful research in the future.

Charla Viera, MS

See our "Privacy Policy"

Ensure your structure and ideas are consistent and clearly communicated

Pair your Premium Editing with our add-on service Presubmission Review for an overall assessment of your manuscript.

How to Understand Flaws in Clinical Research


Teddy D. Warner


Health researchers, providers, consumers, and policy makers are confronted with unmanageable amounts of information. Being able to recognize and understand flaws that commonly arise in clinical research is an essential skill for academic faculty in clinical departments. There are a number of important issues or problems that seriously limit one’s ability to trust the published outcomes in clinical research (Table 1) as authoritative.



Suggested Reading

Gelbach SH. Interpreting the medical literature. 5th ed. New York: McGraw-Hill; 2006.

Guyatt G, Rennie D, Meade M, Cook D. Users’ guides to the medical literature: a manual for evidence-based clinical practice. 3rd ed. New York: McGraw-Hill; 2014.

Piantadosi S. Clinical trials: a methodologic perspective. 2nd ed. New York: Wiley; 2005.


Stone J. Conducting clinical research: a practical guide for physicians, nurses, study coordinators, and investigators. Cumberland: Mountainside Maryland Press; 2010.

Wang D, Bukai A. Clinical trials – a practical guide to design, analysis, and reporting. London: Remedica Publishing; 2006.

https://clinicaltrials.gov/ . A web-based resource providing the public with easy access to information on publicly and privately supported clinical studies on a wide range of diseases and conditions, maintained by the National Library of Medicine and the National Institutes of Health.


About this chapter

Warner, T.D. (2020). How to Understand Flaws in Clinical Research. In: Roberts, L. (ed.) Roberts Academic Medicine Handbook. Springer, Cham. https://doi.org/10.1007/978-3-030-31957-1_36


Analysis and Synthesis

Gaps, flaws, and limitations.

All primary research will contain gaps (unexplored ideas), flaws (problems with study design), and limitations (factors that constrain the applicability of study findings). In fact, most academic articles will divulge these limitations when discussing their study design or results. These gaps, flaws, and limitations are what scholars look for when reading others’ academic work; they enable scholars to continue the academic conversation by addressing these gaps, flaws, and limitations through their own primary research. In other words, where one scholar may stop, another scholar will pick up and design research to fill that gap, correct that flaw, or address that limitation to further the conversation on the topic.

Being able to recognize gaps, flaws, and limitations in primary research is important to your research this semester. Identifying these gaps, flaws, and limitations will enable you to add a new and relevant idea to the current academic conversation on your topic; it will also boost your ethos by demonstrating to your audience that you are aware of the current conversation on the topic.

Research Toolbox

Locating gaps, flaws, and limitations will be easier once you have learned to read, analyze, and understand academic journal articles. For this research toolbox, you will ask yourself the following questions when reviewing one of your academic article’s primary research methods and results:

  • What was the aim or goal of the study?
  • How was the study designed to achieve this aim or goal?
  • What was the sample size in the study? Was it large enough to yield credible data? (A rough way of checking this is sketched after this list.)
  • Who were the participants in the study? Did it include a variety of participants, or did it focus on one age group, ethnicity, or gender?
  • Did the study attempt to control for any variables that might affect the validity of the results? How?
  • Did the study include a control group?
  • How long did the study take to complete?
  • How recently was the article published?
  • Does the article review a considerable number of secondary sources on the topic?
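
As one concrete illustration of the sample-size question above, here is a hedged sketch of a power analysis for a simple two-group comparison. It assumes the Python package statsmodels is available; the effect size, alpha, and power values are conventional defaults, not figures taken from any particular study.

```python
# A rough way to judge whether a two-group study was large enough:
# solve for the per-group sample size needed to detect an assumed
# effect at conventional thresholds. Requires statsmodels.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed "medium" effect size (Cohen's d)
    alpha=0.05,       # conventional significance level
    power=0.8,        # conventional target power
)
print(f"Participants needed per group: {n_per_group:.0f}")  # ~64
```

If an article reports far fewer participants than such a calculation suggests for its claimed effect, that is a limitation worth noting in your analysis.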


6 Common Flaws To Look Out For in Peer Review


Joanna Wilkinson

It’s not always easy to spot flaws in research papers. Sometimes an error is glaringly obvious – like a vague abstract with no aim and little data – and other times it’s like finding a needle in a stack of pins. (Like, really, really sharp pins that leave you dreaming of haystacks). Luckily, the solution isn’t all that prickly. The trick is knowing what to look for during peer review, where to find it and, importantly, how severe the error is.

To help with this, we’ve pulled together a list of six common flaws you can watch out for as a reviewer.

Here are the 6 common flaws to look out for in peer review:

1) Inappropriate study design for the study aims
2) Unexplained deviations from standard/best practice and methodologies
3) Over-interpretation of results
4) Commenting beyond the scope of the article
5) Lack of evidence to support conclusions
6) Too many words

This blog post is informed by a module within our free peer review training course, the Publons Academy. Join today to learn the core competencies of peer review. With one-to-one support from your mentor, you’ll write reviews of real papers, gain access to our Review Template and review examples, and by the end of it, we’ll put you in front of journal editors in your field as a certified peer reviewer.

We discuss each common flaw to watch out for below, but first, a quick recap about the importance of peer review and how it can improve your own research.

Why peer review is important

As a peer reviewer, you play a vital role in protecting the quality and integrity of scientific research. Your peers rely on this work to understand what research to trust and build on, leading to better, faster science.

Your manuscripts will also improve because, over time, you’ll learn how to use your knowledge in peer review to fine-tune your own research.

Peer reviewing will help you evaluate the importance and accuracy of your research question and the appropriateness of methodological and statistical approaches, and build up a set of best-practice tips to prepare and organize your research project. And finally, by learning common errors to watch out for when peer reviewing, you’ll inevitably learn to avoid the same mistakes in your own research, which will increase your chances of getting published.

6 Common Research Flaws and How to Spot them in a Manuscript

A quick caveat:  this isn’t an extensive list! Research errors differ for every field as do the types of studies conducted. This is a helpful starting point, however, that will enable you to guard against the more common mistakes made in a manuscript. Once you start accepting more invites to review and become more confident reading a manuscript critically, you can build on this list with more specialized examples.

1. Inappropriate study design for the study aims

A study’s design is crucial to obtaining valid and scientifically sound results. Familiarise yourself with those commonly used in your field of research. If you come across an uncommon study design, read the researchers’ use and justification of it carefully, and question how it might affect their data and analysis. Review the study design critically but also remember to be open-minded. Just because something is new and unfamiliar it does not automatically mean it is incorrect or flawed.

2. Unexplained deviations from standard/best practice and methodologies

Similar to the above. The methods section, for example, should explain the steps taken to produce the results. If these are not clear or you’re left questioning their validity, it’s important to make your concerns known. And if they are unusual then, as with the study design, examine the researchers’ justification carefully with a view to asking more questions if necessary. Non-academic discourse, whereby opinionated and biased statements are used throughout the study, is another deviation from best practice.

3. Over-interpretation of results

Over-interpretation has no place in research. Ensure the conclusions drawn in the paper are based on the data presented and are not extrapolated beyond that (to a larger population or ecological setting, for example). You should also watch out for studies that focus on seemingly important differences where none exist.

4. Commenting beyond the scope of the article

“That’s beyond the scope of this paper” is a common phrase in academic writing. As a reviewer, watch out for papers that include comments or statements not pertaining to the research project and data at hand.

5. Lack of evidence to support conclusions

A research paper’s concluding statements must be justified and evidence-based. If you’re not convinced of the results, it could mean the researchers need to clarify aspects of their methodological procedure, add more references to support their claims, or include additional data or further analysis.

6. Too many words

A common pain point in manuscripts is wordiness. It’s important to keep this in check and encourage clear, concise, and effective text where possible. Too many words can be distracting for the reader, which at best could cause them to lose interest and at worst could lead to them misinterpreting the research.

If you’re interested in learning more about common flaws and how to address them in peer review, sign up for our Publons Academy. You’ll gain practical experience with this free, on-demand course by writing real reviews with one-to-one support from your mentor. After completion, you’ll also be put in front of editors at elite journals in your field.

Want to learn more? See our  12 step guide  about reviewing a manuscript critically.

Publons  allows you to record, verify, and showcase your peer review contributions in a format you can include in job and funding applications (without breaking reviewer anonymity).

Register now  to start building your verified peer review record.


Ethical Considerations In Psychology Research

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC): for most routine research.
  • Institutional Ethics Committee (IEC): for non-routine research.
  • External Ethics Committee (EEC): for research that is externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and American Psychological Association (APA) have issued a code of ethics in psychology that provides guidelines for conducting research.  Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years+) capable of giving consent can agree to participate in a study. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants, setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely so should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological harm.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects’ right to confidentiality and the right to withdraw from the study at any time without any consequences.

Debriefing

Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1988).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not be raised in the debriefing itself, so responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always accurately able to predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoners/guards study).

Deception research involves purposely misleading participants or withholding information that could influence their participation decision. This method is controversial because it limits informed consent and autonomy, but can provide otherwise unobtainable valuable knowledge.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience, the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (this was the case in both the experiments carried out by Asch).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress. Researchers can determine whether participants are likely to be distressed when deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.

Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants and the data gained from them must be kept anonymous unless they give their full consent. No names must be used in a lab report.

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.
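
To make the idea of a coding system concrete, here is a minimal sketch (in Python, with hypothetical file names and fields) of pseudonymizing a participant list before analysis. The code-to-name key would be stored separately under restricted access; this illustrates the general approach rather than a complete data-security solution.

```python
# Replace direct identifiers with random codes before analysis, and
# keep the code-to-name key apart from the research data.
import csv
import secrets

def pseudonymize(rows, id_field="name"):
    """Replace each identifier with a random code; reuse codes for repeats."""
    key = {}      # maps real identifier -> code; store separately from the data
    coded = []
    for row in rows:
        code = key.setdefault(row[id_field], f"P{secrets.token_hex(4)}")
        coded.append({**row, id_field: code})
    return coded, key

# Hypothetical participant records
participants = [
    {"name": "Alice Example", "age": 34, "score": 12},
    {"name": "Bob Example", "age": 29, "score": 15},
]

coded_rows, key = pseudonymize(participants)

# The analysis file contains codes only; on its own it cannot identify anyone.
with open("analysis_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "age", "score"])
    writer.writeheader()
    writer.writerows(coded_rows)
```

Dropping unneeded identifiers entirely (for example, collecting ages rather than birthdates, as noted above) is an even stronger protection than coding them.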

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

Many psychologists have assumed over the years that, provided they follow the BPS or APA guidelines when using human participants (all participants leave in a similar state of mind to how they arrived, are not deceived or humiliated, are given a debrief, and do not have their confidentiality breached), there are no ethical concerns with their research.

But consider the following examples:

a) Caughy et al. (1994) found that middle-class children placed in daycare at an early age generally score lower on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b) IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort serve to reinforce racial stereotypes and are used to discriminate against the black population in the job market, etc.

Sieber & Stanley (1988) (the main names for Socially Sensitive Research (SSR) outline 4 groups that may be affected by psychological research: It is the first group of people that we are most concerned with!
  • Members of the social group being studied, such as racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.
Sieber & Stanley also suggest there are four main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy: This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology: This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception: Causing the wider public to believe something which isn’t true through the findings you report (e.g., that parents are responsible for how their children turn out).

Informed consent: Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment: Examples of unjust treatment are (i) publicizing an idea which creates a prejudice against a group, and (ii) withholding a treatment which you believe is beneficial from some participants so that you can use them as controls.

Scientific freedom: Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data: When research findings could be used to make social policies which affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with their own interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (of Miller’s “magic seven”) famously argued that we should give psychology away.

The values of social scientists: Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant or institution. For example, if someone with a scientific approach were evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving and receiving the therapy may not consider important.

Cost/benefit analysis: It is unethical if the costs outweigh the potential or actual benefits. However, it isn’t easy to assess costs and benefits accurately, and the participants themselves rarely benefit from research.

Sieber & Stanley advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.).
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for SSR

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethical committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, research on eyewitness testimony (EWT). This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still carried out on white middle-class Americans (about 90% of the research quoted in texts). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were of low intelligence, were criminals, or suffered from psychological illness.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral study of obedience.” American Psychologist, 19(6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day‐care participation as a protective factor in the cognitive development of low‐income children.  Child development ,  65 (2), 457-471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39(5), 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research. American Psychologist, 43(1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct



Sacred Heart University Library

Organizing Academic Research Papers: Design Flaws to Avoid


The research design establishes the decision-making processes, conceptual structure of investigation, and methods of analysis used to address the central research problem of your study.  Taking the time to develop a thorough research design helps to organize your thoughts, set the boundaries of your study, maximize the reliability of your findings, and avoid misleading or incomplete conclusions. Therefore, if any aspect of your research design is flawed or under-developed, the quality and reliability of your final results and, by extension, the overall value of your study will be weakened.

Here are some common problems to avoid when designing a research study.

  • Lack of Specificity -- do not describe aspects of your study in overly-broad generalities. It is important that you design a study that describes the process of investigation in clear and concise terms. Otherwise, the reader cannot be certain what you intend to do.
  • Poorly Defined Research Problem -- the starting point of most new research is to formulate a problem statement and begin developing questions that address the problem. Your paper should outline and explicitly delimit the research problem and state what you intend to investigate.
  • Lack of Theoretical Framework -- the theoretical framework represents the conceptual foundation of your study. Therefore, your research design should include an explicit set of basic postulates or assumptions related to the research problem and an equally explicit set of logically derived hypotheses.
  • Significance -- the research design must include a clear answer to the "So What" question. Be sure you clearly articulate why your study is important and how it contributes to the larger body of literature about the topic of investigation.
  • Relationship between Past Research and Your Study -- do not simply offer a summary description of prior research. Your literature review should include an explicit statement linking the results of prior research to the research you are about to undertake. This can be done, for example, by identifying basic weaknesses in previous studies and explaining how your study helps to fill this gap in knowledge.
  • Contribution to the Field -- in linking to prior research, don't just note that a gap exists; be clear in describing how your study contributes to, or possibly challenges, existing assumptions or findings.
  • Provincialism -- this refers to designing a narrowly applied scope, geographical area, sampling, or method of analysis that unduly restricts your ability to create meaningful outcomes and, by extension, to obtain results that are relevant and possibly transferable to understanding phenomena in other settings.
  • Objectives, Hypotheses, or Questions -- your research design should include one or more questions you are attempting to answer, or hypotheses you are attempting to test, about the research problem underpinning your study. They should be clearly articulated and closely tied to the overall aims of your study.
  • Poor Method -- the design must include a well-developed and transparent plan for how you intend to collect or generate data and how it will be analyzed.
  • Proximity Sampling -- this refers to using a sample based not upon the purposes of your study but upon the proximity of a particular group of subjects. The units of analysis, whether they be persons, places, or things, must not be chosen solely for ease of access and convenience.
  • Techniques or Instruments -- be clear in describing the techniques [e.g., semi-structured interviews] or instruments [e.g., questionnaire] used to gather data. Your research design should note how the technique or instrument will provide reasonably reliable data to answer the questions associated with the central research problem.
  • Statistical Treatment -- in quantitative studies, you must give a complete description of how you will organize the raw data for analysis. In most cases, this involves describing the data through measures of central tendency, such as the mean, median, and mode, which help the researcher explain how the data are concentrated and thus support meaningful interpretation of key trends or patterns in the data [see the sketch following this list].
  • Vocabulary -- research often contains jargon and specialized language that the reader is assumed to be familiar with. However, avoid overuse of technical or pseudo-technical terminology. Problems with vocabulary can also include the use of popular terms, clichés, or culture-specific language that is inappropriate for academic writing.
  • Ethical Dilemmas -- in the methods section of qualitative research studies, your design must document how you intend to minimize risk for participants during stages of data gathering while, at the same time, still being able to adequately address the research problem. Failure to do so can lead the reader to question the validity and objectivity of your entire study.
  • Limitations of Study -- all studies have limitations and your research design should anticipate and explain the reasons why these limitations may exist. The description of results should also clearly describe the extent of missing data. In both cases, it is important to include a statement concerning what impact these limitations may have on the validity of your results.

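As an illustration of the statistical treatment point above, the short sketch below (hypothetical scores, Python's standard statistics module) shows how the three measures of central tendency summarize the same raw data differently:

```python
import statistics

# Hypothetical raw scores from a quantitative study
scores = [12, 15, 15, 18, 21, 21, 21, 34]

print("mean:", statistics.mean(scores))      # 19.625 -- pulled upward by the outlier (34)
print("median:", statistics.median(scores))  # 19.5 -- the robust middle value
print("mode:", statistics.mode(scores))      # 21 -- the most frequent score
```

Reporting all three, rather than the mean alone, shows how the data are concentrated and whether outliers distort the summary.
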
Butin, Dan W. The Education Dissertation: A Guide for Practitioner Scholars. Thousand Oaks, CA: Corwin, 2010; Carter, Susan. Structuring Your Research Thesis. New York: Palgrave Macmillan, 2012; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences. Thousand Oaks, CA: Corwin Press, 2008; Lunsford, Andrea and Robert Connors. The St. Martin’s Handbook. 3rd ed. New York: St. Martin’s Press, 1995.

Enago Academy

The Consequences of Flawed Research

Flawed research, as opposed to deliberately fraudulent research , can be looked at from several angles:

  • The original identification of the topic can be based on a poor literature review that overemphasizes a perceived gap in the existing research that may not merit such attention
  • The research protocol may be weakened by lack of experience among the research team
  • The research protocol may be weakened by lack of funding and resources
  • The research protocol may be based on a flawed dataset
  • The research results cannot be replicated
  • The foundational research upon which this new study is to be based may have its own set of limitations that will only be exacerbated by a follow-on study
  • The data collected from the study may be poorly analyzed, generating results that prompt others into follow-on research that carries those flaws forward into another research protocol

A Staggering Lack of Reproducibility

According to the Economist, in 2012, biotech research leader Amgen found that it could reproduce only six out of 53 “landmark” studies in cancer research. Similarly, the drug company Bayer found a replication rate of only 25 percent among 67 equally important research papers.

The significance of such poor results only increases when you consider what happens further down the chain when flawed research is allowed to stand.

From 2000 to 2010, an estimated 80,000 patients participated in clinical trials, either as paid participants or as volunteers, based on research that was later retracted after mistakes or improprieties were discovered.

It’s About Cold, Hard Cash!

These days, research funding is increasingly hard to find without turning to the “paid for performance” model of corporate research. So, to a corporate mind, the idea that studies based on flawed research must be completely written off represents a critical waste of money!

In the world of research, prestige and competence carry significant weight in attracting funding and subject matter experts to lead research departments. Therefore, any whiff of failure or incompetence can do serious damage. For that reason, rather than dealing with the potential embarrassment of acknowledging that a protocol was flawed from the outset, flawed research may get conveniently ignored.

In simple words, it’s about the cold, hard cash! It is in this pursuit that flawed yet ostensibly groundbreaking research is allowed to be published, and continues to be.

But the cost of preserving fragile reputations is heavy. When research that is known to be flawed is allowed to persist, it pollutes the ocean of data upon which new researchers are forced to depend to build their own careers.

The Solution is Simple: It’s Quality over Quantity

The only satisfactory solution to this problem is a return to quality in research: a return to the rigorous search for truth upon which all science is based.

A transition from “publish or perish” to “do quality research or perish” should lead researchers away from the temptation of open-access outlets with questionable peer review practices. Full transparency of research protocols should likewise increase the accountability of every member of a research team.

Will a few reputations get damaged? Probably, but for many research institutions and a large number of research journals, that damage may be long overdue.

Identifying and Avoiding Bias in Research

This narrative review provides an overview of the topic of bias as part of Plastic and Reconstructive Surgery's series of articles on evidence-based medicine. Bias can occur in the planning, data collection, analysis, and publication phases of research. Understanding research bias allows readers to critically and independently review the scientific literature and avoid treatments which are suboptimal or potentially harmful. A thorough understanding of bias and how it affects study results is essential for the practice of evidence-based medicine.

The British Medical Journal recently called evidence-based medicine (EBM) one of the fifteen most important milestones since the journal's inception 1 . The concept of EBM was created in the early 1980s as clinical practice became more data-driven and literature-based 1 , 2 . EBM is now an essential part of medical school curriculum 3 . For plastic surgeons, the ability to practice EBM is limited. Too frequently, published research in plastic surgery demonstrates poor methodologic quality, although a gradual trend toward higher level study designs has been noted over the past ten years 4 , 5 . In order for EBM to be an effective tool, plastic surgeons must critically interpret study results and must also evaluate the rigor of study design and identify study biases. As the leadership of Plastic and Reconstructive Surgery seeks to provide higher quality science to enhance patient safety and outcomes, a discussion of the topic of bias is essential for the journal's readers. In this paper, we will define bias and identify potential sources of bias which occur during study design, study implementation, and during data analysis and publication. We will also make recommendations on avoiding bias before, during, and after a clinical trial.

I. Definition and scope of bias

Bias is defined as any tendency which prevents unprejudiced consideration of a question 6 . In research, bias occurs when “systematic error [is] introduced into sampling or testing by selecting or encouraging one outcome or answer over others” 7 . Bias can occur at any phase of research, including study design or data collection, as well as in the process of data analysis and publication ( Figure 1 ). Bias is not a dichotomous variable. Interpretation of bias cannot be limited to a simple inquisition: is bias present or not? Instead, reviewers of the literature must consider the degree to which bias was prevented by proper study design and implementation. As some degree of bias is nearly always present in a published study, readers must also consider how bias might influence a study's conclusions 8 . Table 1 provides a summary of different types of bias, when they occur, and how they might be avoided.

[Figure 1. Major sources of bias in clinical research.]

[Table 1. Tips to avoid different types of bias during a trial.]

Chance and confounding can be quantified and/or eliminated through proper study design and data analysis. However, only the most rigorously conducted trials can completely exclude bias as an alternate explanation for an association. Unlike random error, which results from sampling variability and which decreases as sample size increases, bias is independent of both sample size and statistical significance. Bias can cause estimates of association to be either larger or smaller than the true association. In extreme cases, bias can cause a perceived association which is directly opposite of the true association. For example, prior to 1998, multiple observational studies demonstrated that hormone replacement therapy (HRT) decreased risk of heart disease among post-menopausal women 8 , 9 . However, more recent studies, rigorously designed to minimize bias, have found the opposite effect (i.e., an increased risk of heart disease with HRT) 10 , 11 .
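
A small simulation, using entirely invented numbers, illustrates this independence from sample size: when every measurement carries a systematic error, collecting more data only tightens the estimate around the wrong value.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 100.0  # the true value we are trying to estimate
bias = 5.0         # hypothetical systematic error: every reading is 5 units too high

for n in (10, 1_000, 100_000):
    sample = rng.normal(true_mean + bias, 15.0, size=n)
    print(f"n = {n:>6}: estimate = {sample.mean():7.2f}")

# Random error shrinks as n grows, so the estimates settle near 105 -- not 100.
```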

II. Pre-trial bias

Sources of pre-trial bias include errors in study design and in patient recruitment. These errors can cause fatal flaws in the data which cannot be compensated during data analysis. In this section, we will discuss the importance of clearly defining both risk and outcome, the necessity of standardized protocols for data collection, and the concepts of selection and channeling bias.

Bias during study design

Risk and outcome should be clearly defined prior to study implementation. Subjective measures, such as the Baker grade of capsular contracture, can have high inter-rater variability, and their arbitrary cutoffs may make distinguishing between groups difficult 12 . This can inflate the observed variance seen with statistical analysis, making a statistically significant result less likely. Objective, validated risk stratification models such as those published by Caprini 13 and Davison 14 for venous thromboembolism, or standardized outcome measures such as the Breast-Q 15 , should have lower inter-rater variability and are more appropriate for use. When risk or exposure is retrospectively identified via medical chart review, it is prudent to cross-reference data sources for confirmation. For example, a chart reviewer should confirm a patient-reported history of sacral pressure ulcer closure with physical exam findings and by review of an operative report; this will decrease discrepancies when compared to using a single data source.

Data collection methods may include questionnaires, structured interviews, physical exam, laboratory or imaging data, or medical chart review. Standardized protocols for data collection, including training of study personnel, can minimize inter-observer variability when multiple individuals are gathering and entering data. Blinding of study personnel to the patient's exposure and outcome status, or if not possible, having different examiners measure the outcome than those who evaluated the exposure, can also decrease bias. Due to the presence of scars, patients and those directly examining them cannot be blinded to whether or not an operation was received. For comparisons of functional or aesthetic outcomes in surgical procedures, an independent examiner can be blinded to the type of surgery performed. For example, a hand surgery study comparing lag screw versus plate and screw fixation of metacarpal fractures could standardize the surgical approach (and thus the surgical scar) and have functional outcomes assessed by a blinded examiner who had not viewed the operative notes or x-rays. Blinded examiners can also review imaging and confirm diagnoses without examining patients 16 , 17 .

Selection bias

Selection bias may occur during identification of the study population. The ideal study population is clearly defined, accessible, reliable, and at increased risk to develop the outcome of interest. When a study population is identified, selection bias occurs when the criteria used to recruit and enroll patients into separate study cohorts are inherently different. This can be a particular problem with case-control and retrospective cohort studies where exposure and outcome have already occurred at the time individuals are selected for study inclusion 18 . Prospective studies (particularly randomized, controlled trials) where the outcome is unknown at time of enrollment are less prone to selection bias.

Channeling bias

Channeling bias occurs when patient prognostic factors or degree of illness dictates the study cohort into which patients are placed. This bias is more likely in non-randomized trials when patient assignment to groups is performed by medical personnel. Channeling bias is commonly seen in pharmaceutical trials comparing old and new drugs to one another 19 . In surgical studies, channeling bias can occur if one intervention carries a greater inherent risk 20 . For example, hand surgeons managing fractures may be more aggressive with operative intervention in young, healthy individuals with low perioperative risk. Similarly, surgeons might tolerate imperfect reduction in the elderly, a group at higher risk for perioperative complications and with decreased need for perfect hand function. Thus, a selection bias exists for operative intervention in young patients. Now imagine a retrospective study of operative versus non-operative management of hand fractures. In this study, young patients would be channeled into the operative study cohort and the elderly would be channeled into the nonoperative study cohort.

III. Bias during the clinical trial

Information bias is a blanket classification of error in which bias occurs in the measurement of an exposure or outcome. Thus, the information obtained and recorded from patients in different study groups is unequal in some way 18 . Many subtypes of information bias can occur, including interviewer bias, chronology bias, recall bias, patient loss to follow-up, bias from misclassification of patients, and performance bias.

Interviewer bias

Interviewer bias refers to a systematic difference in how information is solicited, recorded, or interpreted 18 , 21 . Interviewer bias is more likely when disease status is known to the interviewer. An example of this would be a patient with Buerger's disease enrolled in a case control study which attempts to retrospectively identify risk factors. If the interviewer is aware that the patient has Buerger's disease, he/she may probe for risk factors, such as smoking, more extensively (“Are you sure you've never smoked? Never? Not even once?”) than in control patients. Interviewer bias can be minimized or eliminated if the interviewer is blinded to the outcome of interest or if the outcome of interest has not yet occurred, as in a prospective trial.

Chronology bias

Chronology bias occurs when historic controls are used as a comparison group for patients undergoing an intervention. Secular trends within the medical system could affect how disease is diagnosed, how treatments are administered, or how preferred outcome measures are obtained 20 . Each of these differences could act as a source of inequality between the historic controls and intervention groups. For example, many microsurgeons currently use preoperative imaging to guide perforator flap dissection. Imaging has been shown to significantly reduce operative time 40 . A retrospective study of flap dissection time might conclude that dissection time decreases as surgeon experience improves. More likely, the use of preoperative imaging caused a notable reduction in dissection time. Thus, chronology bias is present. Chronology bias can be minimized by conducting prospective cohort or randomized control trials, or by using historic controls from only the very recent past.

Recall bias

Recall bias refers to the phenomenon in which the outcomes of treatment (good or bad) may color subjects' recollections of events prior to or during the treatment process. One common example is the perceived association between autism and the MMR vaccine. This vaccine is given to children during a prominent period of language and social development. As a result, parents of children with autism are more likely to recall immunization administration during this developmental regression, and a causal relationship may be perceived 22 . Recall bias is most likely when exposure and disease status are both known at time of study, and can also be problematic when patient interviews (or subjective assessments) are used as primary data sources. When patient-report data are used, some investigators recommend that the trial design masks the intent of questions in structured interviews or surveys and/or uses only validated scales for data acquisition 23 .

Transfer bias

In almost all clinical studies, subjects are lost to follow-up. In these instances, investigators must consider whether these patients are fundamentally different from those retained in the study. Researchers must also consider how to treat patients lost to follow-up in their analysis. Well-designed trials usually have protocols in place to attempt telephone or mail contact for patients who miss clinic appointments. Transfer bias can occur when study cohorts have unequal losses to follow-up. This is particularly relevant in surgical trials when study cohorts are expected to require different follow-up regimens. Consider a study evaluating outcomes in inferior pedicle Wise pattern versus vertical scar breast reductions. Because the Wise pattern patients often have fewer contour problems in the immediate postoperative period, they may be less likely to return for long-term follow-up. By contrast, patient concerns over resolving skin redundancies in the vertical reduction group may make these individuals more likely to return for postoperative evaluations by their surgeons. Some authors suggest that patient loss to follow-up can be minimized by offering convenient office hours, personalized patient contact via phone or email, and physician visits to the patient's home 20 , 24 .

Bias from misclassification of exposure or outcome

Misclassification of exposure can occur if the exposure itself is poorly defined or if proxies of exposure are utilized. For example, this might occur in a study evaluating efficacy of becaplermin (Regranex, Systagenix Wound Management) versus saline dressings for management of diabetic foot ulcers. Significantly different results might be obtained if the becaplermin cohort of patients included those prescribed the medication, rather than patients directly observed to be applying the medication. Similarly, misclassification of outcome can occur if non-objective measures are used. For example, clinical signs and symptoms are notoriously unreliable indicators of venous thromboembolism. Patients are accurately diagnosed by physical exam less than 50% of the time 25 . Thus, using Homan's sign (calf pain elicited by extreme dorsiflexion) or pleuritic chest pain as study measures for deep venous thrombosis or pulmonary embolus would be inappropriate. Venous thromboembolism is appropriately diagnosed using objective tests with high sensitivity and specificity, such as duplex ultrasound or spiral CT scan 26 - 28 .

Performance bias

In surgical trials, performance bias may complicate efforts to establish a cause-effect relationship between procedures and outcomes. As plastic surgeons, we are all aware that surgery is rarely standardized and that technical variability occurs between surgeons and among a single surgeon's cases. Variations by surgeon commonly occur in surgical plan, flow of operation, and technical maneuvers used to achieve the desired result. The surgeon's experience may have a significant effect on the outcome. To minimize or avoid performance bias, investigators can consider cluster stratification of patients, in which all patients having an operation by one surgeon or at one hospital are placed into the same study group, as opposed to placing individual patients into groups. This will minimize performance variability within groups and decrease performance bias. Cluster stratification of patients may allow surgeons to perform only the surgery with which they are most comfortable or experienced, providing a more valid assessment of the procedures being evaluated. If the operation in question has a steep learning curve, cluster stratification may make generalization of study results to the everyday plastic surgeon difficult.
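
A minimal sketch of cluster stratification follows, under the assumption (not from the paper) that whole surgeons, rather than individual patients, are assigned to the two fixation techniques from the earlier metacarpal fracture example:

```python
import random

random.seed(1)

# Hypothetical setup: six surgeons, two fixation techniques
surgeons = ["A", "B", "C", "D", "E", "F"]
random.shuffle(surgeons)

# The unit of assignment is the surgeon (the cluster), not the patient:
# half the surgeons perform one technique, half the other.
half = len(surgeons) // 2
arm_by_surgeon = {s: ("lag screw" if i < half else "plate and screw")
                  for i, s in enumerate(surgeons)}

# Every patient treated by a given surgeon lands in that surgeon's study group.
patients = [{"id": i, "surgeon": random.choice(list(arm_by_surgeon))} for i in range(12)]
for p in patients:
    p["group"] = arm_by_surgeon[p["surgeon"]]

print(arm_by_surgeon)
```

Because assignment happens at the surgeon level, technique variability within each group is minimized, at the cost to generalizability that the paragraph above notes.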

IV. Bias after a trial

Bias after a trial's conclusion can occur during data analysis or publication. In this section, we will discuss citation bias, evaluate the role of confounding in data analysis, and provide a brief discussion of internal and external validity.

Citation bias

Citation bias refers to the fact that researchers and trial sponsors may be unwilling to publish unfavorable results, believing that such findings may negatively reflect on their personal abilities or on the efficacy of their product. Thus, positive results are more likely to be submitted for publication than negative results. Additionally, existing inequalities in the medical literature may sway clinicians' opinions of the expected trial results before or during a trial. In recognition of citation bias, the International Committee of Medical Journal Editors (ICMJE) released a consensus statement in 2004 29 which required all randomized controlled trials to be pre-registered with an approved clinical trials registry. In 2007, a second consensus statement 30 required that all prospective trials not deemed purely observational be registered with a central clinical trials registry prior to patient enrollment. ICMJE member journals will not publish studies which are not registered in advance with one of five accepted registries. Despite these measures, citation bias has not been completely eliminated. While centralized documentation provides medical researchers with information about unpublished trials, investigators may be left to only speculate as to the results of these studies.

Confounding

Confounding occurs when an observed association between an exposure and an outcome is actually explained by a third factor which is independently associated with both the exposure and the outcome 18 . Examples of confounders include observed associations between coffee drinking and heart attack (confounded by smoking) and the association between income and health status (confounded by access to care). Pre-trial study design is the preferred method to control for confounding. Prior to the study, matching patients for demographics (such as age or gender) and risk factors (such as body mass index or smoking) can create similar cohorts among identified confounders. However, the effect of unmeasured or unknown confounders may only be controlled by true randomization in a study with a large sample size. After a study's conclusion, identified confounders can be controlled by analyzing for an association between exposure and outcome only in cohorts similar for the identified confounding factor. For example, in a study comparing outcomes for various breast reconstruction options, the results might be confounded by the timing of the reconstruction (i.e., immediate versus delayed procedures). In other words, procedure type and timing may both have significant and independent effects on breast reconstruction outcomes. One approach to this confounding would be to compare outcomes by procedure type separately for immediate and delayed reconstruction patients. This maneuver is commonly termed a “stratified” analysis. Stratified analyses are limited if multiple confounders are present or if sample size is small. Multi-variable regression analysis can also be used to control for identified confounders during data analysis. The role of unidentified confounders cannot be controlled using statistical analysis.
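
As a minimal sketch of such a stratified analysis, the following pandas example uses an invented dataset; the column names and rates are purely illustrative:

```python
import pandas as pd

# Invented data: procedure type, reconstruction timing, and complication (1 = yes)
df = pd.DataFrame({
    "procedure":    ["implant", "flap"] * 100,
    "timing":       ["immediate"] * 120 + ["delayed"] * 80,
    "complication": [0, 1, 0, 0] * 50,
})

# Crude comparison: potentially confounded by timing
print(df.groupby("procedure")["complication"].mean())

# Stratified analysis: compare procedures separately within each timing stratum
print(df.groupby(["timing", "procedure"])["complication"].mean())
```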

Internal vs. External Validity

Internal validity refers to the reliability or accuracy of the study results. A study's internal validity reflects the author's and reviewer's confidence that study design, implementation, and data analysis have minimized or eliminated bias and that the findings are representative of the true association between exposure and outcome. When evaluating studies, careful review of study methodology for sources of bias discussed above enables the reader to evaluate internal validity. Studies with high internal validity are often explanatory trials, those designed to test efficacy of a specific intervention under idealized conditions in a highly selected population. However, high internal validity often comes at the expense of ability to be generalized. For example, although supra-microsurgery techniques, defined as anastomosis of vessels less than 0.5 mm to 0.8 mm in diameter, have been shown to be technically possible in high-volume microsurgery centers 31 - 33 (high internal validity), it is unlikely that the majority of plastic surgeons could perform this operation with an acceptable rate of flap loss.

External validity of research design deals with the degree to which findings are able to be generalized to other groups or populations. In contrast with explanatory trials, pragmatic trials are designed to assess the benefits of interventions under real clinical conditions. These studies usually include study populations generated using minimal exclusion criteria, making them very similar to the general population. While pragmatic trials have high external validity, loose inclusion criteria may compromise the study's internal validity. When reviewing scientific literature, readers should assess whether the research methods preclude generalization of the study's findings to other patient populations. In making this decision, readers must consider differences between the source population (population from which the study population originated) and the study population (those included in the study). Additionally, it is important to distinguish limited ability to be generalized due to a selective patient population from true bias 8 .

When designing trials, achieving balance between internal and external validity is difficult. An ideal trial design would randomize patients and blind those collecting and analyzing data (high internal validity), while keeping exclusion criteria to a minimum, thus making study and source populations closely related and allowing generalization of results (high external validity) 34 . For those evaluating the literature, objective models exist to quantify both external and internal validity. Conceptual models to assess a study's ability to be generalized have been developed 35 . Additionally, qualitative checklists can be used to assess the external validity of clinical trials. These can be utilized by investigators to improve study design and also by those reading published studies 36 .

The importance of internal validity is reflected in the existing concept of “levels of evidence” 5 , where more rigorously designed trials produce higher levels of evidence. Such high-level studies can be evaluated using the Jadad scoring system, an established, rigorous means of assessing the methodological quality and internal validity of clinical trials 37 . Even so-called “gold-standard” RCTs can be undermined by poor study design. Like all studies, RCTs must be rigorously evaluated. Descriptions of study methods should include details on the randomization process, method(s) of blinding, treatment of incomplete outcome data, and funding source(s), and should include data on statistically insignificant outcomes 38 . Authors who provide incomplete trial information create additional bias after a trial ends, because readers are not able to evaluate the trial's internal and external validity 20 . The CONSORT statement 39 provides a concise 22-point checklist for authors reporting the results of RCTs. Manuscripts that conform to the CONSORT checklist will provide adequate information for readers to understand the study's methodology. As a result, readers can make independent judgments on the trial's internal and external validity.

Bias can occur in the planning, data collection, analysis, and publication phases of research. Understanding research bias allows readers to critically and independently review the scientific literature and avoid treatments which are suboptimal or potentially harmful. A thorough understanding of bias and how it affects study results is essential for the practice of evidence-based medicine.

Acknowledgments

Dr. Pannucci receives salary support from the NIH T32 grant program (T32 GM-08616).

Meeting disclosure:

This work has not been previously presented.

None of the authors has a financial interest in any of the products, devices, or drugs mentioned in this manuscript.

Science News

To make science better, watch out for statistical flaws

By Tom Siegfried

February 7, 2014 at 1:30 pm

First of two parts

As Winston Churchill once said about democracy, it’s the worst form of government, except for all the others. Science is like that. As commonly practiced today, science is a terrible way to gather knowledge about nature, especially in messy realms like medicine. But it would be very unwise to vote science out of office, because all the other methods are so much worse.

Still, science has room for improvement, as its many critics are constantly pointing out. Some of those critics are, of course, lunatics who simply prefer not to believe solid scientific evidence if they dislike its implications. But many critics of science have the goal of making the scientific enterprise better, stronger and more reliable. They are justified in pointing out that scientific methodology — in particular, statistical techniques for testing hypotheses — has more flaws than Facebook’s privacy policies. One especially damning analysis, published in 2005, claimed to have proved that more than half of published scientific conclusions were actually false.

A few months ago, though, some defenders of the scientific faith produced a new study  claiming otherwise. Their survey of five major medical journals indicated a false discovery rate among published papers of only 14 percent. “Our analysis suggests that the medical literature remains a reliable record of scientific progress,” Leah Jager of the U.S. Naval Academy and Jeffrey Leek of Johns Hopkins University wrote in the journal Biostatistics .

Their finding is based on an examination of P values, the probability of getting a positive result if there is no real effect (an assumption called the null hypothesis). By convention, if the results you get (or more extreme results) would occur less than 5 percent of the time by chance (P value less than .05), then your finding is “statistically significant.” Therefore you can reject the assumption that there was no effect, conclude you have found a true effect and get your paper published.

As Jager and Leek acknowledge, though, this method has well-documented flaws. “There are serious problems with interpreting individual P values as evidence for the truth of the null hypothesis,” they wrote.

For one thing, a 5 percent significance level isn’t a very stringent test. Using that rate you could imagine getting one wrong result for every 20 studies, and with thousands of scientific studies going on, that adds up to a lot. But it’s even worse. If there actually is no real effect in most experiments, you’ll reach a wrong conclusion far more than 5 percent of the time. Suppose you test 100 drugs for a given disease, when only one actually works. Using a P value of .05, those 100 tests could give you six positive results — the one correct drug and five flukes. More than 80 percent of your supposed results would be false.
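
The arithmetic behind that drug example is easy to verify; a few lines of Python reproduce it:

```python
tests = 100    # drugs tested for the disease
real = 1       # drugs that actually work
alpha = 0.05   # conventional significance threshold

false_positives = (tests - real) * alpha  # ~5 flukes among the 99 useless drugs
true_positives = real                     # assume the one real effect is detected
reported = true_positives + false_positives

print(f"false share of 'significant' results: {false_positives / reported:.0%}")
# Prints 83% -- more than 80 percent of the supposed discoveries are false.
```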

But while a P value in any given paper may be unreliable, analyzing aggregates of P values for thousands of papers can give a fair assessment of how many conclusions of significance are likely to be bogus, Jager and Leek contend. “There are well established and statistically sound methods for estimating the rate of false discoveries among an aggregated set of tested hypotheses using P values.”

It’s sophisticated methodology. It takes into account the fact that some studies report a very strong statistical significance, with P values much smaller than .05. So the 1-in-20 fluke argument doesn’t necessarily apply. Yes, a P value of .05 means there’s a 1-in-20 chance that your results (or even more extreme results) would show up even if there was no effect. But that doesn’t mean 1 in 20 (or 5 percent) of all studies are wrong, because many studies report data at well below the .05 significance level.

So Jager and Leek recorded actual P values reported in more than 5,000 medical papers published from 2000 to 2010 in journals such as the Lancet, the Journal of the American Medical Association and the New England Journal of Medicine. An algorithm developed to calculate the false discovery rate found a range of 11 percent to 19 percent for the various journals.
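
Jager and Leek’s actual method fits a mixture model to the collected P values; as a simpler stand-in for the same aggregation idea, the sketch below applies Storey’s classic estimator of the null fraction to simulated P values (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated P values standing in for those harvested from abstracts:
# 85% true nulls (uniform on [0, 1]) and 15% real effects (piled up near zero)
m = 5000
p = np.concatenate([
    rng.uniform(0, 1, int(0.85 * m)),
    rng.beta(0.1, 8, m - int(0.85 * m)),
])

# Estimate the null fraction from the flat right tail of the distribution
lam = 0.5
pi0 = min(1.0, (p > lam).sum() / ((1 - lam) * m))

# Estimated false discovery rate among results significant at the .05 level
alpha = 0.05
fdr = pi0 * alpha * m / max((p <= alpha).sum(), 1)
print(f"estimated null fraction: {pi0:.2f}; estimated FDR at p < .05: {fdr:.1%}")
```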

“Our results suggest that while there is an inflation of false discovery results above the nominal 5 percent level … the relatively minor inflation in error rates does not merit the claim that most published research is false,” Jager and Leek concluded.

Well, maybe.

But John Ioannidis, author of the 2005 study claiming most results are wrong, was not impressed. In fact, he considers Jager and Leek’s paper to fall into the “false results” category. “Their approach is flawed in sampling, calculations and conclusions,” Ioannidis wrote in a commentary  also appearing in Biostatistics .

For one thing, Jager and Leek selected only five very highly regarded journals, a small sample, not randomly selected from the thousands of medical journals published these days. And out of more than 77,000 papers published over the study period, the automated procedure for identifying P values in the abstracts found only 5,322 usable for the study’s purposes. More than half of those papers reported randomized controlled trials or were systematic reviews — the types of papers least likely to be in error. Those types account for less than 5 percent of all published papers. Furthermore, recording only those P values given in abstracts further compounds the sampling bias, as abstracts are typically selective in reporting only the most dramatic results from a study.

Of course, Ioannidis is not exactly an unbiased observer, as it was his paper the new study was attempting to refute. Some other commenters were not quite as harsh. But they nevertheless identified shortcomings. Steven Goodman of Stanford University pointed out some of the same weaknesses that Ioannidis cited.

“Jager and Leek’s attempt to bring a torrent of empirical data and rigorous statistical analyses to bear on this important question is a major step forward,” Goodman wrote in Biostatistics. “Its weaknesses are less important than its positive contributions.” Still, Goodman suggested that the true rate of false positives is higher than Jager and Leek found, while less than what Ioannidis claimed.

Two other statisticians, also commenting  in Biostatistics , reached similar conclusions. Problems with the Jager and Leek study could push the false discovery rate from 14 percent to 30 percent or higher, wrote Yoav Benjamini and Yotam Hechtlinger of Tel Aviv University.

Even one slight adjustment to Jager and Leek’s analysis (including “less than or equal to .05” instead of just “equal to .05”) raised the false discovery rate from 14 percent to 20 percent, Benjamini and Hechtlinger pointed out. Other factors, such as those identified by Ioannidis and Goodman, would drive the false discovery rate even higher, perhaps as high as 50 percent. So maybe Ioannidis was right, after all.

Of course, that’s not really the point. Whether more or less than half of all medical studies are wrong is not exactly the key issue here. It’s not a presidential election. What matters here is the fact that medical science is so unsure of its facts. Knowing that a lot of studies are wrong is not very comforting, especially when you don’t know which ones are the wrong ones.

“We think that the study of Jager and Leek is enough to point at the serious problem we face,” Benjamini and Hechtlinger note. “Even though most findings may be true, whether the science-wise false discovery rate is at the more realistic 30 percent or higher, or even at the optimistic 20 percent, it is certainly too high.”

But there’s another issue, too. As Goodman notes, claiming that more than half of medical research is false can produce “an unwarranted degree of skepticism, hopefully not cynicism, about truth claims in medical science.” If people stop trusting medical science, they may turn to even worse sources of knowledge, with serious consequences (such as children not getting proper vaccinations).

Part of the resolution of this conundrum is the realization that individual studies do not establish medical knowledge. Replication of results, convergence of conclusions from different kinds of studies, real world experience in clinics, judgments by knowledgeable practitioners aware of all the relevant considerations and various other elements of evidence all accrue to create a sound body of knowledge for medical practice. It’s just not as sound as it needs to be. Criticizing the flaws in current scientific practice, and developing methods to address and correct those flaws, is an important enterprise that shouldn’t be dismissed on the basis of any one study, especially when it’s based on P values.

Are lab safety violations research misconduct?

New paper suggests they are, given their connection to the research enterprise

By Dalmeet Singh Chawla, special to C&EN, May 23, 2024 | A version of this story appeared in Volume 102, Issue 16.

The US Office of Research Integrity defines research misconduct as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”

But should that definition also include violations of safety in the lab? That’s what Bor Luen Tang, a biochemist at the National University of Singapore, suggests in a paper published last month in the Journal of Academic Ethics (2024, DOI: 10.1007/s10805-024-09531-w ).

“Lab safety violations should be considered as such because these transgressions occur within the context of research, and can negatively impact research,” Tang tells C&EN in an email.

Researchers with adequate training and knowledge are obliged to follow safety rules and protocols, Tang says. “Thus, if someone deliberately violate[s] or deviate[s] markedly from safety rules/protocols and an accident/incident occurs, that person is potentially culpable.”

Finding a researcher guilty of violating lab safety rules, as with all forms of research misconduct, requires a preponderance of evidence regarding intent, Tang says. For instance, investigators should consider whether the violation occurred involuntarily, perhaps because of physical or mental illness or natural mishap, he adds.

In Tang’s opinion, researchers who are found guilty of lab safety violations should be sanctioned, with their institution’s health and safety office being the first authority to take action. Depending on a country’s laws, governmental agencies should also be involved, as should any funding agencies that supported the work, if their terms provide for oversight of grantees’ actions, Tang says.

But Craig Merlic, an organic chemist at the University of California, Los Angeles, and director of the UC Center for Laboratory Safety, says he doesn’t think lab safety violations should be labeled as research misconduct, though he does agree that egregious breaches of safety policies should have consequences.

“While the severity of any falsification, fabrication, and plagiarism can vary greatly, the defining actions are fairly straightforward,” he says. But it’s harder to determine what a violation of lab safety is, Merlic says. “Is it an action that results in an incident such as a chemical spill, or an accident that results in an injury? Or can it be merely not following best lab practices set by a lab?”

In his paper, Tang suggests that everyone, regardless of their endowment, status, or power, should be held accountable for breaches of lab safety. “But we all know from root cause analyses that culpability rarely stops at the immediate players,” Merlic says.

Merlic, who just wrote UCLA’s policy for student noncompliance with safety rules, which is not yet posted, thinks simple compliance should not be the end goal of safety programs. “Instead, compliance should be the natural outcome of an institution’s robust culture of safety that goes well beyond regulatory compliance,” he says.

21 May 2024

Pay researchers to spot errors in published papers

Malte Elson

Malte Elson is an associate professor of the psychology of digitalization at the University of Bern, Switzerland.

In 2023, Google awarded a total of US$10 million to researchers who found vulnerabilities in its products. Why? Because allowing errors to go undetected could be much costlier. Data breaches could lead to refund claims, reduced customer trust or legal liability.

It’s not just private technology companies that invest in such ‘bug bounty’ programmes. Between 2016 and 2021, the US Department of Defense awarded more than US$650,000 to people who found weaknesses in its networks.

Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering.

That’s why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward — up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work.
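The payment rules above are simple enough to sketch in code. The following is a minimal illustration in Python: the caps and flat fees are the figures given above, but the 0–1 severity score and the linear scaling of the bounty are assumptions invented for illustration (the column says only that bigger errors earn bigger rewards), and every name in the snippet is hypothetical rather than part of ERROR's actual tooling.

```python
# Hypothetical sketch of the ERROR payout rules described above.
# Amounts (in Swiss francs) come from the column; the severity
# scale and the linear bonus are assumptions for illustration only.

REVIEWER_BASE_MAX = 1_000   # base rate is "up to" 1,000 CHF per paper
BONUS_MAX = 2_500           # the error bounty is capped at 2,500 CHF
AUTHOR_FEE = 250            # for preparing files / answering queries
AUTHOR_CLEAN_BONUS = 250    # if no (or only minor) errors are found


def reviewer_payout(base_rate: float, severity: float) -> float:
    """Base rate plus a bounty that grows with judged error severity.

    `severity` is a hypothetical score in [0, 1]: 0 means no error
    was found, 1 the most serious error imaginable.
    """
    if not 0.0 <= severity <= 1.0:
        raise ValueError("severity must be between 0 and 1")
    base = min(base_rate, REVIEWER_BASE_MAX)
    bonus = severity * BONUS_MAX  # assumed linear scaling; capped since severity <= 1
    return base + bonus


def author_payout(clean: bool) -> float:
    """Flat preparation fee, plus a bonus for a (near-)error-free paper."""
    return AUTHOR_FEE + (AUTHOR_CLEAN_BONUS if clean else 0)


if __name__ == "__main__":
    print(reviewer_payout(1_000, 0.4))  # 2000.0: full base rate, moderate error
    print(author_payout(clean=True))    # 500: fee plus clean-paper bonus
```

Note that a bounty that grows with judged severity only works if severity is assessed by someone with no stake in the payout; as described below, that is the role of ERROR's unpaid recommenders.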

ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors.

I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient.


Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.

Yet failing to keep abreast of this issue comes at a huge cost. Imagine a single PhD student building their work on an erroneous finding. In Switzerland, their cumulative salary alone will run to six figures. Flawed research that is translated into health care, policymaking or engineering can harm people. And there are opportunity costs — for every grant awarded to a project unknowingly building on errors, another project is not pursued.
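To put a rough number on that example: assuming a Swiss doctoral salary of around 50,000 francs a year (an assumption for illustration, not a figure from the column), a four-year PhD built on a flawed finding represents some 200,000 francs in salary alone, before materials, overheads or grant costs are counted.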

Like technology companies, stakeholders in science must realize that making error detection and correction part of the scientific landscape is a sound investment.

Funders, for instance, have a vested interest in ensuring that the money that they distribute as grants is not wasted. Publishers stand to improve their reputations by ensuring that some of their resources are spent on quality management. And, by supporting these endeavours, scientific associations could help to foster a culture in which acknowledgement of errors is considered normal — or even commendable — and not a mark of shame.


I know that ERROR is a bold experiment. Some researchers might have qualms. I’ve been asked whether reviewers might exaggerate the gravity of errors in pursuit of a large bug bounty, or attempt to smear a colleague they dislike. It’s possible, but hyperbole would be a gamble, given that all reviewer reports are published on our website and are not anonymized. And we guard against exaggeration. A ‘recommender’ from among ERROR’s staff and advisory board members — none of whom receive a bounty — acts as an intermediary, weighing up reviewer findings and author responses before deciding on the payout.

Another fair criticism is that ERROR’s paper selection will be biased. The ERROR team picks papers that are highly cited and checks them only if the authors agree to it. Authors who suspect their work might not withstand scrutiny could be less likely to opt in. But selecting papers at random would introduce a different bias, because we would be able to assess only those for which some minimal amount of data and code was freely available. And we’d spend precious resources checking some low-impact papers that only a few people build research on.

My goal is not to prove that a bug-bounty programme is the best mechanism for correcting errors, or that it is applicable to all science. Rather, I want to start a conversation about the need for dedicated investment in error detection and correction. There are alternatives to bug bounties — for instance, making error detection its own viable career path and hiring full-time scientific staff to check each institute’s papers. Of course, care would be needed to ensure that such schemes benefited researchers around the world equally.

Scholars can’t expect errors to go away by themselves. Science can be self-correcting — but only if we invest in making it so.

Nature 629, 730 (2024)

doi: https://doi.org/10.1038/d41586-024-01465-y


Competing Interests

The author declares no competing interests.

