Recognizing Bias: A Problem Solving and Critical Thinking Skills Guide

Learn how to identify and address bias in decision making with our guide to recognizing bias in problem solving and critical thinking.

In today's world, it is increasingly important to recognize bias and understand how it can affect our decision-making. Bias can cloud our judgment, lead us to make decisions that are not in our best interests, and limit our ability to solve problems effectively. In this guide, we will explore what it means to recognize bias and how doing so strengthens critical thinking and problem-solving skills. We will discuss the various types of biases, why recognizing them is important, and how to identify and counteract them.

Bias takes many forms. Cognitive biases such as confirmation bias are systematic errors in thinking, and any of them can lead to unfair judgments or decisions. Other common types of bias include cultural bias, the tendency to favor one's own culture or group, and political bias, the tendency to favor one's own political party or beliefs. In order to identify and address bias in oneself and others, it is important to be aware of potential sources of bias, including personal opinions, values, and preconceived notions. Being mindful of these potential sources can help us become more aware of our own biases and recognize them in others.

It is also important to be open-minded and willing to consider alternative perspectives, and it is helpful to challenge our own assumptions and beliefs by questioning them and seeking out evidence that supports or refutes them.

Implications of Not Recognizing or Addressing Bias

The potential implications of not recognizing or addressing bias are significant. If left unchecked, biases can lead to unfair decisions or judgments, as well as inaccurate conclusions. This can have serious consequences for individuals and organizations alike.

Strategies for Identifying and Addressing Bias

Recognizing bias in oneself and others is an important part of making informed decisions. There are several strategies that can be used to identify and address bias. One of the most effective strategies is to take a step back and look at the situation objectively. This involves examining the facts and assumptions that are being used to make decisions.

It can also involve assessing the potential impact of decisions on multiple stakeholders. By removing personal biases from the equation, it is possible to make more informed decisions. Another important strategy for identifying and addressing bias is to question the sources of information. It is important to consider the credibility of sources, as well as any potential biases that may be present.

Fact-checking sources and considering multiple perspectives can help identify any potential biases in the information being used. In addition, it is important to remain aware of our own biases. We all have preconceived notions about certain topics that can affect our decision-making process. By being mindful of our biases, we can avoid making decisions that are influenced by them. Finally, it is important to be open to other perspectives and willing to engage in meaningful dialogue with others.

Types of Bias

Bias is an inclination or preference for or against a person, group, or idea. It can be an unconscious preference that influences decision making and can lead to adverse outcomes. It is important to recognize bias because it can have a negative impact on our ability to make sound decisions and engage in problem solving and critical thinking. Bias can manifest itself in various ways, from subtle mental shortcuts to overt prejudices. Types of bias include confirmation bias, where we seek out information that confirms our existing beliefs; availability bias, where we base decisions on the information that is most readily available; and representativeness bias, where we assume that two events or objects are related because they share similar characteristics. Other forms include the halo effect, where a single positive quality or trait can influence the perception of an entire person, and stereotyping, the tendency to make judgments about individuals based on their perceived membership in a certain group. Recognizing these biases in ourselves and others allows us to make informed decisions and engage in problem solving and critical thinking.

Sources of Bias

Bias can have a profound effect on decisions, leading to outcomes that are not based on facts or evidence. Personal opinions and values can lead to biased decision-making. They can be shaped by past experiences, cultural background, and other personal factors. For example, someone's opinion about a certain topic may be based on what they have previously heard or read. Similarly, preconceived notions can also lead to biased conclusions. Cultural norms can also play a role in creating bias.

For instance, people may be more likely to believe information from a source they trust or respect, even if it is not based on fact. Similarly, people may be more likely to make decisions that conform to the expectations of their culture or society. In addition, people can also be influenced by their own prejudices or stereotypes. This type of bias can lead to unfair treatment of certain individuals or groups of people. Finally, it is important to be aware of the potential for confirmation bias, where people will seek out information that confirms their existing beliefs and disregard any contradictory evidence. By recognizing and understanding these sources of bias, people can make more informed decisions and engage in more effective problem solving and critical thinking.

In conclusion, recognizing and addressing bias is an essential part of problem solving and critical thinking. Bias can come from many sources, including our own beliefs, cultural norms, and past experiences. Knowing the types of bias and strategies for identifying and addressing them can help us make informed decisions and better engage in critical thinking. Taking time to reflect on our own biases is also important for making unbiased decisions.

Ultimately, recognizing and addressing bias will improve our problem-solving and critical thinking skills.


2.2 Overcoming Cognitive Biases and Engaging in Critical Reflection

Learning Objectives

By the end of this section, you will be able to:

  • Label the conditions that make critical thinking possible.
  • Classify and describe cognitive biases.
  • Apply critical reflection strategies to resist cognitive biases.

To resist the potential pitfalls of cognitive biases, we have taken some time to recognize why we fall prey to them. Now we need to understand how to resist easy, automatic, and error-prone thinking in favor of more reflective, critical thinking.

Critical Reflection and Metacognition

To promote good critical thinking, put yourself in a frame of mind that allows critical reflection. Recall from the previous section that rational thinking requires effort and takes longer. However, it will likely result in more accurate thinking and decision-making. As a result, reflective thought can be a valuable tool in correcting cognitive biases. The critical aspect of critical reflection involves a willingness to be skeptical of your own beliefs, your gut reactions, and your intuitions. Additionally, the critical aspect engages in a more analytic approach to the problem or situation you are considering. You should assess the facts, consider the evidence, try to employ logic, and resist the quick, immediate, and likely conclusion you want to draw. By reflecting critically on your own thinking, you can become aware of the natural tendency for your mind to slide into mental shortcuts.

This process of critical reflection is often called metacognition in the literature of pedagogy and psychology. Metacognition means thinking about thinking and involves the kind of self-awareness that engages higher-order thinking skills. Cognition, or the way we typically engage with the world around us, is first-order thinking, while metacognition is higher-order thinking. From a metacognitive frame, we can critically assess our thought process, become skeptical of our gut reactions and intuitions, and reconsider our cognitive tendencies and biases.

To improve metacognition and critical reflection, we need to encourage the kind of self-aware, conscious, and effortful attention that may feel unnatural and may be tiring. Typical activities associated with metacognition include checking, planning, selecting, inferring, self-interrogating, interpreting an ongoing experience, and making judgments about what one does and does not know (Hacker, Dunlosky, and Graesser 1998). By practicing metacognitive behaviors, you are preparing yourself to engage in the kind of rational, abstract thought that will be required for philosophy.

Good study habits, including managing your workspace, giving yourself plenty of time, and working through a checklist, can promote metacognition. When you feel stressed out or pressed for time, you are more likely to make quick decisions that lead to error. Stress and lack of time also discourage critical reflection because they rob your brain of the resources necessary to engage in rational, attention-filled thought. By contrast, when you relax and give yourself time to think through problems, you will be clearer, more thoughtful, and less likely to rush to the first conclusion that leaps to mind. Similarly, background noise, distracting activity, and interruptions will prevent you from paying attention. You can use this checklist to try to encourage metacognition when you study:

  • Check your work.
  • Plan ahead.
  • Select the most useful material.
  • Infer from your past grades to focus on what you need to study.
  • Ask yourself how well you understand the concepts.
  • Check your weaknesses.
  • Assess whether you are following the arguments and claims you are working on.

Cognitive Biases

In this section, we will examine some of the most common cognitive biases so that you can be aware of traps in thought that can lead you astray. Cognitive biases are closely related to informal fallacies. Both fallacies and biases provide examples of the ways we make errors in reasoning.

Connections

See the chapter on logic and reasoning for an in-depth exploration of informal fallacies.


Confirmation Bias

One of the most common cognitive biases is confirmation bias, which is the tendency to search for, interpret, favor, and recall information that confirms or supports your prior beliefs. Like all cognitive biases, confirmation bias serves an important function. For instance, one of the most reliable forms of confirmation bias is the belief in our shared reality. Suppose it is raining. When you first hear the patter of raindrops on your roof or window, you may think it is raining. You then look for additional signs to confirm your conclusion, and when you look out the window, you see rain falling and puddles of water accumulating. Most likely, you will not be looking for irrelevant or contradictory information. You will be looking for information that confirms your belief that it is raining. Thus, you can see how confirmation bias—based on the idea that the world does not change dramatically over time—is an important tool for navigating in our environment.

Unfortunately, as with most heuristics, we tend to apply this sort of thinking inappropriately. One example that has recently received a lot of attention is the way in which confirmation bias has increased political polarization. When searching for information on the internet about an event or topic, most people look for information that confirms their prior beliefs rather than what undercuts them. The pervasive presence of social media in our lives is exacerbating the effects of confirmation bias since the computer algorithms used by social media platforms steer people toward content that reinforces their current beliefs and predispositions. These multimedia tools are especially problematic when our beliefs are incorrect (for example, they contradict scientific knowledge) or antisocial (for example, they support violent or illegal behavior). Thus, social media and the internet have created a situation in which confirmation bias can be “turbocharged” in ways that are destructive for society.

Confirmation bias is a result of the brain’s limited ability to process information. Peter Wason (1960) conducted early experiments identifying this kind of bias. He asked subjects to identify the rule that applies to a sequence of numbers—for instance, 2, 4, 8. Subjects were told to generate examples to test their hypothesis. What he found is that once a subject settled on a particular hypothesis, they were much more likely to select examples that confirmed their hypothesis rather than negated it. As a result, they were unable to identify the real rule (any ascending sequence of numbers) and failed to “falsify” their initial assumptions. Falsification is an important tool in the scientist’s toolkit when they are testing hypotheses and is an effective way to avoid confirmation bias.
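To make the contrast between confirming and falsifying tests concrete, here is a minimal Python sketch. It is not from the textbook: the hidden rule follows the description above (any ascending sequence), the "doubling" hypothesis is the guess a subject might form after seeing 2, 4, 8, and the specific test triples are invented for illustration.

```python
def hidden_rule(triple):
    # The experimenter's actual rule: any strictly ascending sequence of numbers.
    a, b, c = triple
    return a < b < c

def doubling_hypothesis(triple):
    # A plausible first guess after seeing 2, 4, 8: each number doubles the last.
    a, b, c = triple
    return b == 2 * a and c == 2 * b

# Confirmation-style testing: only propose triples the hypothesis predicts will fit.
for triple in [(3, 6, 12), (5, 10, 20), (1, 2, 4)]:
    # Both return True every time, so the hypothesis looks confirmed but is never really tested.
    print(triple, doubling_hypothesis(triple), hidden_rule(triple))

# Falsification-style testing: also propose triples the hypothesis predicts will NOT fit.
for triple in [(1, 2, 3), (2, 4, 7)]:
    # The hypothesis says False, yet the hidden rule says True, so the hypothesis is refuted.
    print(triple, doubling_hypothesis(triple), hidden_rule(triple))
```

Only the second kind of test can reveal that the real rule is broader than the hypothesis, which is the sense in which falsification guards against confirmation bias.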

In philosophy, you will be presented with different arguments on issues, such as the nature of the mind or the best way to act in a given situation. You should take your time to reason through these issues carefully and consider alternative views. What you believe to be the case may be right, but you may also fall into the trap of confirmation bias, seeing confirming evidence as better and more convincing than evidence that calls your beliefs into question.

Anchoring Bias

Confirmation bias is closely related to another bias known as anchoring. Anchoring bias refers to our tendency to rely on initial values, prices, or quantities when estimating the actual value, price, or quantity of something. If you are presented with a quantity, even if that number is clearly arbitrary, you will have a hard time discounting it in your subsequent calculations; the initial value "anchors" subsequent estimates. For instance, Tversky and Kahneman (1974) reported an experiment in which subjects were asked to estimate the number of African nations in the United Nations. First, the experimenters spun a wheel of fortune in front of the subjects that produced a random number between 0 and 100. Let's say the wheel landed on 79. Subjects were asked whether the number of nations was higher or lower than the random number. Subjects were then asked to estimate the real number of nations. Even though the initial anchoring value was random, people in the study found it difficult to deviate far from that number. For subjects receiving an initial value of 10, the median estimate of nations was 25, while for subjects receiving an initial value of 65, the median estimate was 45.

In the same paper, Tversky and Kahneman described the way that anchoring bias interferes with statistical reasoning. In a number of scenarios, subjects made irrational judgments about statistics because of the way the question was phrased (i.e., they were tricked when an anchor was inserted into the question). Instead of expending the cognitive energy needed to solve the statistical problem, subjects were much more likely to “go with their gut,” or think intuitively. That type of reasoning generates anchoring bias. When you do philosophy, you will be confronted with some formal and abstract problems that will challenge you to engage in thinking that feels difficult and unnatural. Resist the urge to latch on to the first thought that jumps into your head, and try to think the problem through with all the cognitive resources at your disposal.

Availability Heuristic

The availability heuristic refers to the tendency to evaluate new information based on the most recent or most easily recalled examples. The availability heuristic occurs when people take easily remembered instances as being more representative than they objectively are (i.e., based on statistical probabilities). In very simple situations, the availability of instances is a good guide to judgments. Suppose you are wondering whether you should plan for rain. It may make sense to anticipate rain if it has been raining a lot in the last few days since weather patterns tend to linger in most climates. More generally, scenarios that are well-known to us, dramatic, recent, or easy to imagine are more available for retrieval from memory. Therefore, if we easily remember an instance or scenario, we may incorrectly think that the chances are high that the scenario will be repeated. For instance, people in the United States estimate the probability of dying by violent crime or terrorism much more highly than they ought to. In fact, these are extremely rare occurrences compared to death by heart disease, cancer, or car accidents. But stories of violent crime and terrorism are prominent in the news media and fiction. Because these vivid stories are dramatic and easily recalled, we have a skewed view of how frequently violent crime occurs.

Another more loosely defined category of cognitive bias is the tendency for human beings to align themselves with groups with whom they share values and practices. The tendency toward tribalism is an evolutionary advantage for social creatures like human beings. By forming groups to share knowledge and distribute work, we are much more likely to survive. Not surprisingly, human beings with pro-social behaviors persist in the population at higher rates than human beings with antisocial tendencies. Pro-social behaviors, however, go beyond wanting to communicate and align ourselves with other human beings; we also tend to see outsiders as a threat. As a result, tribalistic tendencies both reinforce allegiances among in-group members and increase animosity toward out-group members.

Tribal thinking makes it hard for us to objectively evaluate information that either aligns with or contradicts the beliefs held by our group or tribe. This effect can be demonstrated even when in-group membership is not real or is based on some superficial feature of the person—for instance, the way they look or an article of clothing they are wearing. A related bias is called the bandwagon fallacy. The bandwagon fallacy can lead you to conclude that you ought to do something or believe something because many other people do or believe the same thing. While other people can provide guidance, they are not always reliable. Furthermore, just because many people believe something doesn't make it true.


Sunk Cost Fallacy

Sunk costs refer to the time, energy, money, or other costs that have been paid in the past. These costs are “sunk” because they cannot be recovered. The sunk cost fallacy is thinking that attaches a value to things in which you have already invested resources that is greater than the value those things have today. Human beings have a natural tendency to hang on to whatever they invest in and are loath to give something up even after it has been proven to be a liability. For example, a person may have sunk a lot of money into a business over time, and the business may clearly be failing. Nonetheless, the businessperson will be reluctant to close shop or sell the business because of the time, money, and emotional energy they have spent on the venture. This is the behavior of “throwing good money after bad” by continuing to irrationally invest in something that has lost its worth because of emotional attachment to the failed enterprise. People will engage in this kind of behavior in all kinds of situations and may continue a friendship, a job, or a marriage for the same reason—they don’t want to lose their investment even when they are clearly headed for failure and ought to cut their losses.
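As a rough illustration of the reasoning (not from the textbook, and with invented numbers), a rational rule compares only prospective costs and benefits, while sunk-cost reasoning lets money already spent distort the comparison:

```python
def should_continue(future_benefit, future_cost):
    # Rational rule: only prospective amounts matter; sunk costs cannot be recovered either way.
    return future_benefit > future_cost

def sunk_cost_reasoning(future_benefit, future_cost, already_spent):
    # Fallacious rule: treat quitting as "losing" the past investment, inflating the case to continue.
    return already_spent + future_benefit > future_cost

already_spent = 50_000   # hypothetical money already sunk into a failing business
future_cost = 30_000     # what it would cost to keep operating
future_benefit = 20_000  # what continuing is realistically expected to return

print(should_continue(future_benefit, future_cost))                     # False: cut your losses
print(sunk_cost_reasoning(future_benefit, future_cost, already_spent))  # True: throw good money after bad
```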

A similar type of faulty reasoning leads to the gambler's fallacy, in which a person reasons that future chance events will be more likely if they have not happened recently. For instance, if I flip a coin many times in a row, I may get a string of heads. But even if I flip several heads in a row, that does not make it more likely I will flip tails on the next coin flip. Each coin flip is statistically independent, and there is an equal chance of turning up heads or tails. The gambler, like the reasoner from sunk costs, is tied to the past when they should be reasoning about the present and future.
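A short simulation (again an illustration, not part of the original text) makes the independence point concrete: even immediately after a run of five heads, tails still comes up about half the time.

```python
import random

random.seed(0)
flips = [random.choice("HT") for _ in range(200_000)]

streaks = 0
tails_after_streak = 0
for i in range(5, len(flips)):
    if flips[i - 5:i] == list("HHHHH"):     # five heads in a row have just occurred
        streaks += 1
        tails_after_streak += flips[i] == "T"

# The ratio stays close to 0.5: a streak of heads does not make tails "due".
print(streaks, tails_after_streak / streaks)
```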

There are important social and evolutionary purposes for past-looking thinking. Sunk-cost thinking keeps parents engaged in the growth and development of their children after they are born. Sunk-cost thinking builds loyalty and affection among friends and family. More generally, a commitment to sunk costs encourages us to engage in long-term projects, and this type of thinking has the evolutionary purpose of fostering culture and community. Nevertheless, it is important to periodically reevaluate our investments in both people and things.

In recent ethical scholarship, there is some debate about how to assess the sunk costs of moral decisions. Consider the case of war. Just-war theory dictates that wars may be justified in cases where the harm imposed on the adversary is proportional to the good gained by the act of defense or deterrence. It may be that, at the start of the war, those costs seemed proportional. But after the war has dragged on for some time, it may seem that the objective cannot be obtained without a greater quantity of harm than had been initially imagined. Should the evaluation of whether a war is justified estimate the total amount of harm done or prospective harm that will be done going forward (Lazar 2018)? Such questions do not have easy answers.

Table 2.1 summarizes these common cognitive biases.

Think Like a Philosopher

As we have seen, cognitive biases are built into the way human beings process information. They are common to us all, and it takes self-awareness and effort to overcome the tendency to fall back on biases. Consider a time when you have fallen prey to one of the five cognitive biases described above. What were the circumstances? Recall your thought process. Were you aware at the time that your thinking was misguided? What were the consequences of succumbing to that cognitive bias?

Write a short paragraph describing how that cognitive bias allowed you to make a decision you now realize was irrational. Then write a second paragraph describing how, with the benefit of time and distance, you would have thought differently about the incident that triggered the bias. Use the tools of critical reflection and metacognition to improve your approach to this situation. What might have been the consequences of behaving differently? Finally, write a short conclusion describing what lesson you take from reflecting back on this experience. Does it help you understand yourself better? Will you be able to act differently in the future? What steps can you take to avoid cognitive biases in your thinking today?



Access for free at https://openstax.org/books/introduction-philosophy/pages/1-introduction
  • Authors: Nathan Smith
  • Publisher/website: OpenStax
  • Book title: Introduction to Philosophy
  • Publication date: Jun 15, 2022
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/introduction-philosophy/pages/1-introduction
  • Section URL: https://openstax.org/books/introduction-philosophy/pages/2-2-overcoming-cognitive-biases-and-engaging-in-critical-reflection

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License.


Critical Thinking and Reasoning


What is Bias?

Recognizing and considering the implications of bias in a source is an important component of critical thinking.

"Bias is a natural inclination for or against an idea, object, group, or individual. It is often learned and is highly dependent on variables like a person’s socioeconomic status, race, ethnicity, educational background, etc. At the individual level, bias can negatively impact someone’s personal and professional relationships; at a societal level, it can lead to unfair persecution of a group, such as the Holocaust and slavery.

A category of biases, known as cognitive biases, are repeated patterns of thinking that can lead to inaccurate or unreasonable conclusions. Cognitive biases may help people make quicker decisions, but those decisions aren’t always accurate. Some common reasons why include flawed memory, scarce attention, natural limits on the brain’s ability to process information, emotional input, social pressures, and even aging. When assessing research—or even one's own thoughts and behaviors—it’s important to be aware of cognitive biases and attempt to counter their effects whenever possible." From: Psychology Today https://www.psychologytoday.com/us/basics/bias


Critical thinking

We’ve already established that information can be biased. Now it’s time to look at our own bias.

Studies have shown that we are more likely to accept information when it fits into our existing worldview, a phenomenon known as confirmation or myside bias (for examples see Kappes et al., 2020; McCrudden & Barnes, 2016; Pilditch & Custers, 2018). Wittebols (2019) defines it as a “tendency to be psychologically invested in the familiar and what we believe and less receptive to information that contradicts what we believe” (p. 211). Quite simply, we may reject information that doesn’t support our existing thinking.

This can manifest in a number of ways, with Hahn and Harris (2014) suggesting four main behaviours:

  • Searching only for information that supports our held beliefs
  • Failing to critically evaluate information that supports our held beliefs - accepting it at face value - while explaining away or being overly critical of information that might contradict them
  • Becoming set in our thinking, once an opinion has been formed, and deliberately ignoring any new information on the topic
  • A tendency to be overconfident with the validity of our held beliefs.

Peters (2020) also suggests that we’re more likely to remember information that supports our way of thinking, further cementing our bias. Taken together, the research suggests that bias has a huge impact on the way we think.

Filter bubbles and echo chambers

The theory of filter bubbles emerged in 2011, proposed by an Internet activist, Eli Pariser. He defined it as “your own personal unique world of information that you live in online” (Pariser, 2011, 4:21). At the time that Pariser proposed the filter bubble theory, he focused on the impact of algorithms, connected with social media platforms and search engines, which prioritised content and personalised results based on the individual’s past online activity, suggesting “the Internet is showing us what it thinks we want to see, but not necessarily what we should see” (Pariser, 2011, 3:47; watch his TED talk if you’d like to know more).

Our understanding of filter bubbles has now expanded to recognise that individuals also select and create their own filter bubbles. This happens when you seek out like-minded individuals or sources and follow friends or people you admire on social media; that is, people with whom you are likely to share common beliefs, points of view, and interests. Barack Obama (2017) addressed the concept of filter bubbles in his presidential farewell address:

For too many of us it’s become safer to retreat into our own bubbles, whether in our neighbourhoods, or on college campuses, or places of worship, or especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions… Increasingly we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there. (Obama, 2017, 22:57)

Filter bubbles are not unique to the social media age. Previously, the term echo chamber was used to describe the same phenomenon in the news media where different channels exist, catering to different points of view. Within an echo chamber, people are able to seek out information that supports their existing beliefs, without encountering information that might challenge, contradict or oppose.

Other forms of bias

There are many different ways in which bias can affect the way you think and how you process new information. For additional forms of bias, see Buzzfeed’s 2017 article on cognitive bias.

Critical Thinking About Sources


What is Bias?

From Cambridge Dictionary of Sociology: Bias refers to those aspects of the social research process that may skew the findings in some way. The main identified sources of bias concern the researcher or informant, the measurement instruments or methods, and the sampling procedures. Biased measures fail to do a good job of measuring the things they are purported to measure and therefore lack validity. Biased samples are not representative of the relevant population or set of cases they are meant to reflect.

Hundreds of different types of bias have been identified, and each broad category contains many specific examples. Some commonly recognized categories of bias are:

  • Cognitive bias
  • Contextual bias
  • Unconscious or implicit bias
  • Statistical bias

Cognitive Bias


A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world.

20 Cognitive Biases that screw up your decisions - from Business Insider

  • Anchoring bias: People are over-reliant on the first piece of information they hear. In a salary negotiation, whoever makes the first offer establishes a range of reasonable possibilities in each person's mind.
  • Availability heuristic: People overestimate the importance of the information available to them. A person might argue that smoking is not unhealthy because they know someone who lived to 100 and smoked three packs a day.
  • Bandwagon Effect: The probability of one person adopting a belief increases based on the number of people who hold that belief. This is a powerful form of groupthink and is a reason why meetings are often unproductive.
  • Blind-spot bias: Failing to recognize your own cognitive bias is a bias in itself. People notice cognitive and motivational biases much more in others than in themselves.
  • Choice-supportive bias: When you choose something, you tend to feel positive about it, even if that choice has flaws. Like how you think your dog is awesome - even if it bites people every once in a while.
  • Clustering illusion: This is the tendency to see patterns in random events. It is key to various gambling fallacies, like the idea that red is more or less likely to turn up on a roulette table after a string of reds.
  • Confirmation bias: We tend to listen only to information that confirms our preconceptions - one of the many reasons it's so hard to have an intelligent conversation about climate change.
  • Conservatism bias: Where people favor prior evidence over new evidence or information that has emerged. People were slow to accept that the earth was round because they maintained their earlier understanding that the planet was flat.
  • Information bias: The tendency to seek information when it does not affect action. More information is not always better. With less information, people can often make more accurate predictions.
  • Ostrich effect: The decision to ignore dangerous or negative information by "burying" one's head in the sand, like an ostrich. Research suggests that investors check the value of their holdings significantly less often during bad markets.
  • Outcome bias: Judging a decision based on the outcome - rather than how exactly the decision was made in the moment. Just because you won a lot in Vegas doesn't mean gambling your money was a smart decision.
  • Overconfidence: Some of us are too confident about our abilities, and this causes us to take greater risks in our daily lives. Experts are more prone to this bias than laypeople, since they are more convinced that they are right.
  • Placebo effect: When simply believing that something will have a certain effect on you causes it to have that effect. In medicine, people given fake pills often experience the same physiological effects as people given the real thing.
  • Pro-innovation bias: When a proponent of an innovation tends to overvalue its usefulness and undervalue its limitations. Sound familiar, Silicon Valley?
  • Recency: The tendency to weight the latest information more heavily than older data. Investors often think the market will always look the way it looks today and make unwise decisions.
  • Salience: Our tendency to focus on the most easily recognizable features of a person or concept. When you think about dying, you might worry about being mauled by a lion, as opposed to what is statistically more likely, like dying in a car accident.
  • Selective perception: Allowing our expectations to influence how we perceive the world. An experiment involving a football game between students from two universities showed that one team saw the opposing team commit more infractions.
  • Stereotyping: Expecting a group or person to have certain qualities without having real information about the person. It allows us to quickly identify strangers as friends or enemies, but people tend to overuse and abuse it.
  • Survivorship bias: An error that comes from focusing only on surviving examples, causing us to misjudge a situation. For instance, we might think that being an entrepreneur is easy because we haven't heard of all those who failed.
  • Zero-risk bias: Sociologists have found that we love certainty - even if it's counterproductive. Eliminating risk entirely means there is no chance of harm being caused.



Contextual Debiasing and Critical Thinking: Reasons for Optimism

  • Published: 26 April 2016
  • Volume 37, pages 103–111 (2018)


  • Vasco Correia


Abstract

In this article I argue that most biases in argumentation and decision-making can and should be counteracted. Although biases can prove beneficial in certain contexts, I contend that they are generally maladaptive and need correction. Yet critical thinking alone seems insufficient to mitigate biases in everyday contexts. I develop a contextualist approach, according to which cognitive debiasing strategies need to be supplemented by extra-psychic devices that rely on social and environmental constraints in order to promote rational reasoning. Finally, I examine several examples of contextual debiasing strategies and show how they can contribute to enhance critical thinking at a cognitive level.


1 Introduction

Is critical thinking as ineffective and unreliable in preventing biases as some authors seem to suggest (Mercier and Sperber 2011; Paluck and Green 2009; Willingham 2007; Wilson et al. 2002)? This claim is supported by a number of empirical studies demonstrating that people’s cognitive and affective biases are persistently immune to critical thinking in its various forms. Hence the suggestion that the teaching of critical thinking skills “does not seem to yield very good results” (Mercier and Sperber 2011, p. 65), or that critical thinking programs only brought about “modest benefits” (Willingham 2007, p. 8). Even when critical thinking involves actively teaching people about biases and telling them not to be influenced by them, it is argued that critical thinking “is not as effective as one might hope” (Kenyon and Beaulac 2014: 343), and perhaps “absolutely worthless” (Arkes 1981, p. 326). What is worse, it appears that debiasing strategies can backfire and end up amplifying, rather than reducing, people’s biases (Galinsky et al. 2000; Sanna et al. 2002). One implication of this, it is claimed, is that those who attempt to debias themselves could end up being even more biased than their peers, not least since they are all the more prone to the illusion of open-mindedness. After all, as Fischhoff (1982, p. 431) observes, “a debiasing procedure may be more trouble than it is worth if it increases people’s faith in their judgmental abilities more than it improves the abilities themselves”. And finally, there are those who contend that biases are adaptive mechanisms that only appear to be “irrational” in light of unrealistic standards of rationality (Cohen 1981; Stein 1996; Oaksford and Chater 2009; Gigerenzer 2008; Stich 1990). In light of these considerations, it may be tempting to jump to the conclusion that it is useless to teach critical thinking to students and to attempt to develop debiasing techniques.

In this paper I challenge most of these assumptions by arguing that inferential and judgmental biases can and should be mitigated. I begin by reviewing the so-called Great Rationality Debate, and bring forth several arguments in support of the view that most biases are overall maladaptive, although I concede that some biases may prove adaptive in some contexts (Sect. 2). However, the claim that we should prevent biases only makes sense under the assumption that debiasing can be achieved. This is why it makes sense to inquire, in Sect. 3, whether debiasing strategies used to promote critical thinking are as ineffective as some suggest. I review some of the studies conducted on debiasing techniques, and show that the evidence is more mixed than the critics pretend. Rather than a coherent and unambiguous set of experiments that would either confirm or infirm the efficacy of debiasing in general terms, as Arkes (1991, p. 496) aptly points out, “[t]he debiasing literature currently contains a desultory catalogue of techniques that work, techniques that do not work, and techniques that work on some tasks but not on others”.

In Sect. 4, I argue that purely cognitive debiasing strategies, which directly seek to improve the way people reason, should nevertheless be supplemented by contextual debiasing strategies, which seek to promote critical thinking by using aspects of the environment. According to the contextualist approach, extra-psychic (or environmental) devices are more effective against biases than critical thinking alone, insofar as they rely on external constraints and social structures that counteract the main causes of bias persistence, namely: people’s unawareness of their own biases, cognitive limitations, lack of motivation to debias, and inadequate correction (Lilienfeld et al. 2009; Wilson et al. 2002). This is not to say, however, that intra-psychic (or cognitive) debiasing strategies are utterly ineffective in preventing biases, but more exactly that there should be a complementarity between the two approaches.

In Sect. 5, I go one step further and argue that some forms of contextual debiasing actually tend to reinforce critical thinking and cognitive change. To show this, I introduce a distinction between two types of contextual strategies: the ones that merely reduce the effects of a given bias (situational correction), and the ones that entail a reduction of the bias itself (dispositional correction). My suggestion is that the latter devices are likely to foster critical thinking skills and dispositions at the level of the individual, albeit indirectly. I substantiate this claim by examining several examples of contextual devices that effectively bring about cognitive improvements. This approach may be seen as complementary to other contextualist accounts, and particularly the one developed by Kenyon and Beaulac (2014) in a recent article.

2 The Great Rationality Debate

Empirical research on biases during the last decades gave rise to a discussion on rationality that Stanovich (2011, p. 6) later dubbed “The Great Rationality Debate”. On the one hand, the so-called Meliorists contend that human reasoning is not as good as it could be, but could be improved. Panglossians, on the other hand, question the very existence of irrationality as it is generally conceptualized, as well as the empirical methods used to demonstrate it (Cohen 1981; Stein 1996; Oaksford and Chater 2009). And the Apologists suggest that the so-called “errors” in reasoning and decision can be adaptive mechanisms, given people’s cognitive limitations, and that we should adopt intuitive strategies well adapted to real-life contexts, rather than optimal standards of rationality (Gigerenzer and Todd 2000; Gigerenzer 2008; Stich 1990).

Part of the problem, as Ainslie (2005, p. 645) points out, is that “Rationality is an elusive concept”, inasmuch as there is no consensual criterion of what is rational or not. Furthermore, the notion of rationality can be discussed at different levels and applied to different types of normativity. First, there is a difference to be made between instrumental and epistemic rationality. Some authors accept, for example, that it may be rational to indulge in wishful thinking and self-deception from a utilitarian standpoint, although this type of attitude is generally deemed irrational from an epistemic standpoint (Engel 2000; Davidson 1985). Second, there are also competing standards of instrumental rationality, namely: frequentist versus Bayesian models, on the one hand; and optimal versus ecological (or bounded) models, on the other hand. And third, as Oaksford and Chater (2009, p. 110) observe, there may be in principle as many types of rational norms as there are different types of goals, contexts and constraints:

Note, too, that rational analysis is goal-relative: it specifies how best to achieve a given goal, in a given environment, with given constraints. So, if your goal is to land a rocket on the moon, your guidance system ought to respect classical physics; if your goal is to avoid contradictions, you ought to reason according to standard logic; and if your goal is to avoid accepting bets that you are bound to lose, you ought to follow the rules of probability theory.

Bearing this in mind, some authors suggest that cognitive and motivational biases are perhaps not as “irrational” as one might expect. McKay and Dennett (2009, p. 498) argue that some types of misbeliefs—namely, positive illusions—are overall “adaptive” from an evolutionary standpoint. In a similar vein, Taylor and Brown (1988) claim that positive illusions tend to enhance people’s well-being and mental health. Self-serving evaluations, illusions of control and unrealistic optimism, they argue, demonstrably contribute to people’s motivation, mood, creativity, productivity, persistence, performance and even the ability to care for others (id., p. 205). Likewise, Gigerenzer (2008), Elster (2007) and other proponents of the bounded rationality approach highlight the benefits of some biases and their underlying heuristics in the process of decision-making, particularly under constraints of time and information in uncertain environments. Given the uncertainty that often characterizes the contexts in which we need to reach decisions, and given our cognitive limitations, it is arguably rational to rely on intuitive judgments, heuristics (or “rules of thumb”), and even on gut feelings when making decisions under uncertainty. Rather than aiming for optimal standards of rationality that seek to maximize the agent’s utility in optimal conditions of choice, theories of bounded rationality conceptualize sub-optimal norms and cognitive tools meant to satisfice the agent’s interest in real-life contexts. Biases and heuristics are useful elements of this “toolbox”.

Nevertheless, judgmental and inferential biases can also prove maladaptive in many other contexts. As Dunning (2009, p. 518) points out, “[t]he literature is filled with numerous counterexamples, strewn across business, education, and policy worlds, in which positive illusions prove costly or even disastrous”. For example, the very same positive illusions that seemingly enhance motivation and performance can also prompt risky behaviour in various domains. Unrealistic optimism and overconfidence, in particular, can lead people to take unnecessary risks, mismanage their budgets, ignore red flags, downplay early symptoms of cancer, procrastinate, and neglect useful information. Similarly, Croskerry et al. (2013a) report a medical case in which a psychiatry resident’s description of the patient led to a framing effect that subsequently affected the diagnosis, eventually resulting in the death of the patient. And there are, of course, many everyday examples of how overconfidence leads people to mismanage their risks and opportunities.

Furthermore, the fact that biases may be beneficial from an instrumental and evolutionary standpoint does not imply that we ought to accept them. As we have noted, biases may be deemed irrational in light of other types of normativity. It is the case from an epistemic standpoint, given that biases entail per definition a distortion of reality, or at least a “deviation” from (what is taken to be) the right standard. But biases may also be deemed illegitimate from an ethical standpoint, insofar as one person’s cognitive illusions may have negative repercussions on other people’s lives. Rawls (2000, p. 54) proposes to distinguish in this sense between the notion of “rationality”, which refers to an epistemic or instrumental norm, and the notion of “reasonableness”, which refers to an ethical norm, and more specifically the requirement to be fair-minded, judicious and able to see other points of view. According to Rawls and other proponents of the “ethics of belief”, at any rate, the realm of morals extends beyond action to reasoning, argumentation and belief. In this view, people ought to be accountable not only for the rationality of their actions, but also for the rationality of their cognitive conduct.

To sum up, my claim amounts to a moderate version of Meliorism: I accept the idea that biases can be adaptive in certain contexts—i.e. rational from an instrumental standpoint—and even the idea that such biases are tolerable and do not need correction. But I also contend that biases can be maladaptive in many other contexts, and that such biases ought to be counteracted through debiasing strategies. Finally, to avoid further ambiguities, I propose to define the notion of debiasing as a strategy (or set of strategies) designed to suppress/mitigate biases, or at least to suppress/mitigate their effects.

3 Mixed Evidence on the Effectiveness of Debiasing

If we accept the idea that it would be useful to counteract a certain number of biases and their effects, the next question is of course whether it is possible to do so. Does debiasing work? Is critical thinking an effective debiasing tool? Some authors appear confident that cognitive illusions can be mitigated by raising awareness of the existence of biases and by teaching people critical thinking. Thagard (2011, p. 160), for one, suggests that “critical thinking can be improved, one hopes, by increasing awareness of the emotional roots of many inferences”. Johnson and Blair (2006, p. 201), in turn, stress the importance of knowing and practicing the correct forms of argumentation as a means to mitigate irrational thinking: “Logic alone is not enough, but awareness of the criteria of good argument, plus practice, plus self-knowledge and knowledge pertinent to the issue—all of these must be integrated into the evaluation of argumentation”.

Moreover, virtue-based approaches to argumentation suggest that training in formal reasoning must be supplemented by the acquisition of good habits of thinking capable of ensuring the rationality of people’s reasoning even when they are not actively trying to be vigilant about biases. Footnote 3 The hope is that a reinforcement of the arguer’s epistemic virtues and skills can prevent, or at least reduce, his or her irrational tendencies. As Hogarth ( 2001 , p. 24) explains, “[o]ver time, and with practice, these new habits will become more automatic and less costly to implement. In other words, they should migrate from the deliberate to the tacit system”. In the best-case scenario, the virtues of one’s “analytical thinking” (the so-called System 2) would be progressively incorporated into one’s “intuitive thinking” (or System 1) and almost effortlessly produce rational and judicious reasoning.

Nevertheless, such optimism about critical thinking has been challenged by a number of authors. In fact, there seems to be a growing consensus that critical thinking is by and large ineffective in preventing cognitive and affective biases. While some authors cautiously warn that critical thinking “is not as effective as one might hope” (Kenyon and Beaulac 2014 , p. 343), or that it can only “claim modest success” (Willingham 2007 , p. 8), others contend that it “does not seem to yield very good results” (Mercier and Sperber 2011 , p. 65), or even that it “has proven to be absolutely worthless” (Arkes 1981 , p. 326).

Several arguments suggest that the confidence with which such statements dismiss the benefits of critical thinking should be tempered. First and foremost, it is worth noting that the question of whether critical thinking is effective or not depends on one’s definition of critical thinking. If critical thinking is understood merely as learning about biases and trying to avoid them by becoming cognitively alert to them—what some call the “naive” or “intuitive” account of critical thinking—the results appear indeed rather disappointing. But if one’s account of critical thinking encompasses the acquisition of epistemic/argumentative virtues and reasoning skills, along with the use of debiasing strategies that take into account people’s cognitive limitations, the results appear arguably more encouraging.

This brings us to the second point: the evidence on the effectiveness of debiasing methods used to foster critical thinking is in fact mixed. Arkes ( 1991 , p. 496) emphasizes this crucial aspect: “[t]he debiasing literature currently contains a desultory catalogue of techniques that work, techniques that do not work, and techniques that work on some tasks but not on others”. Thus, for example, the “consider the opposite” strategy, which involves urging subjects to consider a range of hypotheses at odds with their standpoint, has been shown to reduce the hindsight bias, the overconfidence effect, the confirmation bias, and anchoring effects (Anderson and Sechler 1986 ; Hirt and Markman 1995 ; Galinsky et al. 2000 ; Lord et al. 1984 ; Mussweiler et al. 2000 ; Wilson et al. 2002 ). Yet, if subjects try to come up with many counterfactual thoughts, rather than just a few, the strategy backfires and consistently increases the hindsight bias (Sanna et al. 2002 ). Similarly, training in formal reasoning and statistical education can contribute to reducing purely cognitive (or “cold”) biases, such as the conjunction fallacy, the belief bias and the sunk cost effect (Fischhoff 2002 ; Larrick 2004 ; Nisbett 1993 ; Stanovich and West 2008 ; Tversky and Kahneman 2008 ), while it seems to have either no effect or the effect of amplifying motivational (or “hot”) biases, such as the overconfidence effect and the confirmation bias (Stanovich 2005 ; Taber and Lodge 2006 ). Footnote 4 A similar ambivalence appears to characterize debiasing strategies that specifically target motivational biases, such as the use of incentives and accountability: Arkes ( 1991 , p. 495) points out that “incentives, which are effective in debiasing strategy-based errors, are ineffective in debiasing psychologically based errors”; and Tetlock ( 2002 , p. 590) warns that the list of biases that are successfully reduced by different forms of accountability “must be followed by a compilation of biases that are either unaffected or even amplified by approximately the same forms of accountability”. Likewise, Lilienfeld et al. ( 2009 , p. 393) highlight the fact that there is simply not enough research yet on some of the most common biases and their correlative debiasing techniques:

When examining the literature on debiasing techniques against confirmation bias, one is struck by three glaring facts: the paucity of research on the topic, the lack of theoretical coherence among differing debiasing techniques, and the decidedly mixed research evidence concerning their efficacy.

Furthermore, there are methodological issues that make it difficult to measure the benefits of debiasing and critical thinking programs in the long term. After all, most empirical studies are one-shot interventions that fail to examine the effects of repeated and systematic debiasing training over long periods of time (Lilienfeld et al. 2009 , p. 395; Willingham 2007 , p. 12). Lilienfeld et al. ( 2009 , p. 394) point out, in particular, that “repeated training may be needed to shift the habit of considering alternative viewpoints from a controlled to an automatic processing mode”. To that extent, it seems plausible that the long-lasting acquisition of at least some cognitive skills and epistemic virtues may pass unnoticed in these evaluations. For similar reasons, there is also a risk that experimental results on the efficacy of debiasing, whether positive or negative, may not always be extrapolated to everyday contexts outside of the laboratory. Paluck and Green ( 2009 , p. 351) vehemently stress this aspect: “Those interested in creating effective prejudice-reduction programs must remain skeptical of the recommendations of laboratory experiments until they are supported by research of the same degree of rigor outside of the laboratory”.

The point to be made here is that it would be simplistic to draw the general conclusion that debiasing is (or is not) effective, when the present state of research points to a more complex picture. At any rate, the evidence on the (in)effectiveness of debiasing strategies is less categorical than is sometimes suggested. Certain debiasing strategies appear to effectively reduce some types of biases, but not others, while other strategies seem overall ineffective. In other cases still, as many authors point out, more research is needed to assess the effectiveness of debiasing techniques, as well as their applicability outside of the laboratory (Arkes 1991 ; Croskerry et al. 2013b ; Fischhoff 2002 ; Paluck and Green 2009 ).

If this analysis is correct, it seems reasonable to conclude that there are grounds both for optimism and for pessimism regarding the effectiveness of debiasing. As Fischhoff ( 2002 , p. 730) observes, “[b]oth skeptics and rabid enthusiasts can find ad hoc reasons for creating the pictures that they seek regarding the applicability of research”. The crucial question, then, is whether the benefits of developing and teaching debiasing techniques are worth their costs, given the abovementioned uncertainties associated with their applicability. Perhaps unsurprisingly, many meliorists seem to think so (Larrick 2004 ; Croskerry et al. 2013a ; Elster 2007 ; Stanovich 2011 ; Tetlock 2005 ). Wilson et al. ( 2002 , p. 200) explain why: “The challenges of eliminating [biases] are great, but so may be the personal and societal costs of ignoring the problem”. As we will see in the next section, optimism regarding critical thinking appears all the more justified if we consider that cognitive debiasing can be supplemented by a variety of contextual devices.

4 Reasons for Optimism

According to Willingham’s ( 2007 , p. 8) definition, “critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth”. Other definitions also identify some of these aspects as essential components of critical thinking, and particularly (1) open-mindedness, (2) rational thinking, and (3) self-critical evaluation of one’s thinking skills. Footnote 5 Given the uncertainties regarding the effectiveness of methods designed to transmit these skills directly, the question to be asked is whether there are indirect ways of promoting critical thinking that are more likely to counteract biases. More specifically, the idea is that critical thinking could become more effective if it were supplemented by debiasing strategies that rely on extra-psychic, environmental and social structures, rather than merely on cognitive improvements at an individual level. This model is in line with what Thaler and Sunstein ( 2008 ) call a “choice architecture”, which seeks to promote rationality, not by reforming people’s cognitive system, but by imposing constraints on the contexts in which people reason and make decisions.

This approach was first proposed by Larrick ( 2004 , p. 318), who dubbed it the Technologist approach to stress the notion that “individual reasoning can approach normative standards through the use of tools”. In his view, cognitive limitations at an individual level can be overcome with the help of external devices such as group interaction, decision aids, formal decision analysis, and statistical models. As the author points out, “[d]ebates about rationality have focused on purely cognitive strategies, obscuring the possibility that the ultimate standard of rationality might be the decision to make use of superior tools” (Larrick 2004 , p. 318). Hence the “tools” metaphor, which conveys the idea that critical thinking can be developed beyond the individual’s natural capacities through the use of external devices that rely on social and environmental constraints. Footnote 6 Rather than simply teaching individuals the correct forms of reasoning, warning them about biases and expecting them to apply this information in real-life contexts, it may be useful to set external constraints meant to enforce the practice of critical thinking. After all, as Paul ( 1986 , p. 379) points out, “[i]t is possible to develop extensive skills in argument analysis and construction without ever seriously applying those skills in a self-critical way to one’s own deepest beliefs, values, and convictions”.

Larrick’s Technologist approach could also be labelled Contextualist, in the sense that it proposes to focus on contextual changes rather than (merely) on personal changes. Other authors have also explicitly endorsed this view. Stanovich ( 2011 , p. 8), for example, writes that “in cases where teaching people the correct reasoning strategies is difficult, it may well be easier to change the environment so that decision making errors are less likely to occur”. Soll et al. ( 2015 ), in the same vein, suggest that it is possible to fight biases by resorting to techniques designed to offset the shortcomings of intuitive thinking at an individual level, such as “blinding” (or selective exposure to information), taking an outside view, or even seeking advice. Hogarth ( 2001 , p. 20) also insists at length on the role of environmental variables in the shaping of intuitive thinking, which is particularly manifest in his concept of learning structures: “Some environments have favourable learning structures in that people receive quick and accurate feedback. In other learning structures, feedback can teach people the ‘wrong’ lessons”. Finally, Kenyon and Beaulac ( 2014 ) develop a four-level taxonomy of debiasing strategies, ranging from the most individualistic to the most contextual, and argue that only the latter are significantly effective in mitigating biases.

However, the contextualist approach need not imply that the teaching of critical thinking is unnecessary and that debiasing devices should focus exclusively on modifying the environment rather than the person. On this point, I would disagree with Kenyon and Beaulac ( 2014 , p. 354), who claim almost categorically that critical thinking programs based on the assumption that individuals can learn to debias themselves are bound to fail: “[w]e think this is practically impossible”. They suggest that the ineffectiveness of debiasing devices involving cognitive change (Levels 1 and 2, according to their taxonomy) has been empirically established. Yet, they acknowledge that the relevant evidence is actually mixed: “this is not to say that no debiasing strategies have been shown to work in this literature. A range of strategies work to varying degrees, depending on the bias” ( id ., p. 347). Moreover, the probabilistic argument they bring forward to explain why contextual debiasing is worth trying could also be applied to cognitive debiasing: “Unless the probability of success under our broader construal of debiasing outcomes is literally zero, the addition of this slate of options can only improve the chances of overall success in teaching debiasing skills”. Given the uncertainties associated with the testing of both types of debiasing, why limit this principle of charity to only one of them?

To sum up, my suggestion is that critical thinking alone is insufficient to counteract biased thinking, and that intra-psychic (or cognitive) methods need to be supplemented by the use of extra-psychic (or contextual) ones. As Wilson et al. ( 2002 , p. 192) observe, “just because people attempt to correct a judgment they perceive to be biased is no guarantee that their result will be a more accurate judgment”. That being said, it is one thing to suggest that critical thinking is insufficient, and quite another to dismiss critical thinking as outright useless in that respect. In the next section, I will argue that contextualist tools are plausibly more effective than cognitive devices, insofar as they specifically target the main sources of bias persistence. But I also highlight another aspect of contextualist debiasing that is often neglected by its proponents, namely that it also contributes indirectly to promoting critical thinking skills and dispositions at the level of the individual.

5 Contextual Debiasing and Critical Thinking

The advantage of contextual debiasing techniques is that they allow individuals to “outsmart [their] own biases”, to borrow Soll et al.’s ( 2015 ) expression, without having to rely on unrealistic assumptions regarding their cognitive and motivational capacities. Even “cognitive misers”, “positive thinkers” and “akratic believers” should in principle be able to conform to rational standards, provided they agree to submit their thinking to a certain number of external constraints.

This can be achieved in two ways: either the extra-psychic device eliminates/reduces the unwanted bias (dispositional correction), or it merely eliminates/reduces its effects (situational correction). This distinction is helpful inasmuch as it is indicative of the type of contextual debiasing that is likely to promote critical thinking. If the purpose of a strategy is simply to mitigate the undesired repercussions of a bias, without changing the subject’s cognitive processes, it is clear that the bias will persist, along with the dispositional features that underlie it. A good illustration of this type of strategy is the control of the source of information—also known as blinding—which according to Wilson et al. ( 2002 , p. 192) is the most effective debiasing tool: “A stimulus that never enters our minds cannot bias our judgment or feelings”. The peer review system, for example, shows how effective it can be to limit exposure to information liable to influence one’s judgment. Even if a reviewer is racist or misogynistic, this tendentiousness will not affect his or her evaluation of the author’s work, since the author remains anonymous. Thus, for example, when the journal Behavioural Ecology adopted double-blind peer review, the representation of female authors increased by 33 per cent (Budden et al. 2008 ). This type of device is characteristic of what Kenyon and Beaulac ( 2014 , p. 353) describe as the most reliable level of debiasing (Level 4), which “tolerate[s] the occurrence of individually manifest biased judgments, but minimize[s] their significance in determining actions or outcomes”.

The problem with this type of method is that it is often difficult to implement. First, as Wilson et al. ( 2002 , p. 195) observe, it is not possible to control exposure to all biasing information: “[w]hen deciding which employees should be promoted, for example, we already know their gender, age, and race”. After all, in most cases individuals do not know in advance whether the available information is potentially biasing. Second, if misused, selective exposure may actually amplify the confirmation bias, given that it involves by definition a manipulation of the source of information. It is reported, for example, that the former United States vice-president, Dick Cheney, requested that the television always be tuned to the Fox News Channel before he entered a hotel room (Dick Cheney’s Suite Demands 2006 ). In such cases, as Wilson and Brekke ( 1994 , p. 135) suggest, this strategy might end up “fostering narrow-mindedness, stifling creativity, and inhibiting social change”. But above all, this type of device does not suppress the unwanted bias, but only its immediate repercussions. Although this is a significant gain with respect to fairness of opportunity and social change, it does not constitute a direct improvement with respect to critical thinking. Thus, for example, the fact that traditionally discriminated-against groups increasingly succeed in getting published can lead, at best, to an indirect reduction of the stereotype, and thereby to fairer assessments of the quality of their work. But selective exposure does nothing to eliminate the bias itself or to prevent it from resurfacing in a less controllable situation. As Kenyon and Beaulac ( 2014 , p. 358) acknowledge, this type of debiasing process “can be entirely arational from the perspective of the agents in the situation”.

Contextual debiasing strategies that involve dispositional correction, on the other hand, are more likely to reduce biases in everyday contexts and effectively promote critical thinking. This type of strategy typically requires an external constraint that has the virtue of “forcing” the subject to detect and correct potential biases in his or her reasoning. From this perspective, perhaps the most promising tool is accountability, which Tetlock ( 2002 , p. 583) defines as “the expectation that one will be called on to justify one’s opinions or conduct to others”. After conducting a series of experiments on the topic, the author maintains that “[t]here is indeed a substantial list of biases that are attenuated, if not eliminated, by certain forms of accountability” ( id ., p. 590). This list includes the biases of overconfidence, primacy, overattribution, illusory correlation, and the fundamental attribution error. The effectiveness of accountability in preventing such biases is presumably due to the fact that it enhances people’s motivation to engage in debiasing efforts. When subjects believe that others will scrutinize their arguments, they are in principle more willing to make sure that those arguments are sound and rational. In one study, for example, it appeared that subjects accountable to unknown audiences tended to engage more in self-criticism and to raise potential objections to their own views, not because they cared more about the truth than those who were not accountable, but because they feared that others might find serious flaws in their reasoning (Tetlock and Boettger 1989 ). Moreover, as Brest and Krieger ( 2010 , p. 628) point out, the motivation to engage in self-scrutiny also works as an incentive to increase people’s awareness of potential biases and to overcome their resistance towards challenging counterarguments: “accountability is likely to reduce error and bias in contexts in which, for whatever reasons, people tend to make mistakes that they could prevent with extra attention or effort”. Accountable thinkers have more to lose than isolated thinkers in terms of self-image and social image, which means that the costs of not being aware of one’s own biases, or not being open to alternative standpoints, are much higher. Tetlock ( 2002 , p. 585) speculates that this motivation to be open-minded is often negative: “To minimize potential embarrassment, subjects demonstrated their awareness of alternative perspectives: ‘You can see that I am no fool. I may believe X, but I understand the arguments for Y’.” Be that as it may, accountable subjects tend to display fewer biases and to consider more alternative hypotheses than their peers.

There are, however, limitations to this strategy, given that accountability may occasionally amplify other types of biases (group polarization, compromise effect, etc.). A person who is accountable to a group with known views, in particular, may end up saying what the others want to hear. For example, a group of feminist women became even more feminist after discussion (Myers 1975 ), and a group of federal judges appointed by Republican presidents tended to show even more conservative voting patterns when sitting only with fellow Republican appointees (Sunstein et al. 2004 ). According to Sunstein, the most important reason for this is that the arguments exchanged by members of a group with some predisposition regarding a particular issue tend to focus uniquely on reasons in favor of that predisposition: “A group whose members tend to think that Israel is the real aggressor in the Middle East conflict will hear many arguments to that effect and relatively fewer opposing views” (Sunstein 2003 , p. 121).

A second contextual strategy that involves cognitive change is group interaction. As Larrick ( 2004 , p. 326) points out, “groups serve as an error-checking system during interaction”. In addition, groups tend to bring forward more diverse perspectives than lone individuals, which is also a way of promoting critical thinking, bearing in mind that the consideration of alternative hypotheses is admittedly one of the most effective debiasing tools (Arkes 1981 ; Larrick 2004 ; Wilson et al. 2002 ; Pronin et al. 2002 ). Furthermore, as Mercier and Sperber ( 2011 , p. 65) explain, groups can help hold the confirmation bias in check, given that each member of the group tends to focus primarily on the arguments that best support his or her view, thereby exposing his or her opponents to the best available counterarguments. The drawback of this strategy, however, is that it only works effectively if the members of the group are individuals who hold different views on the topic at stake. Otherwise they may end up reinforcing their shared beliefs by an effect of group polarization.

A final example of contextual debiasing is the use of incentives, which can help mitigate certain biases insofar as they increase people’s motivation to reduce them. Thaler and Sunstein ( 2008 ) promote this type of approach in their book Nudge. Instead of enforcing the desired outcomes by resorting to sanctions or accountability, they argue, it is often more fruitful to implement what they call nudges, i.e. “any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options” ( id ., p. 6). As particular types of nudges, incentives can be useful to overcome people’s lack of motivation to debias. It seems reasonable to assume, in particular, that cognitive misers are more likely to become epistemically vigilant if they have something to gain in the process. At any rate, as Lilienfeld et al. ( 2009 , p. 394) report, “research suggests that at least some cognitive biases may be reduced by enhancing participants’ motivation to examine evidence thoughtfully”. That being said, it must be stressed that incentives have no effect on certain biases (e.g. overconfidence, hindsight, framing effects), and can even increase other biases (e.g. representativeness). In fact, while incentives seemingly counteract the problem of lack of motivation to debias, they can exacerbate the problem of inadequate correction. As Pelham and Neter ( 1995 , p. 591) put it: “If the only tool at a person’s disposal is a hammer, convincing the person to work harder can only lead to more vigorous hammering”.

This list is not meant to be exhaustive, and many other strategies of contextual debiasing can be conceived. Footnote 7 The point to be made is that there is a range of contextual techniques that can plausibly reduce certain biases and even enhance critical thinking at the level of the individual. To be sure, as Wilson et al. ( 2002 , p. 200) point out, “these techniques are in their infancy and their reliability, validity, and controllability are still an open question”. The question, however, is not whether debiasing works or not generally speaking, but rather which specific debiasing technique works better in which type of task, and what side-effects it might have.

6 Conclusion

The claim that at least certain biases should be reduced (moderate meliorism) only makes sense if we assume that debiasing methods can be effective. The apparent ineffectiveness of critical thinking has led many authors to believe that debiasing is doomed to fail. While acknowledging that cognitive debiasing alone is insufficient to prevent biases and their effects, I have argued that it is nonetheless effective to a certain extent, and that it can be supplemented by extra-psychic devices that may more reliably “enforce” rational standards. But we have also seen that some types of contextual debiasing, in turn, may contribute indirectly to reinforcing critical thinking skills and dispositions. By submitting their thinking to the right social and environmental structures, individuals have a better chance of conforming to the norms of rationality. Granted, each debiasing technique has its shortcomings. Furthermore, as Lilienfeld et al. ( 2009 , p. 394) observe, “[t]he extent to which debiasing efforts may need to be applied to multiple and diverse problems to be efficacious in the long term is unclear”. My suggestion is that these devices can produce significant improvements as a whole.

Footnote 1: Despite mounting criticism, Meliorism remains the dominant position among philosophers and psychologists. See for example Elster ( 2007 ), Evans ( 2007 ), Kahneman ( 2011 ), Kenyon and Beaulac ( 2014 ), Stanovich ( 2011 ), Larrick ( 2004 ), Croskerry et al. ( 2013a ), Tetlock ( 2005 ), Wilson and Brekke ( 1994 ).

Footnote 2: Taylor ( 1989 , p. 237) explicitly acknowledges this aspect: “Unrealistic optimism might lead people to ignore legitimate risks in their environment and to fail to take measures to offset those risks”.

Footnote 3: See, for example, Aberdein ( 2010 ) and Cohen ( 2009 ).

Footnote 4: Psychologists distinguish between two kinds of cognitive illusions: motivational (or “hot”) biases, on the one hand, which stem from the influence of emotions and interests on cognitive processes, and cognitive (or “cold”) biases, on the other hand, which stem from inferential errors due to cognitive malfunctioning (Kunda 1990 ; Nisbett 1993 ).

Footnote 5: Cf. Fisher ( 2011 , p. 4), Lau ( 2011 , p. 2), Siegel ( 1988 , p. 32).

Footnote 6: The “tools” metaphor can also be found in other approaches that stress the importance of non-cognitive (or extra-psychic) devices as means to promote rationality: Soll et al. ( 2015 ) refer to “debiasing tools”, Hogarth ( 2001 ) to “decision-making tools”, Elster ( 1989 ) to the “toolbox of mechanisms”, and Gigerenzer and Selten ( 2002 ) to the “adaptive toolbox”.

Footnote 7: See, for example, Kenyon and Beaulac ( 2014 ), Larrick ( 2004 ), Soll et al. ( 2015 ).

References

Aberdein A (2010) Virtue in argument. Argumentation 24(2):165–179

Ainslie G (2005) Précis of breakdown of will. Behav Brain Sci 28:635–673

Anderson C, Sechler E (1986) Effects of explanation and counterexplanation on the development and use of social theories. J Pers Soc Psychol 50:24–54

Arkes H (1981) Impediments to accurate clinical judgment and possible ways to minimize their impact. J Consult Clin Psychol 49:323–330

Arkes H (1991) Costs and benefits of judgment errors. Psychol Bull 110(3):486–498

Brest P, Krieger L (2010) Problem solving, decision making and professional judgment. Oxford University Press, Oxford

Budden A, Tregenza T, Aarssen L, Koricheva J, Leimu R, Lortie CJ (2008) Double-blind review favours increased representation of female authors. Trends Ecol Evol 23(1):4–6

Cohen J (1981) Can human irrationality be experimentally demonstrated? In: Adler J, Rips L (eds) Reasoning. Cambridge University Press, Cambridge

Cohen D (2009) Keeping an open mind and having a sense of proportion as virtues in argumentation. Cogency 1(2):49–64

Croskerry P, Singhal G, Mamede S (2013a) Cognitive debiasing 1: origins of bias and theory of debiasing. Qual Saf 22(2):58–64

Croskerry P, Singhal G, Mamede S (2013b) Cognitive debiasing 2: impediments to and strategies for change. Qual Saf 22(2):65–72

Davidson D (1985) Incoherence and irrationality. Dialectica 39(4):345–353

Dick Cheney’s Suite Demands (2006) Retrieved January 8, 2016, from http://www.thesmokinggun.com/file/dick-cheneys-suite-demands

Dunning D (2009) Disbelief and the neglect of environmental context. Behav Brain Sci 32:517–518

Elster J (1989) Nuts and bolts for the social sciences. Cambridge University Press, Cambridge

Elster J (2007) Explaining social behavior. Cambridge University Press, Cambridge

Engel P (ed) (2000) Believing and accepting. Kluwer, Dordrecht

Evans J (2007) Hypothetical thinking. Psychology Press, New York

Fischhoff B (1982) Debiasing. In: Kahneman D, Slovic P, Tversky A (eds) Judgment under uncertainty. Cambridge University Press, Cambridge

Fischhoff B (2002) Heuristics and biases in application. In: Gilovich T, Griffin D, Kahneman D (eds) Heuristics and biases. Cambridge University Press, Cambridge

Fisher A (2011) Critical thinking: an introduction. Cambridge University Press, Cambridge

Galinsky A, Moskowitz G, Gordon B (2000) Perspective taking. J Pers Soc Psychol 78(4):708–724

Gigerenzer G (2008) Rationality for mortals. Oxford University Press, Oxford

Gigerenzer G, Selten R (2002) Bounded rationality. MIT Press, Cambridge

Gigerenzer G, Todd P (2000) Précis of simple heuristics that make us smart. Behav Brain Sci 23:727–780

Hirt E, Markman K (1995) Multiple explanation: a consider-an-alternative strategy for debiasing judgments. J Pers Soc Psychol 69:1069–1086

Hogarth R (2001) Educating intuition. University of Chicago Press, Chicago

Johnson R, Blair A (2006) Logical self-defense. International Debate Education Association, New York

Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, New York

Kenyon T, Beaulac G (2014) Critical thinking education and debiasing. Informal Log 34(4):341–363

Kunda Z (1990) The case for motivated reasoning. Psychol Bull 108(3):480–498

Larrick R (2004) Debiasing. In: Koehler D, Harvey N (eds) The Blackwell handbook of judgment and decision making. Blackwell Publishing, Oxford

Lau J (2011) An introduction to critical thinking and creativity. Wiley, New Jersey

Lilienfeld S, Ammirati R, Landfield K (2009) Giving debiasing away. Perspect Psychol Sci 4(4):390–398

Lord G, Lepper R, Preston E (1984) Considering the opposite: a corrective strategy for social judgment. J Pers Soc Psychol 47:1231–1243

McKay R, Dennett D (2009) The evolution of misbelief. Behav Brain Sci 32:493–561

Mercier H, Sperber D (2011) Why do humans reason? Behav Brain Sci 34:57–111

Mussweiler T, Strack F, Pfeiffer T (2000) Overcoming the inevitable anchoring effect. Pers Soc Psychol Bull 26:1142–1150

Myers D (1975) Discussion-induced attitude-polarization. Hum Relat 28:699–714

Nisbett R (ed) (1993) Rules for reasoning. Erlbaum, Hillsdale

Oaksford M, Chater N (2009) Précis of Bayesian Rationality. Behav Brain Sci 32:69–120

Paluck E, Green D (2009) Prejudice reduction: what works? A review and assessment of research and practice. Annu Rev Psychol 60:339–367

Paul W (1986) Critical thinking in the strong sense and the role of argumentation in everyday life. In: Eemeren F, Grootendorst R, Blair A, Willard C (eds) Argumentation. Foris Publications, Dordrecht

Pelham B, Neter E (1995) The effect of motivation on judgment depends on the difficulty of the judgment. J Pers Soc Psychol 68(4):581–594

Pronin E, Lin D, Ross L (2002) The bias blind spot: perceptions of bias in self versus others. Pers Soc Psychol Bull 28:369–381

Rawls J (2000) Lectures on the history of political philosophy. Harvard University Press, Cambridge

Sanna L, Schwarz N, Stocker S (2002) When debiasing backfires. J Exp Psychol 28:497–502

Siegel H (1988) Educating reason. Routledge, New York

Soll J, Milkman K, Payne J (2015) Outsmart your own biases. Harv Bus Rev 93:65–71

Stanovich K (2005) The robot’s rebellion. The University of Chicago Press, Chicago

Stanovich K (2011) Rationality and the reflective mind. Oxford University Press, New York

Stanovich K, West R (2008) On the relative independence of thinking biases and cognitive ability. J Pers Soc Psychol 94:672–695

Stein E (1996) Without good reason. Clarendon Press, Oxford

Stich S (1990) The fragmentation of reason. MIT Press, Cambridge

Sunstein C (2003) Why societies need dissent. Harvard University Press, Cambridge

Sunstein C, Schkade D, Ellman L (2004) Ideological voting on federal courts of appeal. Va Law Rev 90(1):301–354

Taber C, Lodge M (2006) Motivated skepticism in the evaluation of political beliefs. Am J Polit Sci 50(3):755–769

Taylor S (1989) Positive illusions. Basic Books, New York

Taylor S, Brown J (1988) Illusion and well-being. Psychol Bull 103(2):193–210

Tetlock P (2002) Intuitive politicians, theologians, and prosecutors. In: Gilovich T, Griffin D, Kahneman D (eds) Heuristics and biases. Cambridge University Press, Cambridge

Tetlock P (2005) Expert political judgment. Princeton University Press, Princeton

Tetlock P, Boettger R (1989) Accountability. J Pers Soc Psychol 57:388–398

Thagard P (2011) Critical thinking and informal logic. Informal Log 31(3):152–170

Thaler R, Sunstein C (2008) Nudge. Yale University Press, New Haven

Tversky A, Kahneman D (2008) Extensional versus intuitive reasoning. In: Gilovich T, Griffin D, Kahneman D (eds) Heuristics and biases. Cambridge University Press, Cambridge

Willingham D (2007) Critical thinking: why is it so hard to teach? Am Educ 31(2):8–19

Wilson T, Brekke N (1994) Mental contamination and mental correction. Psychol Bull 116(1):117–142

Wilson T, Centerbar D, Brekke N (2002) Mental contamination and the debiasing problem. In: Gilovich T, Griffin D, Kahneman D (eds) Heuristics and biases. Cambridge University Press, Cambridge

Acknowledgments

I would like to thank the editor and two anonymous reviewers for their constructive comments. Work on this article was conducted under the grant SFRH/BPD/101744/2014 by the “Portuguese Foundation for Science and Technology” (FCT), as part of the project “Values in argumentative discourse” (PTDC/MHC-FIL/0521/2014).

Author information

Authors and Affiliations

ArgLab, IFILNOVA, Nova Institute of Philosophy, Universidade Nova de Lisboa, Av. De Berna 26, 4º piso, 1069-061, Lisbon, Portugal

Vasco Correia

Corresponding author

Correspondence to Vasco Correia.

About this article

Correia, V. Contextual Debiasing and Critical Thinking: Reasons for Optimism. Topoi 37 , 103–111 (2018). https://doi.org/10.1007/s11245-016-9388-x

Published: 26 April 2016

Issue Date: March 2018


  • Contextualism
  • Critical thinking
  • Rationality

BUS403: Negotiations and Conflict Management (2016.A.01)

Recognizing Stereotypes and Bias

We have learned that we all stereotype people. We have perhaps even realized that we ourselves are a stereotype to others. We have also learned that we all have biases, that we typically only recognize bias in others, and that some people may be biased for or against us based on a stereotype. We've also seen that stereotyping and biases may be hardwired into our brains, and therefore unavoidable. You might now be thinking, "but I want to judge people rationally, and not based on some silly bias I developed as a child", or "I want to understand the real value of the person across the table from me, rather than some shallow stereotypical judgment". How can we move beyond our own ingrained stereotypes and biases? We can do this by honing our critical thinking skills.

Critical thinking is the ability to think clearly and rationally. It includes the ability to engage in reflective and independent thinking. Someone with critical thinking skills can do the following:

  • understand the logical connections between ideas
  • identify, construct and evaluate arguments
  • detect inconsistencies and common mistakes in reasoning
  • solve problems systematically
  • identify the relevance and importance of ideas
  • reflect on the justification of one's own beliefs and values

Other definitions of critical thinking have been proposed and argued throughout history.

Critical thinking is not a matter of accumulating information. A person with a good memory who knows many facts is not necessarily good at critical thinking. A critical thinker can deduce consequences from what he knows, and he knows how to make use of information to solve problems and seek relevant sources of information to inform himself.

Critical thinking should not be confused with being argumentative or being critical of other people. Although critical thinking skills can expose fallacies and bad reasoning, critical thinking can also play an important role in cooperative reasoning and constructive tasks. Critical thinking can help us acquire knowledge, improve our theories, and strengthen arguments. We can use critical thinking to enhance work processes and improve social institutions.

Good critical thinking might be seen as the foundation of science and liberal democratic society. Science requires the critical use of reason in experimentation and theory confirmation. The proper functioning of a liberal democracy requires citizens who can think critically about social issues to inform their judgments about proper governance and to overcome biases and prejudice.

How to Develop the Critical Stance

The critical stance is the generalized ability and disposition to apply critical thinking to whatever you encounter: recognizing assumptions – your own and others' – applying that recognition to questioning information and situations, and considering their context.

1. Recognize assumptions

Each of us has a set of assumptions – ideas or attitudes or "facts" we take for granted – that underlies our thinking. Only when you're willing to look at these assumptions and realize how they color your conclusions can you examine situations, problems, or issues objectively.

Assumptions are based on many factors – physical, environmental, psychological, and experiential – that we automatically, and often unconsciously, bring to bear on anything we think about. One of the first steps in encouraging the critical stance is to try to make these factors conscious.

Sources of assumptions are numerous and overlapping, but the most important are:

  • Senses. The impact of the senses is so elemental that we sometimes react to it without realizing we're doing so. You may respond to a person based on smells you're barely aware of, for instance.
  • Experience. Each of us has a unique set of experiences, and they influence our responses to what we encounter. Ultimately, as critical thinkers, we have to understand both how past experience might limit our thinking in a situation and how we can use it to see things more clearly.
  • Values. Values are deeply held beliefs – often learned from families, schools, and peers – about how the world should be. These "givens" may be difficult even to recognize, let alone reject. It further complicates matters that values usually concern the core issues of our lives: personal and sexual relationships, morality, gender and social roles, race, social class, and the organization of society, to name just a few.
  • Emotion. Recognizing our emotional reactions is vital to keeping them from influencing our conclusions. Anger at child abusers may get in the way of our understanding the issue clearly, for example. We can't control whether emotions come up, but we can understand how we react to them.
  • Self-interest. Whether we like it or not, each of us sometimes injects what is best for ourselves into our decisions. We have to be aware when self-interest gets in the way of reason or looking at the other interests in the situation.
  • Culture. The culture we grew up in, the culture we've adopted, and the dominant culture in the society – all have their effects on us and push us into thinking in particular ways. Understanding how culture acts upon our and others' thinking makes it possible to look at a problem or issue in a different light.
  • History. Community history, the history of our organization or initiative, and our own history in dealing with particular problems and issues will affect the way we think about the current situation.
  • Religion. Our own religious backgrounds – whether we still practice religion or not – may be more powerful than we realize in influencing our thinking.
  • Biases. Very few of us, regardless of what we'd like to believe, are free of racial or ethnic prejudices of some sort, or of political, moral, and other biases that can come into play here.
  • Prior knowledge. What we know about a problem or issue from personal experience, secondhand accounts, or theory shapes our responses to it. However, we have to be sure that what we "know" is in fact true and relevant to the issue at hand.
  • Conventional wisdom. All of us have a large store of information "everybody knows" that we apply to new situations and problems. Unfortunately, the fact that everybody knows it doesn't make it right. Conventional wisdom is often too conventional: it usually reflects the simplest way of looking at things. We may need to step outside the conventions to look for new solutions.

This is often the case when people complain that "common sense" makes the solution to a problem obvious. Many people believe, for instance, that it is "common sense" that sex education courses for teens encourage them to have sex. The statistics show that, in fact, teens with adequate sexual information tend to be less sexually active than their uninformed counterparts.

2. Examine information for accuracy, assumptions, biases, or specific interests.

Some basic questions to examine information for accuracy, assumptions, biases, or specific interests are:

  • What's the source of the information? Knowing where the information originates can tell you a lot about what it's meant to make you believe.
  • Does the source generally produce accurate information?
  • What are the source's assumptions about the problem or issue? Does the source have a particular interest or belong to a particular group that will allow you to understand what it believes about the issue the information refers to?
  • Does the source have biases or purposes that would lead it to slant information in a particular way or to lie outright? Politicians and political campaigns often "spin" information so that it seems to favor them and their positions. People in the community may do the same or may "know" things that don't happen to be true.
  • Does anyone, in particular, stand to benefit or lose if the information is accepted or rejected? To whose advantage is it if the information is taken at face value?
  • Is the information complete? Are there important pieces missing? Does it tell you everything you need to know? Is it based on enough data to be accurate?

Making sure you have all the information can make a huge difference. Your information might be that a certain approach to this same issue worked well in a similar community. However, what you might not know or think to ask is whether there's a reason that the same approach wouldn't work in this community. If you investigated, you might find it had been tried and failed for reasons that would doom it again. You'd need all the information before you could reasonably address the issue.

  • Is the information logically consistent? Does it make sense? Do arguments actually prove what they pretend to prove? Learning how to sort out logical and powerful arguments from inconsistent or meaningless ones is perhaps the hardest task for learners. Some helpful strategies here might include mock debates, where participants have to devise arguments for the side they disagree with; analysis of TV news programs, particularly those like "Meet the Press", where political figures defend their positions; and after-the-fact discussions of community or personal situations.

Just about anyone can come up with an example that "proves" a particular point: There's a woman down the block who cheats on welfare, so it's obvious that most welfare recipients cheat. You can't trust members of that ethnic group, because one of them stole my wallet.

Neither of these examples "proves" anything, because each is based on only one instance, and there's no logical reason to assume it holds for a larger group. A former president was particularly fond of these kinds of "proofs", and as a result, often proposed simplistic solutions to complex social problems. Without information that's logically consistent and at least close to complete, you can't draw conclusions that will help you effectively address an issue.

  • Is the information clear? Do you understand what you're seeing?
  • Is the information relevant to the current situation? Information may be accurate, complete, logically consistent, powerful...and useless because it has nothing to do with what you're trying to deal with.

An AIDS prevention initiative, for instance, may find that a particular neighborhood has a large number of gay residents. However, if the HIV-positive rate in the gay community is nearly nonexistent, and the real AIDS problem in town is among IV drug users, the location of the gay community is irrelevant information.

  • Most important, is the information true? Outright lies and made-up "facts" are not uncommon in politics, community work, employment applications, and other situations. Knowing the source and its interests, understanding the situation, and being sensibly skeptical can protect learners from acting on false information.

Are You Aware of Your Biases?

  • Carmen Acton

Four ways to overcome your shortcomings and be a more inclusive leader.

Often, it’s easy to “call out” people when we notice their microaggressions or biased behaviors. But it can be far more challenging to recognize and acknowledge our own unconscious biases. That said, becoming aware of your shortcomings can help you hone your leadership style, especially when you’re a new manager.

  • The first step is to acknowledge that you have biases and educate yourself to do better. Ask yourself: Do I hold stereotypes or assumptions about a particular social group? As a manager, do I acknowledge and leverage differences on my team? Use your answers to help you unlearn your unconscious assumptions.
  • When someone calls out your unconscious biases, try not to get defensive. Rather, assume positive intent and use their feedback as an opportunity to learn.
  • Reach out to a diverse group of peers to understand how they perceive you, and seek continuous feedback. These peers can also become “accountability buddies” who help you stay on track when you decide to change your behaviors.
  • Embrace diverse perspectives. If your close circle “looks” just like you, it’s time to build a more diverse network. Join an employee resource group or look to connect with colleagues whose backgrounds are different from your own.


When I became a manager for the first time, I had a clear vision of my leadership style: I wanted to value my team and treat everyone with respect. Once I took charge, I learned that leadership wasn’t as simple as I’d first imagined it.

  • Carmen Acton, MBA, PCC, is a Leadership Impact Coach and Process Consultant in the San Francisco Bay Area, California. Carmen has worked in a succession of corporate leadership roles in a variety of disciplines ranging from Safety Engineering to Employee and Leadership Development. She has worked with clients in the oil and gas, food and beverage, technology, and health care sectors, to name a few. Her passion is helping clients elevate their leadership capabilities by sparking insights and actions that matter. She works with motivated, high-potential leaders to fully embrace humanity while elevating leadership and business performance in a complex world.

Kendall College of Art & Design

Critical Thinking & Evaluating Information


What is Bias?

Biases also play a role in how you approach all information. Cognitive biases come in many forms; one common overview distinguishes a dozen distinct types.

There are two forms of bias of particular importance in today's information-laden landscape: implicit bias and confirmation bias.

Implicit Bias & Confirmation Bias

Implicit / Unconscious Bias 

"Original definition (neutral) - Any personal preference, attitude, or expectation that unconsciously affects a person's outlook or behaviour.

Current definition (negative) - Unconscious favouritism towards or prejudice against people of a particular race, gender, or group that influences one's actions or perceptions; an instance of this."

"unconscious bias, n." OED Online, Oxford University Press, December 2020, www.oed.com/view/Entry/88686003 .

"Thoughts and feelings are “implicit” if we are unaware of them or mistaken about their nature. We have a bias when, rather than being neutral, we have a preference for (or aversion to) a person or group of people. Thus, we use the term “implicit bias” to describe when we have attitudes towards people or associate stereotypes with them without our conscious knowledge." 

https://perception.org/research/implicit-bias/

Confirmation Bias – "Originating in the field of psychology; the tendency to seek or favour new information which supports one’s existing theories or beliefs, while avoiding or rejecting that which disrupts them." 

This definition was added to the Oxford English Dictionary in 2019.

"confirmation, n." OED Online, Oxford University Press, December 2020, www.oed.com/view/Entry/38852. 

Simply put, confirmation bias is the tendency to seek out and/or interpret new information as confirmation of one's existing beliefs or theories, and to exclude contradictory or opposing information or points of view.

Put Bias in Check!

Now that you are aware of bias – your own personal biases and the bias that can be found in sources of information – you can put it in check. Approach information objectively and neutrally, and evaluate it critically. Numerous tools included in this course can help you do this, like the critical thinking cheat sheet in the previous module.


Ferris State University

Bias and Critical Thinking

Note: The German version of this entry can be found here: Bias and Critical Thinking (German)

Note: This entry revolves more generally around Bias in science. For more thoughts on Bias and its relation to statistics, please refer to the entry on Bias in statistics .

In short: This entry discusses why science is never objective, and what we can really know.

  • 1 What is bias?
  • 2 Design criteria
  • 3 Bias in gathering data, analysing data and interpreting data
  • 4 Bias and philosophy
  • 5 Critical Theory and Bias
  • 6 Further Information

What is bias?

"The very concept of objective truth is fading out of the world." - George Orwell

A bias is “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment” (Cambridge Dictionary). In other words, bias clouds our judgment, and often our actions, to the point that we act wrongly. We are all biased, because we are individuals with individual experiences, and because we are – or at least think we are – disconnected from other individuals and groups.

Recognising bias in research is highly relevant, because bias exposes the myth of the objectivity of research and enables a better recognition of, and reflection on, our flaws and errors. In addition, one could add that understanding bias in science is relevant beyond the empirical, since bias can also highlight flaws in our perceptions and actions as humans. To this end, acknowledging bias is understanding the limitations of oneself. Prominent examples are gender bias and racial bias, which are often rooted in our societies, and can be deeply buried in our subconscious. To be critical researchers, it is our responsibility to learn about the diverse biases we have, yet it is beyond this text to explore the subjective human bias we need to overcome. This much on the ethics of bias: many would argue that overcoming our biases requires the ability to learn and to question our privileges. Within research we need to recognise that science has been severely and continuously biased against ethnic minorities, women, and many other groups. Institutional and systemic bias are part of the current reality of the system, and we need to do our utmost to change this - there is a need for debiasing science, and our own actions. While it should not go unnoticed that institutions and systems have already changed to some extent, injustices and inequalities still exist. Most research is conducted in the global north, posing a neo-colonial problem that we are far from solving. Much of academia is still far away from having a diverse understanding of people, and systemic and institutional discrimination are part of our daily reality. We are on the path of a very long journey, and there is much to be done concerning bias in constructed institutions.

All this being said, let us shift our attention now to bias in empirical research. Here, we present three different perspectives in order to enable a more reflexive understanding of bias. The first is understanding how different forms of bias relate to the design criteria of scientific methods. The second is the question of which stage in the application of methods - data gathering, data analysis, and interpretation of results - is affected by which bias, and how. Finally, the third approach is to look at the three principal theories of Western philosophy - namely reason, social contract and utilitarianism - and to try to disentangle which of the three can be related to which bias. Many methods are influenced by bias, and recognising which bias affects which design criteria, research stage and principal philosophical theory in the application of a method can help to make empirical research more reflexive.


Design criteria

While qualitative research is often considered prone to many biases, it is also often more reflexive in recognising its limitations. Many qualitative methods are defined by a strong subjective component - i.e. that of the researcher - and clear documentation can thus help to make an existing bias more transparent. Many quantitative approaches have a reflexive canon that focuses on specific biases relevant for a specific approach, such as sampling bias or reporting bias. Yet such biases are often less thoroughly considered than in qualitative methods, since quantitative methods are still - falsely - considered to be more objective. While one could argue that the goal of reproducibility may lead to a better taming of a bias, this is not necessarily so, as the replication crisis in psychology clearly shows. Both quantitative and qualitative methods are potentially strongly affected by several cognitive biases, as well as by bias in academia in general, which includes for instance funding bias or the preference for open access articles. While none of this is surprising, it is all the harder to solve.

Another general differentiation can be made between inductive and deductive approaches. Many deductive approaches are affected by biases associated with sampling, while inductive approaches are more associated with bias during interpretation. Deductive approaches often build on designed experiments, whereas the strength of inductive approaches is that they are less bound by methodological designs - which can also make bias more hidden and thus harder to detect. This is why qualitative approaches often place a strong emphasis on concise documentation.
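
To make sampling bias concrete, the following small Python sketch (purely illustrative; the population, subgroup sizes and scores are hypothetical) compares a random sample with a convenience sample that over-represents an easily reached subgroup. The convenience sample yields a systematically distorted estimate of the population mean, which is exactly the kind of systematic error that careful sampling designs and clear documentation are meant to expose.

    import random

    random.seed(42)

    # Hypothetical population: two subgroups with different mean scores.
    group_a = [random.gauss(50, 10) for _ in range(8000)]   # 80% of the population
    group_b = [random.gauss(70, 10) for _ in range(2000)]   # 20% of the population
    population = group_a + group_b

    true_mean = sum(population) / len(population)

    # Random sample: every member has the same chance of selection.
    random_sample = random.sample(population, 200)

    # Convenience (biased) sample: group B is easier to reach,
    # so it makes up half of the sample instead of 20%.
    biased_sample = random.sample(group_a, 100) + random.sample(group_b, 100)

    print(f"True population mean:   {true_mean:.1f}")
    print(f"Random-sample estimate: {sum(random_sample) / len(random_sample):.1f}")
    print(f"Biased-sample estimate: {sum(biased_sample) / len(biased_sample):.1f}")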

The connection between spatial scales and bias is rather straightforward: an individual focus is related to cognitive bias, while system scales are more associated with prejudices, bias in academia and statistical bias. The impact of temporal bias is less explored; forecast bias is a prominent example when it comes to future predictions, and another error - applying our cultural views and values to past humans - has yet to be clearly named as a bias. What can clearly be said about both spatial and temporal scales is that we are often irrationally biased with regard to very distant entities - in space or time. We are, for instance, inclined to dismiss the importance of a distant future scenario, although it may follow much the same odds of becoming a reality as a near future. For example, almost everybody would rather win the lottery tomorrow than win the lottery in 20 years, irrespective of their chances of living to see it happen, or of the longer time they might then spend with the prize. Humans are peculiarly constructed beings, and we are notorious for acting irrationally. The same is true for spatial distance: we may care irrationally more for people who are close to us than for people who are very distant, even independent of shared experience (e.g. with friends) or shared history (e.g. with family). Again, this implies a bias which we can become aware of, but which first has to be named. No doubt current social developments will further increase our capacity to recognise our biases, as all these phenomena also affect scientists.
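
The lottery example can be restated with two standard discounting models from the decision-making literature; the sketch below is illustrative only (the models, rates and prize value are textbook assumptions, not something taken from this entry). Exponential discounting is time-consistent, while hyperbolic discounting captures the present bias described above, devaluing the distant prize far more steeply.

    # Illustrative comparison of two textbook discounting models.
    # The discount rates and the prize value are hypothetical.

    def exponential_discount(value, years, rate=0.05):
        """Time-consistent discounting: value / (1 + rate) ** years."""
        return value / (1 + rate) ** years

    def hyperbolic_discount(value, years, k=0.5):
        """Present-biased discounting: value / (1 + k * years)."""
        return value / (1 + k * years)

    prize = 1_000_000  # hypothetical lottery prize
    for years in (0, 1, 5, 20):
        print(f"{years:>2} years: "
              f"exponential {exponential_discount(prize, years):>12,.0f}  "
              f"hyperbolic {hyperbolic_discount(prize, years):>12,.0f}")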

The following table categorizes different types of Bias as indicated in the Wikipedia entry on Bias according to two levels of the Design Criteria of Methods .

Bias in gathering data, analysing data and interpreting data

The three steps of the application of a method are clearly worth investigating, as this allows us to disentangle at which stage we may introduce a bias into our application of a method. Gathering data is strongly associated with cognitive bias, but also with statistical bias and partly even with some biases in academia. Bias associated with sampling can be linked to a subjective perspective as well as to systematic errors rooted in previous results. This can also affect the analysis of data, yet here it has to be highlighted that quantitative methods are less affected by bias during analysis than qualitative methods. This is not a normative judgement, and it can clearly be countered by sound documentation of the analytical steps. We should nevertheless not forget that there are different assumptions about the steps of analysis even in such an established field as statistics, where different schools of thought constantly clash over the optimal approach to analysis, sometimes even with different results. This exemplifies that methodological analysis can be quite normative, underlining the need for a critical perspective. The same is true of qualitative methods, yet there it strongly depends on the specific method, as these methods are more diverse.

Concerning the interpretation of scientific results, the amount and diversity of biases is clearly the highest - or, in other words, the worst. While this is related to the cognitive biases we have as individuals, it is also related to prejudices, bias in academia and statistical bias. Overall, we need to recognise that some methods are less associated with certain biases because they are more established concerning the norms of their application, while other methods are newer and less tested by the academic community. When it comes to bias, there is at least a weak sense in which safety - although not diversity - comes in numbers. More, and more diverse, methods may offer new insights into biases, since one method may reveal a bias that another method cannot; methodological plurality may reduce bias. For a fully established method, the understanding of its biases is often greater, because it has been applied more often. This is especially, but not always, true for the analysis step, and in part also for some methodological designs concerned with sampling. Clear documentation is, however, key to making bias more visible across the three stages.

Bias and philosophy

The last and by far most complex point is the root theories associated with bias. Reason, social contract and utilitarianism are the three key theories of Western philosophy relevant for empiricism, and all biases can be associated with at least one of these three foundational theories. Many cognitive biases are linked to reason, or to unreasonable behaviour. Much of the bias relating to prejudices and society can be linked to the wide field of the social contract. Lastly, some bias is clearly associated with utilitarianism. Surprisingly, utilitarianism is associated with a comparatively low number of biases, yet it should be noted that the problem of causality within economic analysis is still up for debate: much of economic management is rooted in correlative understandings, which are often mistaken for clear-cut causal relations. Psychology also clearly illustrates that investigating a bias is different from unconsciously introducing a bias into your research. Consciousness of bias is the basis for its recognition: if you are not aware of a bias, you cannot take it into account in your knowledge production. While it may thus not seem directly helpful to associate empirical research and its biases with the three general foundational theories of philosophy - reason, social contract and utilitarianism - we should still take this into account, not least because it leads us to one of the most important developments of the 20th century: Critical Theory.

Critical Theory and Bias

Out of the growing empiricism of the Enlightenment there grew a concern which we have come to call Critical Theory. At the heart of Critical Theory is the focus on critiquing and changing society as a whole, in contrast to merely observing or explaining it. Originating with Marx, Critical Theory clearly distances itself from previous theories in philosophy - or those associated with the social - that try to understand or explain. By embedding society in its historical context (Horkheimer) and by focussing on a continuous and interchanging critique (Benjamin), Critical Theory is a first and bold step towards a more holistic perspective in science. Remembering the Greeks and also some Eastern thinkers, one could say it is the first step back towards holistic thinking. From a methodological perspective, Critical Theory is radical because it seeks to distinguish itself not only from previously existing philosophy but, more importantly, from the widely dominant empiricism and its societal as well as scientific consequences. A Critical Theory should thus be explanatory, practical and normative, and - what makes it more challenging - it needs to be all three things combined (Horkheimer). Through Habermas, Critical Theory became embedded in democracy, yet with a critical view of what we could understand as globalisation and its complex realities. The reflexive empowerment of the individual is as much a clear goal as one would expect, also because of the normative link to the political.

Critical Theory is thus a vital step towards a wider integration of diverse philosophies, but it is also essential from a methodological standpoint, since it allowed for the emergence of a true and holistic critique of everything empirical. While this may be seen as an attack, it can also be interpreted as a necessary step, since the arrogance and the claim to truth of empiricism can be interpreted as a deep danger, and not only to methods. Popper does not offer a true solution to positivism, even though he was very much hyped by many; his insight that the holy grail of knowledge can ultimately never be truly reached also generates certain problems. He can still be admired because he called for scientists to be radical, while acknowledging that most scientists are not. In addition, from a post-modernist perspective, Critical Theory can be seen as a necessary step to prevent an influence of empiricism that might pose a threat to and by humankind itself, be it through nuclear destruction, the unachievable and feeble goal of a growth economy (my wording), the naive and technocratic hoax of the eco-modernists (also my wording), or any other paradigm that is short-sighted or naive. In other words, we look at the postmodern.

Critical Theory is now developing connections to other facets of the discourse, and some may argue that its focus on the social sciences can be seen as critical in itself, or at least as a normative choice that is clearly anthropocentric, has a problematic relationship with the empirical, and has mixed relations with its diverse offspring, which includes gender research, the critique of globalisation, and many other normative domains that are increasingly explored today. Building on Popper's three worlds (the physical world, the mental world, and human knowledge), we should note another possibility, namely Critical Realism. Roy Bhaskar proposed three ontological domains (strata of knowledge): the real (everything there is), the actual (everything we can grasp), and the empirical (everything we can observe). During the last decades, humankind has unlocked ever more strata of knowledge, hence much of the actual has become empirical to us. We have to acknowledge that some strata of knowledge are hard to relate, or may even be unrelatable, which has consequences for our methodological understanding of the world. Some methods may unlock some strata of knowledge but not others; some may be specific, some vague; and some may only unlock new strata through novel combinations. What is most relevant here, however, is that we may look for causal links but need to remain critical, since new strata of knowledge may render them obsolete. Consequently, there are no universal laws that we can strive for, but instead endless strata to explore.

Coming back to bias, Critical Theory appears as an antidote to bias, and some may argue Critical Realism even more so, as it combines criticality with a certain humbleness necessary when exploring the empirical and the causal. The explanatory capacity allowed by Critical Realism might be good enough for the pragmatist, the practical may speak to the modern engagement of science with and for society, and the normative is aware of - well - all things normative, including the critical. Hence a door was opened to a new mode of science, focussing on the situatedness of research within the world. This surely had a head start with Kant, who opened the globe to the world of methods. There is, however, a critical link in Habermas, who highlighted the duality of the rational individual on a small scale and the role of global societies as part of the economy (Habermas 1987). This underlines a crucial link to the original three foundational theories in philosophy, albeit in a dramatic and focused interpretation of modernity. Habermas himself was well aware of the tensions between these two approaches - the critical and the empirical - yet we owe it to Critical Theory and its continuations that a practical and reflexive knowledge production can be conducted within deeply normative systems such as modern democracies.

Linking to the historical development of methods, we can thus clearly claim that Critical Theory (and Critical Realism) opened a new domain or mode of thinking, and its impact can be felt well beyond the social sciences and the philosophy it affected directly. Coming back to bias, however, an almost universal rejection of empiricism will not be followed here. Instead, we need to return to the three foundational theories of philosophy and acknowledge that reason, social contract and utilitarianism are the foundation of the first empirical disciplines that are at their core normative (e.g. psychology, social and political science, and economics). Since bias can be partly related to these three theories, and consequently to specific empirical disciplines, we need to recognise that there is an overarching methodological bias. This methodological bias has a signature rooted in specific design criteria, which are in turn related to specific disciplines. Consequently, this methodological bias is a disciplinary bias - even more so since methods may be shared among scientific disciplines, yet most disciplines claim either priority or superiority when it comes to the ownership of a method.

The disciplinary bias of modern science thus creates a deeply normative methodological bias, which some disciplines may try to take into account while others clearly do not. In other words, the dogmatic selection of methods within disciplines has the potential to create deep flaws in empirical research, and we need to be aware of and reflexive about this. The largest bias concerning methods is the choice of methods per se. A critical perspective is thus not only relevant from a perspective of societal responsibility, but equally from a view on the empirical. Clear documentation and reproducibility of research are important but limited stepping stones in a critique of the methodological: they cannot replace a critical perspective, only amend it. Empirical knowledge will only ever look at parts - or strata, according to Roy Bhaskar - of reality, yet philosophy can offer a generalisable perspective or theory, and Critical Theory, Critical Realism and other current developments in philosophy can be seen as striving towards an integrated and holistic philosophy of science, which may ultimately link to an overarching theory of ethics (Parfit). If the empirical and the critical inform us, then a philosophy of science and ethics together may tell us how we may act based on our perceptions of reality.

Further Information

  • Some words on Critical Theory
  • A short entry on critical realism

The author of this entry is Henrik von Wehrden.

  • Normativity of Methods



Humanities LibreTexts

2.2: Overcoming Cognitive Biases and Engaging in Critical Reflection


  • Nathan Smith et al.

Learning Objectives

By the end of this section, you will be able to:

  • Label the conditions that make critical thinking possible.
  • Classify and describe cognitive biases.
  • Apply critical reflection strategies to resist cognitive biases.

To resist the potential pitfalls of cognitive biases, we have taken some time to recognize why we fall prey to them. Now we need to understand how to resist easy, automatic, and error-prone thinking in favor of more reflective, critical thinking.

Critical Reflection and Metacognition

To promote good critical thinking, put yourself in a frame of mind that allows critical reflection. Recall from the previous section that rational thinking requires effort and takes longer. However, it will likely result in more accurate thinking and decision-making. As a result, reflective thought can be a valuable tool in correcting cognitive biases. The critical aspect of critical reflection involves a willingness to be skeptical of your own beliefs, your gut reactions, and your intuitions. Additionally, the critical aspect engages in a more analytic approach to the problem or situation you are considering. You should assess the facts, consider the evidence, try to employ logic, and resist the quick, immediate, and likely conclusion you want to draw. By reflecting critically on your own thinking, you can become aware of the natural tendency for your mind to slide into mental shortcuts.

This process of critical reflection is often called metacognition in the literature of pedagogy and psychology. Metacognition means thinking about thinking and involves the kind of self-awareness that engages higher-order thinking skills. Cognition, or the way we typically engage with the world around us, is first-order thinking, while metacognition is higher-order thinking. From a metacognitive frame, we can critically assess our thought process, become skeptical of our gut reactions and intuitions, and reconsider our cognitive tendencies and biases.

To improve metacognition and critical reflection, we need to encourage the kind of self-aware, conscious, and effortful attention that may feel unnatural and may be tiring. Typical activities associated with metacognition include checking, planning, selecting, inferring, self-interrogating, interpreting an ongoing experience, and making judgments about what one does and does not know (Hacker, Dunlosky, and Graesser 1998). By practicing metacognitive behaviors, you are preparing yourself to engage in the kind of rational, abstract thought that will be required for philosophy.

Good study habits, including managing your workspace, giving yourself plenty of time, and working through a checklist, can promote metacognition. When you feel stressed out or pressed for time, you are more likely to make quick decisions that lead to error. Stress and lack of time also discourage critical reflection because they rob your brain of the resources necessary to engage in rational, attention-filled thought. By contrast, when you relax and give yourself time to think through problems, you will be clearer, more thoughtful, and less likely to rush to the first conclusion that leaps to mind. Similarly, background noise, distracting activity, and interruptions will prevent you from paying attention. You can use this checklist to try to encourage metacognition when you study:

  • Check your work.
  • Plan ahead.
  • Select the most useful material.
  • Infer from your past grades to focus on what you need to study.
  • Ask yourself how well you understand the concepts.
  • Check your weaknesses.
  • Assess whether you are following the arguments and claims you are working on.

Cognitive Biases

In this section, we will examine some of the most common cognitive biases so that you can be aware of traps in thought that can lead you astray. Cognitive biases are closely related to informal fallacies. Both fallacies and biases provide examples of the ways we make errors in reasoning.

CONNECTIONS

See the chapter on logic and reasoning for an in-depth exploration of informal fallacies.

Watch the video to orient yourself before reading the text that follows.

Cognitive Biases 101, with Peter Bauman


Confirmation Bias

One of the most common cognitive biases is confirmation bias , which is the tendency to search for, interpret, favor, and recall information that confirms or supports your prior beliefs. Like all cognitive biases, confirmation bias serves an important function. For instance, one of the most reliable forms of confirmation bias is the belief in our shared reality. Suppose it is raining. When you first hear the patter of raindrops on your roof or window, you may think it is raining. You then look for additional signs to confirm your conclusion, and when you look out the window, you see rain falling and puddles of water accumulating. Most likely, you will not be looking for irrelevant or contradictory information. You will be looking for information that confirms your belief that it is raining. Thus, you can see how confirmation bias—based on the idea that the world does not change dramatically over time—is an important tool for navigating in our environment.

Unfortunately, as with most heuristics, we tend to apply this sort of thinking inappropriately. One example that has recently received a lot of attention is the way in which confirmation bias has increased political polarization. When searching for information on the internet about an event or topic, most people look for information that confirms their prior beliefs rather than what undercuts them. The pervasive presence of social media in our lives is exacerbating the effects of confirmation bias since the computer algorithms used by social media platforms steer people toward content that reinforces their current beliefs and predispositions. These multimedia tools are especially problematic when our beliefs are incorrect (for example, they contradict scientific knowledge) or antisocial (for example, they support violent or illegal behavior). Thus, social media and the internet have created a situation in which confirmation bias can be “turbocharged” in ways that are destructive for society.

Confirmation bias is a result of the brain’s limited ability to process information. Peter Wason (1960) conducted early experiments identifying this kind of bias. He asked subjects to identify the rule that applies to a sequence of numbers—for instance, 2, 4, 8. Subjects were told to generate examples to test their hypothesis. What he found is that once a subject settled on a particular hypothesis, they were much more likely to select examples that confirmed their hypothesis rather than negated it. As a result, they were unable to identify the real rule (any ascending sequence of numbers) and failed to “falsify” their initial assumptions. Falsification is an important tool in the scientist’s toolkit when they are testing hypotheses and is an effective way to avoid confirmation bias.
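
A minimal sketch shows why testing only confirming examples fails to falsify a hypothesis. The rule and the guessed hypothesis below are simplified stand-ins, not Wason's actual materials: every triple chosen because it fits the "doubling" hypothesis also satisfies the hidden "any ascending sequence" rule, so only a deliberately disconfirming test reveals the mismatch.

    def hidden_rule(triple):
        """The experimenter's actual rule: any ascending sequence."""
        a, b, c = triple
        return a < b < c

    def subject_hypothesis(triple):
        """A typical (wrong) guess: each number doubles the previous one."""
        a, b, c = triple
        return b == 2 * a and c == 2 * b

    # Positive tests: triples chosen BECAUSE they fit the hypothesis.
    confirming_tests = [(1, 2, 4), (3, 6, 12), (5, 10, 20)]

    # A negative test: a triple that violates the hypothesis on purpose.
    disconfirming_test = (1, 2, 3)

    for t in confirming_tests + [disconfirming_test]:
        # Confirming tests agree with the hidden rule, so the subject never
        # learns the hypothesis is too narrow; only the disconfirming test
        # exposes the difference between the guess and the rule.
        print(t, "rule:", hidden_rule(t), "| hypothesis:", subject_hypothesis(t))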

In philosophy, you will be presented with different arguments on issues, such as the nature of the mind or the best way to act in a given situation. You should take your time to reason through these issues carefully and consider alternative views. What you believe to be the case may be right, but you may also fall into the trap of confirmation bias, seeing confirming evidence as better and more convincing than evidence that calls your beliefs into question.

Anchoring Bias

Confirmation bias is closely related to another bias known as anchoring. Anchoring bias refers to our tendency to rely on initial values, prices, or quantities when estimating the actual value, price, or quantity of something. If you are presented with a quantity, even if that number is clearly arbitrary, you will have a hard time discounting it in your subsequent calculations; the initial value “anchors” subsequent estimates. For instance, Tversky and Kahneman (1974) reported an experiment in which subjects were asked to estimate the number of African nations in the United Nations. First, the experimenters spun a wheel of fortune in front of the subjects that produced a random number between 0 and 100. Let’s say the wheel landed on 79. Subjects were asked whether the number of nations was higher or lower than the random number. Subjects were then asked to estimate the real number of nations. Even though the initial anchoring value was random, people in the study found it difficult to deviate far from that number. For subjects receiving an initial value of 10, the median estimate of nations was 25, while for subjects receiving an initial value of 65, the median estimate was 45.

In the same paper, Tversky and Kahneman described the way that anchoring bias interferes with statistical reasoning. In a number of scenarios, subjects made irrational judgments about statistics because of the way the question was phrased (i.e., they were tricked when an anchor was inserted into the question). Instead of expending the cognitive energy needed to solve the statistical problem, subjects were much more likely to “go with their gut,” or think intuitively. That type of reasoning generates anchoring bias. When you do philosophy, you will be confronted with some formal and abstract problems that will challenge you to engage in thinking that feels difficult and unnatural. Resist the urge to latch on to the first thought that jumps into your head, and try to think the problem through with all the cognitive resources at your disposal.
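
One simple way to picture the effect is as a weighted blend of a person's unanchored estimate and the arbitrary anchor. The toy model below is purely illustrative - the weight and the unanchored value are hypothetical, and it is not a reconstruction of Tversky and Kahneman's data - but it shows how a random anchor pulls the reported estimate toward itself.

    # Toy model of anchoring (illustrative only; parameters are hypothetical).
    def anchored_estimate(unanchored_belief, anchor, anchor_weight=0.4):
        """Blend the prior belief with the anchor; the weight is hypothetical."""
        return (1 - anchor_weight) * unanchored_belief + anchor_weight * anchor

    unanchored_belief = 35  # hypothetical estimate made with no anchor at all

    for anchor in (10, 65):
        estimate = anchored_estimate(unanchored_belief, anchor)
        print(f"anchor {anchor:>2} -> estimate pulled to about {estimate:.0f}")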

Availability Heuristic

The availability heuristic refers to the tendency to evaluate new information based on the most recent or most easily recalled examples. The availability heuristic occurs when people take easily remembered instances as being more representative than they objectively are (i.e., based on statistical probabilities). In very simple situations, the availability of instances is a good guide to judgments. Suppose you are wondering whether you should plan for rain. It may make sense to anticipate rain if it has been raining a lot in the last few days since weather patterns tend to linger in most climates. More generally, scenarios that are well-known to us, dramatic, recent, or easy to imagine are more available for retrieval from memory. Therefore, if we easily remember an instance or scenario, we may incorrectly think that the chances are high that the scenario will be repeated. For instance, people in the United States estimate the probability of dying by violent crime or terrorism much more highly than they ought to. In fact, these are extremely rare occurrences compared to death by heart disease, cancer, or car accidents. But stories of violent crime and terrorism are prominent in the news media and fiction. Because these vivid stories are dramatic and easily recalled, we have a skewed view of how frequently violent crime occurs.

Another more loosely defined category of cognitive bias is the tendency for human beings to align themselves with groups with whom they share values and practices. The tendency toward tribalism is an evolutionary advantage for social creatures like human beings. By forming groups to share knowledge and distribute work, we are much more likely to survive. Not surprisingly, human beings with pro-social behaviors persist in the population at higher rates than human beings with antisocial tendencies. Pro-social behaviors, however, go beyond wanting to communicate and align ourselves with other human beings; we also tend to see outsiders as a threat. As a result, tribalistic tendencies both reinforce allegiances among in-group members and increase animosity toward out-group members.

Tribal thinking makes it hard for us to objectively evaluate information that either aligns with or contradicts the beliefs held by our group or tribe. This effect can be demonstrated even when in-group membership is not real or is based on some superficial feature of the person—for instance, the way they look or an article of clothing they are wearing. A related bias is called the bandwagon fallacy . The bandwagon fallacy can lead you to conclude that you ought to do something or believe something because many other people do or believe the same thing. While other people can provide guidance, they are not always reliable. Furthermore, just because many people believe something doesn’t make it true. Watch the video below to improve your “tribal literacy” and understand the dangers of this type of thinking.

The Dangers of Tribalism, Kevin deLaplante

Sunk Cost Fallacy

Sunk costs refer to the time, energy, money, or other costs that have been paid in the past. These costs are “sunk” because they cannot be recovered. The sunk cost fallacy is thinking that attaches a value to things in which you have already invested resources that is greater than the value those things have today. Human beings have a natural tendency to hang on to whatever they invest in and are loath to give something up even after it has been proven to be a liability. For example, a person may have sunk a lot of money into a business over time, and the business may clearly be failing. Nonetheless, the businessperson will be reluctant to close shop or sell the business because of the time, money, and emotional energy they have spent on the venture. This is the behavior of “throwing good money after bad” by continuing to irrationally invest in something that has lost its worth because of emotional attachment to the failed enterprise. People will engage in this kind of behavior in all kinds of situations and may continue a friendship, a job, or a marriage for the same reason—they don’t want to lose their investment even when they are clearly headed for failure and ought to cut their losses.

A similar type of faulty reasoning leads to the gambler’s fallacy , in which a person reasons that future chance events will be more likely if they have not happened recently. For instance, if I flip a coin many times in a row, I may get a string of heads. But even if I flip several heads in a row, that does not make it more likely I will flip tails on the next coin flip. Each coin flip is statistically independent, and there is an equal chance of turning up heads or tails. The gambler, like the reasoner from sunk costs, is tied to the past when they should be reasoning about the present and future.
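
The independence claim is easy to check directly. The short simulation below (a sketch using a fair simulated coin) looks at every point where five heads in a row have just occurred and counts how often the next flip is heads; the rate stays close to one half, no matter how long the streak.

    import random

    random.seed(0)

    # Simulate a long run of fair coin flips.
    flips = [random.choice("HT") for _ in range(1_000_000)]

    next_after_streak = []
    for i in range(5, len(flips)):
        if flips[i - 5:i] == ["H"] * 5:      # five heads in a row just occurred
            next_after_streak.append(flips[i])

    heads_rate = next_after_streak.count("H") / len(next_after_streak)
    print(f"Streaks of five heads observed: {len(next_after_streak)}")
    print(f"P(heads on the very next flip): {heads_rate:.3f}")   # close to 0.5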

There are important social and evolutionary purposes for past-looking thinking. Sunk-cost thinking keeps parents engaged in the growth and development of their children after they are born. Sunk-cost thinking builds loyalty and affection among friends and family. More generally, a commitment to sunk costs encourages us to engage in long-term projects, and this type of thinking has the evolutionary purpose of fostering culture and community. Nevertheless, it is important to periodically reevaluate our investments in both people and things.

In recent ethical scholarship, there is some debate about how to assess the sunk costs of moral decisions. Consider the case of war. Just-war theory dictates that wars may be justified in cases where the harm imposed on the adversary is proportional to the good gained by the act of defense or deterrence. It may be that, at the start of the war, those costs seemed proportional. But after the war has dragged on for some time, it may seem that the objective cannot be obtained without a greater quantity of harm than had been initially imagined. Should the evaluation of whether a war is justified estimate the total amount of harm done or prospective harm that will be done going forward (Lazar 2018)? Such questions do not have easy answers.

Table 2.1 summarizes these common cognitive biases.

Table 2.1 Common Cognitive Biases

Think Like A Philosopher

As we have seen, cognitive biases are built into the way human beings process information. They are common to us all, and it takes self-awareness and effort to overcome the tendency to fall back on biases. Consider a time when you have fallen prey to one of the five cognitive biases described above. What were the circumstances? Recall your thought process. Were you aware at the time that your thinking was misguided? What were the consequences of succumbing to that cognitive bias?

Write a short paragraph describing how that cognitive bias allowed you to make a decision you now realize was irrational. Then write a second paragraph describing how, with the benefit of time and distance, you would have thought differently about the incident that triggered the bias. Use the tools of critical reflection and metacognition to improve your approach to this situation. What might have been the consequences of behaving differently? Finally, write a short conclusion describing what lesson you take from reflecting back on this experience. Does it help you understand yourself better? Will you be able to act differently in the future? What steps can you take to avoid cognitive biases in your thinking today?


What Are Critical Thinking Skills and Why Are They Important?

Learn what critical thinking skills are, why they’re important, and how to develop and apply them in your workplace and everyday life.


We often use critical thinking skills without even realizing it. When you make a decision, such as which cereal to eat for breakfast, you're using critical thinking to determine the best option for you that day.

Critical thinking is like a muscle that can be exercised and built over time. It is a skill that can help propel your career to new heights. You'll be able to solve workplace issues, use trial and error to troubleshoot ideas, and more.

We'll take you through what it is and some examples so you can begin your journey in mastering this skill.

What is critical thinking?

Critical thinking is the ability to interpret, evaluate, and analyze facts and information that are available, to form a judgment or decide if something is right or wrong.

More than just being curious about the world around you, critical thinkers make connections between logical ideas to see the bigger picture. Building your critical thinking skills means being able to advocate your ideas and opinions, present them in a logical fashion, and make decisions for improvement.


Why is critical thinking important?

Critical thinking is useful in many areas of your life, including your career. It makes you a well-rounded individual, one who has looked at all of their options and possible solutions before making a choice.

According to the University of the People in California, having critical thinking skills is important because they are [1]:

  • Crucial for the economy
  • Essential for improving language and presentation skills
  • Very helpful in promoting creativity
  • Important for self-reflection
  • The basis of science and democracy

Critical thinking skills are used every day in a myriad of ways and can be applied to situations such as a CEO approaching a group project or a nurse deciding in which order to treat their patients.

Examples of common critical thinking skills

Critical thinking skills differ from individual to individual and are utilized in various ways. Examples of common critical thinking skills include:

Identification of biases: Identifying biases means knowing there are certain people or things that may have an unfair prejudice or influence on the situation at hand. Pointing out these biases helps to remove them from contention when it comes to solving the problem and allows you to see things from a different perspective.

Research: Researching details and facts allows you to be prepared when presenting your information to people. You’ll know exactly what you’re talking about due to the time you’ve spent with the subject material, and you’ll be well-spoken and know what questions to ask to gain more knowledge. When researching, always use credible sources and factual information.

Open-mindedness: Being open-minded when having a conversation or participating in a group activity is crucial to success. Dismissing someone else’s ideas before you’ve heard them will inhibit you from progressing to a solution, and will often create animosity. If you truly want to solve a problem, you need to be willing to hear everyone’s opinions and ideas, just as you want them to hear yours.

Analysis: Analyzing your research will lead to you having a better understanding of the things you’ve heard and read. As a true critical thinker, you’ll want to seek out the truth and get to the source of issues. It’s important to avoid taking things at face value and always dig deeper.

Problem-solving: Problem-solving is perhaps the most important skill that critical thinkers can possess. The ability to solve issues and bounce back from conflict is what helps you succeed, be a leader, and effect change. One way to properly solve problems is to first recognize there’s a problem that needs solving. By determining the issue at hand, you can then analyze it and come up with several potential solutions.

How to develop critical thinking skills

You can develop critical thinking skills every day if you approach problems in a logical manner. Here are a few ways you can start your path to improvement:

1. Ask questions.

Be inquisitive about everything. Maintain a neutral perspective and develop a natural curiosity, so you can ask questions that develop your understanding of the situation or task at hand. The more details, facts, and information you have, the better informed you are to make decisions.

2. Practice active listening.

Utilize active listening techniques, which are founded in empathy, to really listen to what the other person is saying. Critical thinking, in part, is the cognitive process of reading the situation: the words coming out of their mouth, their body language, their reactions to your own words. Then, you might paraphrase to clarify what they're saying, so both of you agree you're on the same page.

3. Develop your logic and reasoning.

This is perhaps a more abstract task that requires practice and long-term development. However, think of a schoolteacher assessing the classroom to determine how to energize the lesson. There are options such as playing a game, watching a video, or challenging the students with a reward system. Using logic, the teacher might decide that the reward system will take up too much time and is not an immediate fix, and that a video is not exactly relevant at this moment. So, the teacher decides to play a simple word association game.

Scenarios like this happen every day, so next time, you can be more aware of what will work and what won't. Over time, developing your logic and reasoning will strengthen your critical thinking skills.

Learn tips and tricks on how to become a better critical thinker and problem solver through online courses from notable educational institutions on Coursera. Start with Introduction to Logic and Critical Thinking from Duke University or Mindware: Critical Thinking for the Information Age from the University of Michigan.

Article sources

1. University of the People, “Why Is Critical Thinking Important? A Survival Guide,” https://www.uopeople.edu/blog/why-is-critical-thinking-important/. Accessed May 18, 2023.


Coursera staff.

Editorial Team




What is Critical Thinking?

What About Assumptions?

Assumptions are beliefs or ideas that are  believed to be true without proof or evidence and are used to support reasoning. This lack of verification can create bias when thinking critically. Like any human activity, the practice of critical thinking requires several basic assumptions to make sense. For people who don’t share these assumptions, the whole process can be experienced as confusing or nonsensical. Here is a partial list of assumptions that sometimes cause trouble for people new to critical thinking.


  • In CT, reasoning implies evaluation, both individual (“You should recycle your aluminum!”) and collective (“We should abolish the death penalty!”). Each statement can be supported by reasons, and the reasons can be evaluated as better or worse; this should not be confused with mere opinion or with statements of fact.
  • In CT, “Truth is what is so about something, the reality of the matter, as distinguished from what people wish were so, believe to be so, or assert to be so” (Ruggiero, 2015, p. 25).
  • When using critical thinking you should not contradict yourself. Contradictory statements, by definition, cannot all be true, and based on the definition of truth above, that means they cannot be partly true, or true for some people but not others.
  • Critical thinking requires judging other people’s opinions (along with our own!) – not in isolation, but in relation to each other.

Many people put the majority of their critical thinking energy into judging the thinking of those they disagree with (fast thinking). Our hope is that you have come to understand that thinking carefully about your own beliefs is worth more of your time, and that you have come to appreciate how vital people who do not share your ideas are to your own process of slow thinking.

Check Your Knowledge: Assumptions

Read the following statements and then determine the assumption.

The U.S. is overreacting to the growth of AI. Technology is meant to be utilized to its fullest.

“Eating healthy is important. Doctors and physical fitness advisors tell you about the advantages of health foods. Then why are these foods so expensive? Companies that sell these foods are raising prices for simple things such as fruits and vegetables….People want to be healthy but it seems that corporate America really doesn’t want to make that prospect cheap. You should avoid wasting money just to eat healthy; go buy cheap frozen vegetables at the grocery store.” (Browne & Keeley, 2018, p.56)

  • Taking an act or statement for granted (Merriam-Webster Online)
  • Judgements about good or bad, right or wrong
  • Evaluations for better or worse
  • A view or judgement
  • Something known or proven true
  • False statement

Critical Thinking in Academic Research - Second Edition Copyright © 2022 by Cindy Gruwell and Robin Ewing is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.



Critical Thinking, Intelligence, and Unsubstantiated Beliefs: An Integrative Review

Associated data.

This research did not involve collection of original data, and hence there are no new data to make available.

A review of the research shows that critical thinking is a more inclusive construct than intelligence, going beyond what general cognitive ability can account for. For instance, critical thinking can more completely account for many everyday outcomes, such as how thinkers reject false conspiracy theories, paranormal and pseudoscientific claims, psychological misconceptions, and other unsubstantiated claims. Deficiencies in the components of critical thinking (in specific reasoning skills, dispositions, and relevant knowledge) contribute to unsubstantiated belief endorsement in ways that go beyond what standardized intelligence tests test. Specifically, people who endorse unsubstantiated claims less tend to show better critical thinking skills, possess more relevant knowledge, and are more disposed to think critically. They tend to be more scientifically skeptical and possess a more rational–analytic cognitive style, while those who accept unsubstantiated claims more tend to be more cynical and adopt a more intuitive–experiential cognitive style. These findings suggest that for a fuller understanding of unsubstantiated beliefs, researchers and instructors should also assess specific reasoning skills, relevant knowledge, and dispositions which go beyond what intelligence tests test.

1. Introduction

Why do some people believe implausible claims, such as the QAnon conspiracy theory, that a cabal of liberals is kidnapping and trafficking many thousands of children each year, despite the lack of any credible supporting evidence? Are believers less intelligent than non-believers? Do they lack knowledge of such matters? Are they more gullible or less skeptical than non-believers? Or, more generally, are they failing to think critically?

Understanding the factors contributing to acceptance of unsubstantiated claims is important, not only to the development of theories of intelligence and critical thinking but also because many unsubstantiated beliefs are false, and some are even dangerous. Endorsing them can have a negative impact on an individual and society at large. For example, false beliefs about the COVID-19 pandemic, such as believing that 5G cell towers induced the spread of the COVID-19 virus, led some British citizens to set fire to 5G towers ( Jolley and Paterson 2020 ). Other believers in COVID-19 conspiracy theories endangered their own and their children’s lives when they refused to socially distance and be vaccinated with highly effective vaccines, despite the admonitions of scientific experts ( Bierwiaczonek et al. 2020 ). Further endangering the population at large, those who believe the false conspiracy theory that human-caused global warming is a hoax likely fail to respond adaptively to this serious global threat ( van der Linden 2015 ). Parents, who uncritically accept pseudoscientific claims, such as the false belief that facilitated communication is an effective treatment for childhood autism, may forego more effective treatments ( Lilienfeld 2007 ). Moreover, people in various parts of the world still persecute other people whom they believe are witches possessing supernatural powers. Likewise, many people still believe in demonic possession, which has been associated with mental disorders ( Nie and Olson 2016 ). Compounding the problems created by these various unsubstantiated beliefs, numerous studies now show that when someone accepts one of these types of unfounded claims, they tend to accept others as well; see Bensley et al. ( 2022 ) for a review.

Studying the factors that contribute to unfounded beliefs is important not only because of their real-world consequences but also because this can facilitate a better understanding of unfounded beliefs and how they are related to critical thinking and intelligence. This article focuses on important ways in which critical thinking and intelligence differ, especially in terms of how a comprehensive model of CT differs from the view of intelligence as general cognitive ability. I argue that this model of CT more fully accounts for how people can accurately decide if a claim is unsubstantiated than can views of intelligence, emphasizing general cognitive ability. In addition to general cognitive ability, thinking critically about unsubstantiated claims involves deployment of specific reasoning skills, dispositions related to CT, and specific knowledge, which go beyond the contribution of general cognitive ability.

Accordingly, this article begins with an examination of the constructs of critical thinking and intelligence. Then, it discusses theories proposing that understanding thinking in the real world requires going beyond general cognitive ability. Specifically, the focus is on factors related to critical thinking, such as specific reasoning skills, dispositions, metacognition, and relevant knowledge. I review research showing that this alternative multidimensional view of CT can better account for individual differences in the tendency to endorse multiple types of unsubstantiated claims than can general cognitive ability alone.

2. Defining Critical Thinking and Intelligence

Critical thinking is an almost universally valued educational objective in the US and in many other countries which seek to improve it. In contrast, intelligence, although much valued, has often been viewed as a more stable characteristic and less amenable to improvement through specific short-term interventions, such as traditional instruction or more recently through practice on computer-implemented training programs. According to Wechsler’s influential definition, intelligence is a person’s “aggregate or global capacity to act purposefully, to think rationally, and to deal effectively with his environment” ( Wechsler 1944, p. 3 ).

Consistent with this definition, intelligence has long been associated with general cognitive or intellectual ability and the potential to learn and reason well. Intelligence (IQ) tests measure general cognitive abilities, such as knowledge of words, memory skills, analogical reasoning, speed of processing, and the ability to solve verbal and spatial problems. General intelligence or “g” is a composite of these abilities statistically derived from various cognitive subtests on IQ tests which are positively intercorrelated. There is considerable overlap between g and the concept of fluid intelligence (Gf) in the prominent Cattell–Horn–Carroll model ( McGrew 2009 ), which refers to “the ability to solve novel problems, the solution of which does not depend on previously acquired skills and knowledge,” and crystalized intelligence (Gc), which refers to experience, existing skills, and general knowledge ( Conway and Kovacs 2018, pp. 50–51 ). Although g or general intelligence is based on a higher order factor, inclusive of fluid and crystallized intelligence, it is technically not the same as general cognitive ability, a commonly used, related term. However, in this article, I use “general cognitive ability” and “cognitive ability” because they are the imprecise terms frequently used in the research reviewed.

Although IQ scores have been found to predict performance in basic real-world domains, such as academic performance and job success ( Gottfredson 2004 ), an enduring question for intelligence researchers has been whether g and intelligence tests predict the ability to adapt well in other real-world situations, which concerns the second part of Wechsler’s definition. So, in addition to the search for the underlying structure of intelligence, researchers have been perennially concerned with how general abilities associated with intelligence can be applied to help a person adapt to real-world situations. The issue is largely a question of how cognitive ability and intelligence can help people solve real-world problems and cope adaptively and succeed in dealing with various environmental demands ( Sternberg 2019 ).

Based on broad conceptual definitions of intelligence and critical thinking, both intelligence and CT should aid adaptive functioning in the real world, presumably because they both involve rational approaches. Their common association with rationality gives each term a positive connotation. However, complicating the definition of each of these is the fact that rationality also continues to have a variety of meanings. In this article, in agreement with Stanovich et al. ( 2018 ), rationality is defined in the normative sense, used in cognitive science, as the distance between a person’s response and some normative standard of optimal behavior. As such, degree of rationality falls on a continuous scale, not a categorical one.

Despite disagreements surrounding the conceptual definitions of intelligence, critical thinking, and rationality, a commonality in these terms is they are value-laden and normative. In the case of intelligence, people are judged based on norms from standardized intelligence tests, especially in academic settings. Although scores on CT tests seldom are, nor could be, used to judge individuals in this way, the normative and value-laden basis of CT is apparent in people’s informal judgements. They often judge others who have made poor decisions to be irrational or to have failed to think critically.

This value-laden aspect of CT is also apparent in formal definitions of CT. Halpern and Dunn ( 2021 ) defined critical thinking as “the use of those cognitive skills or strategies that increase the probability of a desirable outcome. It is used to describe thinking that is purposeful, reasoned, and goal-directed.” The positive conception of CT as helping a person adapt well to one’s environment is clearly implied in “desirable outcome”.

Robert Ennis ( 1987 ) has offered a simpler, yet useful definition of critical thinking that also has normative implications. According to Ennis, “critical thinking is reasonable, reflective thinking focused on deciding what to believe or do” ( Ennis 1987, p. 102 ). This definition implies that CT helps people know what to believe (a goal of epistemic rationality) and how to act (a goal of instrumental rationality). This is conveyed by associating “critical thinking” with the positive terms, “reasonable” and “reflective”. Dictionaries commonly define “reasonable” as “rational”, “logical”, “intelligent”, and “good”, all terms with positive connotations.

For critical thinkers, being reasonable involves using logical rules, standards of evidence, and other criteria that must be met for a product of thinking to be considered good. Critical thinkers use these to evaluate how strongly reasons or evidence supports one claim versus another, drawing conclusions which are supported by the highest quality evidence ( Bensley 2018 ). If no high-quality evidence is available for consideration, it would be unreasonable to draw a strong conclusion. Unfortunately, people’s beliefs are too often based on acceptance of unsubstantiated claims. This is a failure of CT, but is it also a failure of intelligence?

3. Does Critical Thinking “Go Beyond” What Is Meant by Intelligence?

Despite the conceptual overlap in intelligence and CT at a general level, one way that CT can be distinguished from the common view of intelligence as general cognitive ability is in terms of what each can account for. Although intelligence tests, especially measures of general cognitive ability, have reliably predicted academic and job performance, they may not be sufficient to predict other everyday outcomes for which CT measures have made successful predictions and have added to the variance accounted for in performance. For instance, replicating a study by Butler ( 2012 ), Butler et al. ( 2017 ) obtained a negative correlation ( r = −0.33) between scores on the Halpern Critical Thinking Appraisal (HCTA) and a measure of 134 negative, real-world outcomes, not expected to befall critical thinkers, such as engaging in unprotected sex or posting a message on social media which the person regretted. They found that higher HCTA scores not only predicted better life decisions, but also predicted better performance beyond a measure of general cognitive ability. These results suggest that CT can account for real-world outcomes and goes beyond general cognitive ability to account for additional variance.
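
The phrase "accounts for additional variance" corresponds to an incremental-validity (hierarchical regression) analysis. The sketch below uses simulated placeholder data rather than the actual HCTA dataset, and all coefficients are hypothetical; it only illustrates the shape of such an analysis: fit a model with cognitive ability alone, add the critical thinking score, and compare the variance explained.

    # Incremental-validity sketch on simulated placeholder data
    # (not the Butler et al. dataset; all values are hypothetical).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300

    cognitive_ability = rng.normal(size=n)
    ct_score = 0.5 * cognitive_ability + rng.normal(size=n)      # related but distinct
    negative_outcomes = -0.3 * cognitive_ability - 0.4 * ct_score + rng.normal(size=n)

    # Step 1: cognitive ability alone.
    m1 = sm.OLS(negative_outcomes, sm.add_constant(cognitive_ability)).fit()

    # Step 2: cognitive ability plus the critical thinking score.
    X2 = sm.add_constant(np.column_stack([cognitive_ability, ct_score]))
    m2 = sm.OLS(negative_outcomes, X2).fit()

    print(f"R^2, cognitive ability only:         {m1.rsquared:.3f}")
    print(f"R^2, adding critical thinking score: {m2.rsquared:.3f}")
    print(f"Incremental variance accounted for:  {m2.rsquared - m1.rsquared:.3f}")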

Some theorists maintain that standardized intelligence tests do not capture the variety of abilities that people need to adapt well in the real world. For example, Gardner ( 1999 ) has proposed that additional forms of intelligence are needed, such as spatial, musical, and interpersonal intelligences, in addition to the linguistic and logical–mathematical intelligences more typically associated with general cognitive ability and academic success. In other theorizing, Sternberg ( 1988 ) has proposed three additional types of intelligence: analytical, practical, and creative intelligence, to more fully capture the variety of intelligent abilities on which people differ. Critical thinking is considered part of analytical skills, which involve evaluating the quality and applicability of ideas, products, and options ( Sternberg 2022 ). Regarding adaptive intelligence, Sternberg ( 2019 ) has emphasized how adaptive aspects of intelligence are needed to solve real-world problems both at the individual and species levels. According to Sternberg, core components of intelligence have evolved in humans, but intelligence takes different forms in different cultures, with each culture valuing its own skills for adaptation. Thus, the construct of intelligence must go beyond core cognitive ability to encompass the specific abilities needed for adaptive behavior in specific cultures and settings.

Two other theories propose additional components of intelligent and rational thinking. Ackerman ( 2022 ) has emphasized the importance of acquiring domain-specific knowledge for engaging in intelligent functioning in the wide variety of tasks found in everyday life. Ackerman has argued that declarative, procedural, and tacit knowledge, as well as non-ability variables, are needed to better predict job performance and performance of other everyday activities. Taking another approach, Halpern and Dunn ( 2021 ) have proposed that critical thinking is essentially the adaptive application of intelligence for solving real-world problems. Elsewhere, Butler and Halpern ( 2019 ) have argued that dispositions such as open-mindedness are another aspect of CT and that domain-specific knowledge and specific CT skills are needed to solve real-world problems.

Examples are readily available for how CT goes beyond what IQ tests test to include specific rules for reasoning and relevant knowledge needed to execute real-world tasks. Take the example of scientific reasoning, which can be viewed as a specialized form of CT. Drawing a well-reasoned inductive conclusion about a theory or analyzing the quality of a research study both require that a thinker possess relevant specialized knowledge related to the question and specific skills for reasoning about scientific methodology. In contrast, IQ tests are deliberately designed to be nonspecialized in assessing Gc, broadly sampling vocabulary and general knowledge in order to be fair and unbiased ( Stanovich 2009 ). Specialized knowledge and reasoning skills are also needed in non-academic domains. Jurors must possess specialized knowledge to understand expert forensic testimony and specific reasoning skills to interpret the law and make well-reasoned judgments about a defendant’s guilt or innocence.

Besides lacking specific reasoning skills and domain-relevant knowledge, people may fail to think critically because they are not disposed to use their reasoning skills to examine such claims and want to preserve their favored beliefs. Critical thinking dispositions are attitudes or traits that make it more likely that a person will think critically. Theorists have proposed numerous CT dispositions (e.g., Bensley 2018 ; Butler and Halpern 2019 ; Dwyer 2017 ; Ennis 1987 ). Some commonly identified CT dispositions especially relevant to this discussion are open-mindedness, skepticism, intellectual engagement, and the tendency to take a reflective, rational–analytic approach. Critical thinking dispositions are clearly value-laden and prescriptive. A good thinker should be open-minded, skeptical, reflective, intellectually engaged, and value a rational–analytic approach to inquiry. Conversely, corresponding negative dispositions, such as “close-mindedness” and “gullibility”, could obstruct CT.

Without the appropriate disposition, individuals will not use their reasoning skills to think critically about questions. For example, the brilliant mystery writer, Sir Arthur Conan Doyle, who was trained as a physician and created the hyper-reasonable detective Sherlock Holmes, was not disposed to think critically about some unsubstantiated claims. Conan Doyle was no doubt highly intelligent in cognitive ability terms, but he was not sufficiently skeptical (disposed to think critically) about spiritualism. He believed that he was talking to his dearly departed son through a medium, despite the warnings of his magician friend, Harry Houdini, who told him that mediums used trickery in their seances. Perhaps influenced by his Irish father’s belief in the “wee folk”, Conan Doyle also believed that fairies inhabited the English countryside, based on children’s photos, despite the advice of experts who said the photos could be faked. Nevertheless, he was skeptical of a new theory of tuberculosis proposed by Koch when he reported on it, despite his wife suffering from the disease. So, in professional capacities, Conan Doyle used his CT skills, but in certain other domains for which he was motivated to accept unsubstantiated claims, he failed to think critically, being insufficiently disposed to skeptically challenge implausible claims.

This example makes two important points. First, Conan Doyle’s superior intelligence was not enough for him to reject implausible claims about the world; in general, motivated reasoning can lead people, even those considered highly intelligent, to accept claims with no good evidentiary support. Second, we could not adequately explain cases like this one by considering only the person’s intelligence, or even their reasoning skills, without also considering the person’s disposition. General cognitive ability alone is not sufficient, and CT dispositions should also be considered.

Supporting this conclusion, Stanovich and West ( 1997 ) examined the influence of dispositions beyond the contribution of cognitive ability on a CT task. They gave college students an argument evaluation test in which participants first rated their agreement with several claims about real social and political issues made by a fictitious person. Then, they gave them evidence against each claim and finally asked them to rate the quality of a counterargument made by the same fictitious person. Participants’ ratings of the counterarguments were compared to the median ratings of expert judges on the quality of the rebuttals. Stanovich and West also administered a new measure of rational disposition called the Actively Open-minded Thinking (AOT) scale and the SAT as a proxy for cognitive ability. The AOT was a composite of items from several other scales that would be expected to measure CT disposition. They found that both SAT and AOT scores were significant predictors of higher argument analysis scores. Even after partialing out cognitive ability, actively open-minded thinking was significant. These results suggest that general cognitive ability alone was not sufficient to account for thinking critically about real-world issues and that CT disposition was needed to go beyond it.
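The incremental-validity logic used in this study, and in several studies discussed later, can be made concrete with a small simulation. The sketch below is illustrative only: the data are simulated and the variable names (sat, aot, argument_eval) are hypothetical placeholders, not the authors' materials or analysis code. It simply shows how the change in R² is computed when a disposition measure is entered after a cognitive-ability proxy.

```python
# Minimal sketch of the hierarchical-regression (incremental validity) logic.
# All data are simulated; variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulate a cognitive-ability proxy, a disposition score correlated with it,
# and an argument-evaluation criterion that depends on both.
sat = rng.normal(0, 1, n)                      # cognitive ability proxy (z-scored)
aot = 0.4 * sat + rng.normal(0, 1, n)          # actively open-minded thinking
argument_eval = 0.3 * sat + 0.3 * aot + rng.normal(0, 1, n)

df = pd.DataFrame({"sat": sat, "aot": aot, "argument_eval": argument_eval})

# Step 1: criterion regressed on cognitive ability alone.
step1 = smf.ols("argument_eval ~ sat", data=df).fit()

# Step 2: add the disposition measure and examine the change in R-squared.
step2 = smf.ols("argument_eval ~ sat + aot", data=df).fit()

delta_r2 = step2.rsquared - step1.rsquared
print(f"R2 (ability only): {step1.rsquared:.3f}")
print(f"R2 (ability + disposition): {step2.rsquared:.3f}")
print(f"Incremental R2 for the disposition: {delta_r2:.3f}")
# Coefficient table: is the disposition significant after partialing out ability?
print(step2.summary().tables[1])
```

A non-trivial increase in R² at the second step, together with a significant coefficient for the disposition measure after ability is controlled, is the general pattern the studies above report.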

Further examining the roles of CT dispositions and cognitive ability on reasoning, Stanovich and West ( 2008 ) studied myside bias, a bias in reasoning closely related to one-sided thinking and confirmation bias. A critical thinker would be expected to not show myside bias and instead fairly evaluate evidence on all sides of a question. Stanovich and West ( 2007 ) found that college students often showed myside bias when asked their opinions about real-world policy issues, such as those concerning the health risks of smoking and drinking alcohol. For example, compared to non-smokers, smokers judged the health risks of smoking to be lower. When they divided participants into higher versus lower cognitive ability groups based on SAT scores, the two groups showed little difference on myside bias. Moreover, on the hazards of drinking issue, participants who drank less had higher scores on the CT disposition measure.

Other research supports the need for both reasoning ability and CT disposition in predicting outcomes in the real world. Ren et al. ( 2020 ) found that CT disposition, as measured by a Chinese critical thinking disposition inventory, and a CT skill measure together contributed a significant amount of the variance in predicting academic performance beyond the contribution of cognitive ability alone, as measured by a test of fluid intelligence. Further supporting the claim that CT requires both cognitive ability and CT disposition, Ku and Ho ( 2010 ) found that a CT disposition measure significantly predicted scores on a CT test beyond the significant contribution of verbal intelligence in high school and college students from Hong Kong.

The contribution of dispositions to thinking is related to another way that CT goes beyond the application of general cognitive ability, i.e., by way of the motivation for reasoning. Assuming that all reasoning is motivated ( Kunda 1990 ), then CT is motivated, too, which is implicit within the Halpern and Dunn ( 2021 ) and Ennis ( 1987 ) definitions. Critical thinking is motivated in the sense of being purposeful and directed towards the goal of arriving at an accurate conclusion. For instance, corresponding to pursuit of the goal of accurate reasoning, the CT disposition of “truth-seeking” guides a person towards reaching the CT goal of arriving at an accurate conclusion.

Also, according to Kunda ( 1990 ), a second type of motivated reasoning can lead to faulty conclusions, often by directing a person towards the goal of maintaining favored beliefs and preconceptions, as in illusory correlation, belief perseverance, and confirmation bias. Corresponding to this second type, negative dispositions, such as close-mindedness and self-serving motives, can incline thinkers towards faulty conclusions. This is especially relevant in the present discussion because poorer reasoning, thinking errors, and the inappropriate use of heuristics are related to the endorsement of unsubstantiated claims, all of which are CT failures. The term “thinking errors” is a generic term referring to logical fallacies, informal reasoning fallacies, argumentation errors, and inappropriate uses of cognitive heuristics ( Bensley 2018 ). Heuristics are cognitive shortcuts, commonly used to simplify judgment tasks and reduce mental effort. Yet, when used inappropriately, heuristics often result in biased judgments.

Stanovich ( 2009 ) has argued that IQ tests do not test people’s use of heuristics, yet reliance on heuristics has been found to be negatively correlated with CT performance ( West et al. 2008 ). In the same study, West et al. found that college students’ cognitive ability, as measured by SAT performance, was not correlated with the thinking biases associated with heuristic use. Although Stanovich and West ( 2008 ) found that susceptibility to biases such as the conjunction fallacy, framing effects, base-rate neglect, affect bias, and myside bias was uncorrelated with cognitive ability (again using the SAT as a proxy), other types of thinking errors were correlated with SAT scores.

Likewise, two types of knowledge are related to the two forms of motivated reasoning. For instance, inaccurate knowledge, such as misconceptions, can derail reasoning from moving towards a correct conclusion, as when a person reasons from false premises. In contrast, reasoning from accurate knowledge is more likely to produce an accurate conclusion. Taking into account inaccurate knowledge and thinking errors is important to understanding the endorsement of unsubstantiated claims because these are also related to negative dispositions, such as close-mindedness and cynicism, none of which are measured by intelligence tests.

Critical thinking questions are often situated in real-world examples or in simulations of them which are designed to detect thinking errors and bias. As described in Halpern and Butler ( 2018 ), an item like one on the “Halpern Critical Thinking Assessment” (HCTA) provides respondents with a mock newspaper story about research showing that first-graders who attended preschool were better able to learn how to read. Then the question asks if preschool should be made mandatory. A correct response to this item requires recognizing that correlation does not imply causation, that is, avoiding a common reasoning error people make in thinking about research implications in everyday life. Another CT skills test, “Analyzing Psychological Statements” (APS), assesses the ability to recognize thinking errors and to apply argumentation skills and psychological knowledge to evaluate psychology-related examples and simulations of real-life situations ( Bensley 2021 ). For instance, besides identifying thinking errors in brief samples of thinking, questions ask respondents to distinguish arguments from non-arguments, find assumptions in arguments, evaluate kinds of evidence, and draw a conclusion from a brief psychological argument. An important implication of the studies just reviewed is that efforts to understand CT can be further informed by assessing thinking errors and biases, which, as the next discussion shows, are related to individual differences in thinking dispositions and cognitive style.

4. Dual-Process Theory Measures and Unsubstantiated Beliefs

Dual-process theory (DPT) and measures associated with it have been widely used in the study of the endorsement of unsubstantiated beliefs, especially as they relate to cognitive style. According to a cognitive style version of DPT, people have two modes of processing, a fast intuitive–experiential (I-E) style of processing and a slower, reflective, rational–analytic (R-A) style of processing. The intuitive cognitive style is associated with reliance on hunches, feelings, personal experience, and cognitive heuristics which simplify processing, while the R-A cognitive style is associated with more elaborate and effortful processing ( Bensley et al. 2022 ; Epstein 2008 ). As such, the rational–analytic cognitive style is consistent with CT dispositions, such as those promoting the effortful analysis of evidence, objective truth, and logical consistency. In fact, CT is sometimes referred to as “critical-analytic” thinking ( Byrnes and Dunbar 2014 ) and has been associated with analytical intelligence ( Sternberg 1988 ) and with rational thinking, as discussed before.

People use both modes of processing, but they show individual differences in which mode they tend to rely upon, although the intuitive–experiential mode is the default ( Bensley et al. 2022 ; Morgan 2016 ; Pacini and Epstein 1999 ), and they accept unsubstantiated claims differentially based on their predominant cognitive style ( Bensley et al. 2022 ; Epstein 2008 ). Specifically, individuals who rely more on an I-E cognitive style tend to endorse unsubstantiated claims more strongly, while individuals who rely more on an R-A cognitive style tend to endorse those claims less. Note, however, that other theorists view the two processes and cognitive styles somewhat differently (e.g., Kahneman 2011 ; Stanovich et al. 2018 ).

Researchers have often assessed the contribution of these two cognitive styles to endorsement of unsubstantiated claims, using variants of three measures: the Cognitive Reflection Test (CRT) of Frederick ( 2005 ), the Rational–Experiential Inventory of Epstein and his colleagues ( Pacini and Epstein 1999 ), and the related Need for Cognition scale of Cacioppo and Petty ( 1982 ). The CRT is a performance-based test which asks participants to solve problems that appear to require simple mathematical calculations, but which actually require more reflection. People typically do poorly on the CRT, which is thought to indicate reliance on an intuitive cognitive style, while better performance is thought to indicate reliance on the slower, more deliberate, and reflective cognitive style. The positive correlation of the CRT with numeracy scores suggests it also has a cognitive skill component ( Patel et al. 2019 ). The Rational–Experiential Inventory (REI) of Pacini and Epstein ( 1999 ) contains one scale designed to measure an intuitive–experiential cognitive style and a second scale intended to measure a rational–analytic (R-A) style. The R-A scale was adapted from the Need for Cognition (NFC) scale of Cacioppo and Petty ( 1982 ), another scale associated with rational–analytic thinking and expected to be negatively correlated with unsubstantiated beliefs. The NFC was found to be related to open-mindedness and intellectual engagement, two CT dispositions ( Cacioppo et al. 1996 ).
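To give a sense of the CRT items mentioned above, the best-known of them (the bat-and-ball problem from Frederick 2005) states that a bat and a ball together cost $1.10 and that the bat costs $1.00 more than the ball. The intuitive answer of 10 cents violates the stated constraint, which a line of algebra, included here only as an illustration, makes clear:

$$ x + (x + 1.00) = 1.10 \quad\Rightarrow\quad 2x = 0.10 \quad\Rightarrow\quad x = 0.05, $$

so the ball costs 5 cents; giving the reflexive 10-cent answer is scored as intuitive, unreflective responding.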

The cognitive styles associated with DPT also relate to CT dispositions. Thinking critically requires that individuals be disposed to use their reasoning skills to reject unsubstantiated claims ( Bensley 2018 ) and that they be inclined to take a rational–analytic approach rather than relying on their intuitions and feelings. For instance, Bensley et al. ( 2014 ) found that students who endorsed more psychological misconceptions adopted a more intuitive cognitive style, were less disposed to take a rational–scientific approach to psychology, and scored lower on a psychological critical thinking skills test. Further supporting this connection, West et al. ( 2008 ) found that participants who tended to use cognitive heuristics more, thought to be related to intuitive processing and bias, scored lower on a critical thinking measure. As the Bensley et al. ( 2014 ) results suggest, in addition to assessing reasoning skills and dispositions, comprehensive CT assessment research should assess knowledge and unsubstantiated beliefs because these are related to failures of critical thinking.

5. Assessing Critical Thinking and Unsubstantiated Beliefs

Assessing endorsement of unsubstantiated claims provides another way to assess CT outcomes related to everyday thinking, which goes beyond what intelligence tests test ( Bensley and Lilienfeld 2020 ). From the perspective of the multi-dimensional model of CT, endorsement of unsubstantiated claims could result from deficiencies in a person’s CT reasoning skills, a lack of relevant knowledge, or the engagement of inappropriate dispositions. Suppose an individual endorses an unsubstantiated claim, such as believing the conspiracy theory that human-caused global warming is a hoax. The person may lack the specific reasoning skills needed to critically evaluate the conspiracy; Lantian et al. ( 2020 ) found that scores on a CT skills test were negatively correlated with conspiracy theory beliefs. The person may also lack relevant scientific knowledge, such as knowing that each year humans pump about 40 billion metric tons of carbon dioxide into the atmosphere and that carbon dioxide is a greenhouse gas which traps heat in the atmosphere. Or, the person may not be sufficiently scientifically skeptical, or may be too cynical or mistrustful of scientists or governmental officials.

Although endorsing unsubstantiated beliefs is clearly a failure of CT, problems arise in deciding which ones are unsubstantiated, especially when considering conspiracy theories. Typically, the claims which critical thinkers should reject as unsubstantiated are those which are not supported by objective evidence. But of the many conspiracies proposed, few are vigorously examined. Moreover, some conspiracy theories which authorities might initially deny turn out to be real, such as the MK-Ultra theory that the CIA was secretly conducting mind-control research on American citizens.

A way out of this quagmire is to define unsubstantiated beliefs on a continuum which depends on the quality of evidence. This has led to the definition of unsubstantiated claims as assertions which have not been supported by high-quality evidence ( Bensley 2023 ). Those which are supported have the kind of evidentiary support that critical thinkers are expected to value in drawing reasonable conclusions. Instead of insisting that a claim must be demonstrably false to be rejected, we adopt a more tentative acceptance or rejection of claims, based on how much good evidence supports them. Many claims are unsubstantiated because they have not yet been carefully examined and so totally lack support, or they may be supported only by low-quality evidence, such as personal experience, anecdotes, or non-scientific authority. Other claims are more clearly unsubstantiated because they contradict the findings of high-quality research. A critical thinker should be highly skeptical of these.

Psychological misconceptions are one type of claim that can be more clearly unsubstantiated. Psychological misconceptions are commonsense psychological claims (folk theories) about the mind, brain, and behavior that are contradicted by the bulk of high-quality scientific research. The author and colleagues developed the Test of Psychological Knowledge and Misconceptions (TOPKAM), a 40-item, forced-choice measure in which each item pairs a statement of a psychological misconception with an evidence-based alternative ( Bensley et al. 2014 ). They found that higher scores on the APS, the argument analysis test applying psychological concepts to analyze real-world examples, were associated with more correct answers on the TOPKAM. Other studies have found positive correlations between CT skills tests and other measures of psychological misconceptions ( McCutcheon et al. 1992 ; Kowalski and Taylor 2004 ). Bensley et al. ( 2014 ) also found that higher correct TOPKAM scores were positively correlated with scores on the Inventory of Thinking Dispositions in Psychology (ITDP) of Bensley ( 2021 ), a measure of the disposition to take a rational and scientific approach to psychology, but were negatively correlated with an intuitive cognitive style.

Bensley et al. ( 2021 ) conducted a multidimensional study, assessing beginning psychology students starting a CT course on their endorsement of psychological misconceptions, recognition of thinking errors, CT dispositions, and metacognition, before and after CT instruction. Two classes received explicit instruction involving considerable practice in argument analysis and scientific reasoning skills, with one class receiving CT instruction focused more on recognizing psychological misconceptions and a second class focused more on recognizing various thinking errors. Bensley et al. assessed both classes before and after instruction on the TOPKAM and on the Test of Thinking Errors (TOTE), a test of the ability to recognize, in real-world examples, 17 different types of thinking errors, such as confirmation bias, inappropriate use of the availability and representativeness heuristics, reasoning from ignorance/possibility, gambler’s fallacy, and hasty generalization ( Bensley et al. 2021 ). Correct TOPKAM and TOTE scores were positively correlated, and after CT instruction both were positively correlated with the APS, the CT test of argument analysis skills.

Bensley et al. found that after explicit instruction in CT skills, students improved significantly on both the TOPKAM and the TOTE, with the class focusing on recognizing misconceptions improving the most. Also, the students who improved the most on the TOTE scored higher on the REI rational–analytic scale and on the ITDP. The students receiving explicit CT skill instruction in recognizing misconceptions also significantly improved the accuracy of their metacognitive monitoring in estimating their TOPKAM scores after instruction.

Given that before instruction the two classes did not differ in GPA or on the SAT, a proxy for general cognitive ability, CT instruction provides a good account of the improvement in recognition of thinking errors and misconceptions without recourse to intelligence. However, SAT scores were positively correlated with both TOTE and APS scores, suggesting that cognitive ability contributed to CT skill performance. These results replicated the earlier findings of Bensley and Spero ( 2014 ), who showed that explicit CT instruction improved performance on both CT skills tests and metacognitive monitoring accuracy while controlling for SAT scores, which were positively correlated with CT skills test performance.

Taken together, these findings suggest that cognitive ability contributes to performance on CT tasks but that CT instruction goes beyond it to further improve performance. As the results of Bensley et al. ( 2021 ) show, and as discussed next, thinking errors and bias from heuristics are CT failures that should also be assessed because they are related to endorsement of unsubstantiated beliefs and cognitive style.

6. Dual-Processing Theory and Research on Unsubstantiated Beliefs

Consistent with DPT, numerous other studies have obtained significant positive correlations between intuitive cognitive style and paranormal belief, often using the REI intuitive–experiential scale and the Revised Paranormal Belief Scale (RPBS) of Tobacyk ( 2004 ) (e.g., Genovese 2005 ; Irwin and Young 2002 ; Lindeman and Aarnio 2006 ; Pennycook et al. 2015 ; Rogers et al. 2018 ; Saher and Lindeman 2005 ). Studies have also found positive correlations between superstitious belief and intuitive cognitive style (e.g., Lindeman and Aarnio 2006 ; Maqsood et al. 2018 ). REI intuitive–experiential thinking style was also positively correlated with belief in complementary and alternative medicine ( Lindeman 2011 ), conspiracy theory belief ( Alper et al. 2020 ), and with endorsement of psychological misconceptions ( Bensley et al. 2014 ; Bensley et al. 2022 ).

Additional evidence for DPT comes from studies in which REI R-A and NFC scores were negatively correlated with measures of unsubstantiated beliefs, although studies correlating them with measures of paranormal belief and conspiracy theory belief have shown mixed results. Supporting a relationship, REI rational–analytic and NFC scores significantly and negatively predicted paranormal belief ( Lobato et al. 2014 ; Pennycook et al. 2012 ). Other studies have also obtained a negative correlation between NFC and paranormal belief ( Lindeman and Aarnio 2006 ; Rogers et al. 2018 ; Stahl and van Prooijen 2018 ), but both Genovese ( 2005 ) and Pennycook et al. ( 2015 ) found that NFC was not significantly correlated with paranormal belief. Swami et al. ( 2014 ) found that although REI R-A scores were negatively correlated with conspiracy theory belief, NFC scores were not.

Researchers often refer to people who are doubtful of paranormal and other unfounded claims as “skeptics” and so have tested whether measures related to skepticism are associated with less endorsement of unsubstantiated claims. They typically view skepticism as a stance towards unsubstantiated claims taken by rational people who reject them, (e.g., Lindeman and Aarnio 2006 ; Stahl and van Prooijen 2018 ), rather than as a disposition inclining a person to think critically about unsubstantiated beliefs ( Bensley 2018 ).

Fasce and Pico ( 2019 ) conducted one of the few studies using a measure related to skeptical disposition, the Critical Thinking Disposition Scale (CTDS) of Sosu ( 2013 ), in relation to endorsement of unsubstantiated claims. They found that scores on the CTDS were negatively correlated with scores on the RPBS but not significantly correlated with either a measure of pseudoscientific belief or a measure of conspiracy theory belief. However, the CRT was negatively correlated with both the RPBS and the pseudoscience measure. Because Fasce and Pico ( 2019 ) did not examine correlations with the Reflective Skepticism subscale of the CTDS, its contribution apart from the full-scale CTDS could not be determined.

To more directly test skepticism as a disposition, we recently assessed college students on how well three new measures predicted endorsement of psychological misconceptions, paranormal claims, and conspiracy theories ( Bensley et al. 2022 ). The dispositional measures included a measure of general skeptical attitude; a second measure, the Scientific Skepticism Scale (SSS), which focused more on waiting to accept claims until high-quality scientific evidence supported them; and a third measure, the Cynicism Scale (CS), which focused on doubting the sincerity of the motives of scientists and people in general. We found that although the general skepticism scale did not predict any of the unsubstantiated belief measures, SSS scores were a significant negative predictor of both paranormal belief and conspiracy theory belief. REI R-A scores were a less consistent negative predictor, REI I-E scores were a more consistent positive predictor, and, surprisingly, CS scores were the most consistent positive predictor of the unsubstantiated beliefs.

Researchers commonly assume that people who accept implausible, unsubstantiated claims are gullible or not sufficiently skeptical. For instance, van Prooijen ( 2019 ) has argued that conspiracy theory believers are more gullible (less skeptical) than non-believers and tend to accept unsubstantiated claims more than less gullible people. van Prooijen ( 2019 ) reviewed several studies supporting the claim that people who are more gullible tend to endorse conspiracy theories more. However, he did not report any studies in which a gullible disposition was directly measured.

Recently, we directly tested the gullibility hypothesis in relation to scientific skepticism ( Bensley et al. 2023 ), using the Gullibility Scale of Teunisse et al. ( 2019 ), on which people skeptical of the paranormal had been shown to obtain lower scores. We found that Gullibility Scale and Cynicism Scale scores were positively correlated, and both were significant positive predictors of unsubstantiated beliefs in general, consistent with an intuitive–experiential cognitive style. In contrast, we found that scores on the Cognitive Reflection Test, the Scientific Skepticism Scale, and the REI rational–analytic scale were all positively intercorrelated and were significant negative predictors of unsubstantiated beliefs in general, consistent with a rational–analytic/reflective cognitive style. Scientific skepticism scores negatively predicted general endorsement of unsubstantiated claims beyond the REI R-A scale, but neither the CTDS nor the CTDS Reflective Skepticism subscale was a significant predictor. These results replicated findings from the Bensley et al. ( 2022 ) study and supported an elaborated dual-process model of unsubstantiated belief. The SSS was not only a substantial negative predictor; it was also negatively correlated with the Gullibility Scale, as expected.

These results suggest that both CT-related dispositions and CT skills are related to endorsement of unsubstantiated beliefs. However, a measure of general cognitive ability or intelligence must be examined along with measures of CT and unsubstantiated beliefs to determine if CT goes beyond intelligence in predicting unsubstantiated beliefs. In one of the few studies that also included a measure of cognitive ability, Stahl and van Prooijen ( 2018 ) found that dispositional characteristics helped account for acceptance of conspiracies and paranormal belief beyond cognitive ability. Using the Importance of Rationality Scale (IRS), a rational–analytic scale designed to measure skepticism towards unsubstantiated beliefs, Stahl and van Prooijen ( 2018 ) found that the IRS was negatively correlated with paranormal belief and belief in conspiracy theories. In separate hierarchical regressions, cognitive ability was the strongest negative predictor of both paranormal belief and conspiracy belief, and IRS scores in combination with cognitive ability negatively predicted endorsement of paranormal belief but did not significantly predict conspiracy theory belief. These results provided partial support for the claim that a measure of rational–analytic cognitive style related to skeptical disposition adds to the variance accounted for beyond cognitive ability in negatively predicting unsubstantiated belief.

In another study that included a measure of cognitive ability, Cavojova et al. ( 2019 ) examined how CT-related dispositions and the Scientific Reasoning Scale (SRS) were related to a measure of paranormal, pseudoscientific, and conspiracy theory beliefs. The SRS of Drummond and Fischhoff ( 2017 ) likely measures CT skill in that it measures the ability to evaluate scientific research and evidence. As expected, the unsubstantiated belief measure was negatively correlated with the SRS and with a cognitive ability measure similar to Raven’s Progressive Matrices. Unsubstantiated beliefs were positively correlated with dogmatism (the opposite of open-mindedness) but not with REI rational–analytic cognitive style. The SRS was a significant negative predictor of both unsubstantiated belief and susceptibility to bias beyond the contribution of cognitive ability, but neither dogmatism nor analytic thinking was a significant predictor. Nevertheless, this study provides some support that a measure related to CT reasoning skill accounts for variance in unsubstantiated belief beyond cognitive ability.

The failure of this study to show a correlation between rational–analytic cognitive style and unsubstantiated beliefs, when some other studies have found significant correlations with it and related measures, has implications for the multidimensional assessment of unsubstantiated beliefs. One implication is that the REI rational–analytic scale may not be a strong predictor of unsubstantiated beliefs. In fact, we have recently found that the Scientific Skepticism Scale was a stronger negative predictor ( Bensley et al. 2022 ; Bensley et al. 2023 ), which also suggests that other measures related to rational–analytic thinking styles should be examined. This could help triangulate the contribution of self-report cognitive style measures to endorsement of unsubstantiated claims, recognizing that the use of self-report measures has a checkered history in psychological research. A second implication is that once again, measures of critical thinking skill and cognitive ability were negative predictors of unsubstantiated belief and so they, too, should be included in future assessments of unsubstantiated beliefs.

7. Discussion

This review provided different lines of evidence supporting the claim that CT goes beyond cognitive ability in accounting for certain real-world outcomes. Participants who thought more critically reported fewer of the negative outcomes in everyday functioning that would not be expected to befall critical thinkers. People who endorsed unsubstantiated claims less showed better CT skills, more accurate domain-specific knowledge, and less susceptibility to thinking errors and bias, and they were more disposed to think critically. More specifically, they tended to be more scientifically skeptical and to adopt a more rational–analytic cognitive style. In contrast, those who endorsed such claims more tended to be more cynical and to adopt an intuitive–experiential cognitive style. These characteristics go beyond what standardized intelligence tests test. In some studies, the CT measures accounted for additional variance beyond the variance contributed by general cognitive ability.

That is not to say that measures of general cognitive ability are not useful. As noted by Gottfredson ( 2004 ), “g” is a highly successful predictor of academic and job performance. More is known about g and Gf than about many other psychological constructs. On average, g is closely related to Gf, which is highly correlated with working memory ( r = 0.70) and can be as high as r = 0.77 ( r² = 0.60) based on a correlated two-factor model ( Gignac 2014 ). Because modern working memory theory is, itself, a powerful theory ( Chai et al. 2018 ), this lends construct validity to the fluid intelligence construct. Although cognitive scientists have clearly made progress in understanding the executive processes underlying intelligence, they have not yet identified the specific cognitive components of intelligence ( Sternberg 2022 ). Moreover, theorists have acknowledged that intelligence must also include components beyond g, including domain-specific knowledge ( Ackerman 2022 ; Conway and Kovacs 2018 ), which are not yet clearly understood.
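The parenthetical shared-variance figure follows from squaring the correlation; as a quick check of the numbers cited above (an illustrative calculation, not an additional result):

$$ r^2 = 0.77^2 \approx 0.59 \approx 0.60, $$

that is, on the correlated two-factor model roughly 60% of the variance in fluid intelligence is shared with working memory capacity.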

This review also pointed to limitations in the research that should be addressed. So far, not only have few studies of unsubstantiated beliefs included measures of intelligence, but those that have often relied on proxies for intelligence test scores, such as the SAT. Future studies, besides using more and better measures of intelligence, could benefit from inclusion of more specifically focused measures, such as measures of Gf and Gc. Also, more research should be carried out to develop additional high-quality measures of CT, including ones that assess specific reasoning skills and knowledge relevant to thinking about a subject, which could help resolve perennial questions about the domain-general versus domain-specific nature of intelligence and CT. Overall, the results of this review encourage taking a multidimensional approach to investigating the complex constructs of intelligence, CT, and unsubstantiated belief. Supporting these recommendations are the results of studies in which the improvement accrued from explicit CT skill instruction could be more fully understood when CT skills, relevant knowledge, CT dispositions, metacognitive monitoring accuracy, and a proxy for intelligence were all assessed.

8. Conclusions

Critical thinking, broadly conceived, offers ways to understand real-world outcomes of thinking beyond what general cognitive ability can provide and what intelligence tests test. A multi-dimensional view of CT, which includes specific reasoning and metacognitive skills, CT dispositions, and relevant knowledge, can add to our understanding of why some people endorse unsubstantiated claims more than others do. Although general cognitive ability and domain-general knowledge often contribute to performance on CT tasks, thinking critically about real-world questions also involves applying rules, criteria, and knowledge which are specific to the question under consideration, as well as the appropriate dispositions and cognitive styles for deploying them.

Despite the advantages of taking this multidimensional approach to CT in helping us to more fully understand everyday thinking and irrationality, it presents challenges for researchers and instructors. It implies the need to assess and instruct multidimensionally, addressing not only reasoning skills but also thinking errors and biases, dispositions, the knowledge relevant to a task, and the accuracy of metacognitive judgments. As noted by Dwyer ( 2023 ), adopting a more complex conceptualization of CT beyond just skills is needed, but it presents challenges for those seeking to improve students’ CT. Nevertheless, the research reviewed suggests that taking this multidimensional approach to CT can enhance our understanding of the endorsement of unsubstantiated claims beyond what standardized intelligence tests contribute. More research is needed to resolve remaining controversies and to develop evidence-based applications of the findings.

Funding Statement

This research received no external funding.

Institutional Review Board Statement

This research involved no new testing of participants and hence did not require Institutional Review Board approval.

Informed Consent Statement

This research involved no new testing of participants and hence did not require an Informed Consent Statement.

Data Availability Statement

Conflicts of Interest

The author declares no conflict of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

  • Ackerman Phillip L. Intelligence … Moving beyond the lowest common denominator. American Psychologist. 2022; 78: 283–97. doi: 10.1037/amp0001057.
  • Alper Sinan, Bayrak Fatih, Yilmaz Onurcan. Psychological correlates of COVID-19 conspiracy beliefs and preventive measures: Evidence from Turkey. Current Psychology. 2020; 40: 5708–17. doi: 10.1007/s12144-020-00903-0.
  • Bensley D. Alan. Critical Thinking in Psychology and Everyday Life: A Guide to Effective Thinking. Worth Publishers; New York: 2018.
  • Bensley D. Alan. The Critical Thinking in Psychology Assessment Battery (CTPAB) and Test Guide. 2021. Unpublished manuscript. Frostburg State University, Frostburg, MD, USA.
  • Bensley D. Alan. “I can’t believe you believe that”: Identifying unsubstantiated claims. Skeptical Inquirer. 2023; 47: 53–56.
  • Bensley D. Alan, Spero Rachel A. Improving critical thinking skills and metacognitive monitoring through direct infusion. Thinking Skills and Creativity. 2014; 12: 55–68. doi: 10.1016/j.tsc.2014.02.001.
  • Bensley D. Alan, Lilienfeld Scott O. Assessment of unsubstantiated beliefs. Scholarship of Teaching and Learning in Psychology. 2020; 6: 198–211. doi: 10.1037/stl0000218.
  • Bensley D. Alan, Masciocchi Christopher M., Rowan Krystal A. A comprehensive assessment of explicit critical thinking instruction on recognition of thinking errors and psychological misconceptions. Scholarship of Teaching and Learning in Psychology. 2021; 7: 107. doi: 10.1037/stl0000188.
  • Bensley D. Alan, Watkins Cody, Lilienfeld Scott O., Masciocchi Christopher, Murtagh Michael, Rowan Krystal. Skepticism, cynicism, and cognitive style predictors of the generality of unsubstantiated belief. Applied Cognitive Psychology. 2022; 36: 83–99. doi: 10.1002/acp.3900.
  • Bensley D. Alan, Rodrigo Maria, Bravo Maria, Jocoy Kathleen. Dual-Process Theory and Cognitive Style Predictors of the General Endorsement of Unsubstantiated Claims. 2023. Unpublished manuscript. Frostburg State University, Frostburg, MD, USA.
  • Bensley D. Alan, Lilienfeld Scott O., Powell Lauren. A new measure of psychological misconceptions: Relations with academic background, critical thinking, and acceptance of paranormal and pseudoscientific claims. Learning and Individual Differences. 2014; 36: 9–18. doi: 10.1016/j.lindif.2014.07.009.
  • Bierwiaczonek Kinga, Kunst Jonas R., Pich Olivia. Belief in COVID-19 conspiracy theories reduces social distancing over time. Applied Psychology: Health and Well-Being. 2020; 12: 1270–85. doi: 10.1111/aphw.12223.
  • Butler Heather A. Halpern Critical Thinking Assessment predicts real-world outcomes of critical thinking. Applied Cognitive Psychology. 2012; 26: 721–29. doi: 10.1002/acp.2851.
  • Butler Heather A., Halpern Diane F. Is critical thinking a better model of intelligence? In: Sternberg Robert J., editor. The Nature of Intelligence. Cambridge University Press; Cambridge: 2019. pp. 183–96.
  • Butler Heather A., Pentoney Christopher, Bong Maebelle P. Predicting real-world outcomes: Critical thinking ability is a better predictor of life decisions than intelligence. Thinking Skills and Creativity. 2017; 25: 38–46. doi: 10.1016/j.tsc.2017.06.005.
  • Byrnes James P., Dunbar Kevin N. The nature and development of critical-analytic thinking. Educational Psychology Review. 2014; 26: 477–93. doi: 10.1007/s10648-014-9284-0.
  • Cacioppo John T., Petty Richard E. The need for cognition. Journal of Personality and Social Psychology. 1982; 42: 116–31. doi: 10.1037/0022-3514.42.1.116.
  • Cacioppo John T., Petty Richard E., Feinstein Jeffrey A., Jarvis W. Blair G. Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin. 1996; 119: 197. doi: 10.1037/0033-2909.119.2.197.
  • Cavojova Vladimira, Srol Jakub, Jurkovic Marek. Why should we think like scientists? Scientific reasoning and susceptibility to epistemically suspect beliefs and cognitive biases. Applied Cognitive Psychology. 2019; 34: 85–95. doi: 10.1002/acp.3595.
  • Chai Wen Jia, Abd Hamid Aini Ismafairus, Abdullah Jafri Malin. Working memory from the psychological and neuroscience perspective. Frontiers in Psychology. 2018; 9: 401. doi: 10.3389/fpsyg.2018.00401.
  • Conway Andrew R., Kovacs Kristof. The nature of the general factor of intelligence. In: Sternberg Robert J., editor. The Nature of Human Intelligence. Cambridge University Press; Cambridge: 2018. pp. 49–63.
  • Drummond Caitlin, Fischhoff Baruch. Development and validation of the Scientific Reasoning Scale. Journal of Behavioral Decision Making. 2017; 30: 26–38. doi: 10.1002/bdm.1906.
  • Dwyer Christopher P. Critical Thinking: Conceptual Perspectives and Practical Guidelines. Cambridge University Press; Cambridge: 2017.
  • Dwyer Christopher P. An evaluative review of barriers to critical thinking in educational and real-world settings. Journal of Intelligence. 2023; 11: 105. doi: 10.3390/jintelligence11060105.
  • Ennis Robert H. A taxonomy of critical thinking dispositions and abilities. In: Baron Joan, Sternberg Robert, editors. Teaching Thinking Skills: Theory and Practice. W. H. Freeman; New York: 1987.
  • Epstein Seymour. Intuition from the perspective of cognitive-experiential self-theory. In: Plessner Henning, Betsch Tilmann, editors. Intuition in Judgment and Decision Making. Erlbaum; Washington, DC: 2008. pp. 23–37.
  • Fasce Angelo, Pico Alfonso. Science as a vaccine: The relation between scientific literacy and unwarranted beliefs. Science & Education. 2019; 28: 109–25. doi: 10.1007/s11191-018-00022-0.
  • Frederick Shane. Cognitive reflection and decision making. Journal of Economic Perspectives. 2005; 19: 25–42. doi: 10.1257/089533005775196732.
  • Gardner Howard. Intelligence Reframed: Multiple Intelligences for the 21st Century. Basic Books; New York: 1999.
  • Genovese Jeremy E. C. Paranormal beliefs, schizotypy, and thinking styles among teachers and future teachers. Personality and Individual Differences. 2005; 39: 93–102. doi: 10.1016/j.paid.2004.12.008.
  • Gignac Gilles E. Fluid intelligence shares closer to 60% of its variance with working memory capacity and is a better indicator of general intelligence. Intelligence. 2014; 47: 122–33. doi: 10.1016/j.intell.2014.09.004.
  • Gottfredson Linda S. Life, death, and intelligence. Journal of Cognitive Education and Psychology. 2004; 4: 23–46. doi: 10.1891/194589504787382839.
  • Halpern Diane F., Dunn Dana. Critical thinking: A model of intelligence for solving real-world problems. Journal of Intelligence. 2021; 9: 22. doi: 10.3390/jintelligence9020022.
  • Halpern Diane F., Butler Heather A. Is critical thinking a better model of intelligence? In: Sternberg Robert J., editor. The Nature of Human Intelligence. Cambridge University Press; Cambridge: 2018. pp. 183–96.
  • Irwin Harvey J., Young J. M. Intuitive versus reflective processes in the formation of paranormal beliefs. European Journal of Parapsychology. 2002; 17: 45–55.
  • Jolley Daniel, Paterson Jenny L. Pylons ablaze: Examining the role of 5G COVID-19 conspiracy beliefs and support for violence. British Journal of Social Psychology. 2020; 59: 628–40. doi: 10.1111/bjso.12394.
  • Kahneman Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux; New York: 2011.
  • Kowalski Patricia, Taylor Annette J. Ability and critical thinking as predictors of change in students’ psychological misconceptions. Journal of Instructional Psychology. 2004; 31: 297–303.
  • Ku Kelly Y. L., Ho Irene T. Dispositional factors predicting Chinese students’ critical thinking performance. Personality and Individual Differences. 2010; 48: 54–58. doi: 10.1016/j.paid.2009.08.015.
  • Kunda Ziva. The case for motivated reasoning. Psychological Bulletin. 1990; 108: 480–98. doi: 10.1037/0033-2909.108.3.480.
  • Lantian Anthony, Bagneux Virginie, Delouvee Sylvain, Gauvrit Nicolas. Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability. Applied Cognitive Psychology. 2020; 35: 674–84. doi: 10.1002/acp.3790.
  • Lilienfeld Scott O. Psychological treatments that cause harm. Perspectives on Psychological Science. 2007; 2: 53–70. doi: 10.1111/j.1745-6916.2007.00029.x.
  • Lindeman Marjaana. Biases in intuitive reasoning and belief in complementary and alternative medicine. Psychology and Health. 2011; 26: 371–82. doi: 10.1080/08870440903440707.
  • Lindeman Marjaana, Aarnio Kia. Paranormal beliefs: Their dimensionality and correlates. European Journal of Personality. 2006; 20: 585–602.
  • Lobato Emilio J., Mendoza Jorge, Sims Valerie, Chin Matthew. Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Applied Cognitive Psychology. 2014; 28: 617–25. doi: 10.1002/acp.3042.
  • Maqsood Alisha, Jamil Farhat, Khalid Ruhi. Thinking styles and belief in superstitions: Moderating role of gender in young adults. Pakistan Journal of Psychological Research. 2018; 33: 335–48.
  • McCutcheon Lynn E., Apperson Jennifer M., Hanson Esther, Wynn Vincent. Relationships among critical thinking skills, academic achievement, and misconceptions about psychology. Psychological Reports. 1992; 71: 635–39. doi: 10.2466/pr0.1992.71.2.635.
  • McGrew Kevin S. CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence. 2009; 37: 1–10. doi: 10.1016/j.intell.2008.08.004.
  • Morgan Jonathan. Religion and dual-process cognition: A continuum of styles or distinct types? Religion, Brain & Behavior. 2016; 6: 112–29. doi: 10.1080/2153599X.2014.966315.
  • Nie Fanhao, Olson Daniel V. A. Demonic influence: The negative mental health effects of belief in demons. Journal for the Scientific Study of Religion. 2016; 55: 498–515. doi: 10.1111/jssr.12287.
  • Pacini Rosemary, Epstein Seymour. The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon. Journal of Personality and Social Psychology. 1999; 76: 972–87. doi: 10.1037/0022-3514.76.6.972.
  • Patel Niraj, Baker S. Glenn, Scherer Laura D. Evaluating the cognitive reflection test as a measure of intuition/reflection, numeracy, and insight problem solving, and the implications for understanding real-world judgments and beliefs. Journal of Experimental Psychology: General. 2019; 148: 2129–53. doi: 10.1037/xge0000592.
  • Pennycook Gordon, Cheyne James Allan, Barr Nathaniel, Koehler Derek J., Fugelsang Jonathan A. On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making. 2015; 10: 549–63. doi: 10.1017/S1930297500006999.
  • Pennycook Gordon, Cheyne James Allan, Seli Paul, Koehler Derek J., Fugelsang Jonathan A. Analytic cognitive style predicts religious and paranormal belief. Cognition. 2012; 123: 335–46. doi: 10.1016/j.cognition.2012.03.003.
  • Ren Xuezhu, Tong Yan, Peng Peng, Wang Tengfei. Critical thinking predicts academic performance beyond cognitive ability: Evidence from adults and children. Intelligence. 2020; 82: 101487. doi: 10.1016/j.intell.2020.101487.
  • Rogers Paul, Fisk John E., Lowrie Emma. Paranormal belief, thinking style preference and susceptibility to confirmatory conjunction errors. Consciousness and Cognition. 2018; 65: 182–95. doi: 10.1016/j.concog.2018.07.013.
  • Saher Marieke, Lindeman Marjaana. Alternative medicine: A psychological perspective. Personality and Individual Differences. 2005; 39: 1169–78. doi: 10.1016/j.paid.2005.04.008.
  • Sosu Edward M. The development and psychometric validation of a Critical Thinking Disposition Scale. Thinking Skills and Creativity. 2013; 9: 107–19. doi: 10.1016/j.tsc.2012.09.002.
  • Stahl Tomas, van Prooijen Jan-Willem. Epistemic rationality: Skepticism toward unfounded beliefs requires sufficient cognitive ability and motivation to be rational. Personality and Individual Differences. 2018; 122: 155–63. doi: 10.1016/j.paid.2017.10.026.
  • Stanovich Keith E. What Intelligence Tests Miss: The Psychology of Rational Thought. Yale University Press; New Haven: 2009.
  • Stanovich Keith E., West Richard F. Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology. 1997; 89: 342–57. doi: 10.1037/0022-0663.89.2.342.
  • Stanovich Keith E., West Richard F. Natural myside bias is independent of cognitive ability. Thinking & Reasoning. 2007; 13: 225–47.
  • Stanovich Keith E., West Richard F. On the failure of cognitive ability to predict myside and one-sided thinking biases. Thinking & Reasoning. 2008; 14: 129–67. doi: 10.1080/13546780701679764.
  • Stanovich Keith E., West Richard F., Toplak Maggie E. The Rationality Quotient: Toward a Test of Rational Thinking. The MIT Press; Cambridge, MA: 2018.
  • Sternberg Robert J. The Triarchic Mind: A New Theory of Human Intelligence. Penguin Press; London: 1988.
  • Sternberg Robert J. A theory of adaptive intelligence and its relation to general intelligence. Journal of Intelligence. 2019; 7: 23. doi: 10.3390/jintelligence7040023.
  • Sternberg Robert J. The search for the elusive basic processes underlying human intelligence: Historical and contemporary perspectives. Journal of Intelligence. 2022; 10: 28. doi: 10.3390/jintelligence10020028.
  • Swami Viren, Voracek Martin, Stieger Stefan, Tran Ulrich S., Furnham Adrian. Analytic thinking reduces belief in conspiracy theories. Cognition. 2014; 133: 572–85. doi: 10.1016/j.cognition.2014.08.006.
  • Teunisse Alessandra K., Case Trevor I., Fitness Julie, Sweller Naomi. I should have known better: Development of a self-report measure of gullibility. Personality and Social Psychology Bulletin. 2019; 46: 408–23. doi: 10.1177/0146167219858641.
  • Tobacyk Jerome J. A revised paranormal belief scale. The International Journal of Transpersonal Studies. 2004; 23: 94–98. doi: 10.24972/ijts.2004.23.1.94.
  • van der Linden Sander. The conspiracy-effect: Exposure to conspiracy theories (about global warming) decreases pro-social behavior and science acceptance. Personality and Individual Differences. 2015; 87: 173–75. doi: 10.1016/j.paid.2015.07.045.
  • van Prooijen Jan-Willem. Belief in conspiracy theories: Gullibility or rational skepticism? In: Forgas Joseph P., Baumeister Roy F., editors. The Social Psychology of Gullibility: Fake News, Conspiracy Theories, and Irrational Beliefs. Routledge; London: 2019. pp. 319–32.
  • Wechsler David. The Measurement of Intelligence. 3rd ed. Williams & Wilkins; Baltimore: 1944.
  • West Richard F., Toplak Maggie E., Stanovich Keith E. Heuristics and biases as measures of critical thinking: Associations with cognitive ability and thinking dispositions. Journal of Educational Psychology. 2008; 100: 930–41. doi: 10.1037/a0012842.

Warren Berger

A Crash Course in Critical Thinking

What you need to know—and read—about one of the essential skills needed today.

Posted April 8, 2024 | Reviewed by Michelle Quirk

  • In research for "A More Beautiful Question," I did a deep dive into the current crisis in critical thinking.
  • Many people may think of themselves as critical thinkers, but they actually are not.
  • Here is a series of questions you can ask yourself to try to ensure that you are thinking critically.

Conspiracy theories. Inability to distinguish facts from falsehoods. Widespread confusion about who and what to believe.

These are some of the hallmarks of the current crisis in critical thinking—which just might be the issue of our times. Because if people aren’t willing or able to think critically as they choose potential leaders, they’re apt to choose bad ones. And if they can’t judge whether the information they’re receiving is sound, they may follow faulty advice while ignoring recommendations that are science-based and solid (and perhaps life-saving).

Moreover, as a society, if we can’t think critically about the many serious challenges we face, it becomes more difficult to agree on what those challenges are—much less solve them.

On a personal level, critical thinking can enable you to make better everyday decisions. It can help you make sense of an increasingly complex and confusing world.

In the new expanded edition of my book A More Beautiful Question (AMBQ), I took a deep dive into critical thinking. Here are a few key things I learned.

First off, before you can get better at critical thinking, you should understand what it is. It’s not just about being a skeptic. When thinking critically, we are thoughtfully reasoning, evaluating, and making decisions based on evidence and logic. And—perhaps most important—while doing this, a critical thinker always strives to be open-minded and fair-minded. That’s not easy: It demands that you constantly question your assumptions and biases and that you always remain open to considering opposing views.

In today’s polarized environment, many people think of themselves as critical thinkers simply because they ask skeptical questions—often directed at, say, certain government policies or ideas espoused by those on the “other side” of the political divide. The problem is, they may not be asking these questions with an open mind or a willingness to fairly consider opposing views.

When people do this, they’re engaging in “weak-sense critical thinking”—a term popularized by the late Richard Paul, a co-founder of The Foundation for Critical Thinking. “Weak-sense critical thinking” means applying the tools and practices of critical thinking—questioning, investigating, evaluating—but with the sole purpose of confirming one’s own bias or serving an agenda.

In AMBQ, I lay out a series of questions you can ask yourself to try to ensure that you’re thinking critically. Here are some of the questions to consider:

  • Why do I believe what I believe?
  • Are my views based on evidence?
  • Have I fairly and thoughtfully considered differing viewpoints?
  • Am I truly open to changing my mind?

Of course, becoming a better critical thinker is not as simple as just asking yourself a few questions. Critical thinking is a habit of mind that must be developed and strengthened over time. In effect, you must train yourself to think in a manner that is more effortful, aware, grounded, and balanced.

For those interested in giving themselves a crash course in critical thinking—something I did myself, as I was working on my book—I thought it might be helpful to share a list of some of the books that have shaped my own thinking on this subject. As a self-interested author, I naturally would suggest that you start with the new 10th-anniversary edition of A More Beautiful Question, but beyond that, here are the top eight critical-thinking books I’d recommend.

The Demon-Haunted World: Science as a Candle in the Dark, by Carl Sagan

This book simply must top the list, because the late scientist and author Carl Sagan continues to be such a bright shining light in the critical thinking universe. Chapter 12 includes the details on Sagan’s famous “baloney detection kit,” a collection of lessons and tips on how to deal with bogus arguments and logical fallacies.

Clear Thinking: Turning Ordinary Moments Into Extraordinary Results, by Shane Parrish

The creator of the Farnam Street website and host of the “Knowledge Project” podcast explains how to contend with biases and unconscious reactions so you can make better everyday decisions. It contains insights from many of the brilliant thinkers Shane has studied.

Good Thinking: Why Flawed Logic Puts Us All at Risk and How Critical Thinking Can Save the World, by David Robert Grimes

A brilliant, comprehensive 2021 book on critical thinking that, to my mind, hasn’t received nearly enough attention. The scientist Grimes dissects bad thinking, shows why it persists, and offers the tools to defeat it.

Think Again: The Power of Knowing What You Don't Know, by Adam Grant

Intellectual humility—being willing to admit that you might be wrong—is what this book is primarily about. But Adam, the renowned Wharton psychology professor and bestselling author, takes the reader on a mind-opening journey with colorful stories and characters.

Think Like a Detective: A Kid's Guide to Critical Thinking, by David Pakman

The popular YouTuber and podcast host Pakman—normally known for talking politics—has written a terrific primer on critical thinking for children. The illustrated book presents critical thinking as a “superpower” that enables kids to unlock mysteries and dig for truth. (I also recommend Pakman’s second kids’ book, Think Like a Scientist.)

Rationality: What It Is, Why It Seems Scarce, Why It Matters, by Steven Pinker

The Harvard psychology professor Pinker tackles conspiracy theories head-on but also explores concepts involving risk/reward, probability and randomness, and correlation/causation. And if that strikes you as daunting, be assured that Pinker makes it lively and accessible.

How Minds Change: The Surprising Science of Belief, Opinion and Persuasion, by David McRaney

David is a science writer who hosts the popular podcast “You Are Not So Smart” (and his ideas are featured in A More Beautiful Question). His well-written book looks at ways you can actually get through to people who see the world very differently than you (hint: bludgeoning them with facts definitely won’t work).

A Healthy Democracy's Best Hope: Building the Critical Thinking Habit, by M. Neil Browne and Chelsea Kulhanek

Neil Browne, author of the seminal Asking the Right Questions: A Guide to Critical Thinking, has been a pioneer in presenting critical thinking as a question-based approach to making sense of the world around us. His newest book, co-authored with Chelsea Kulhanek, breaks down critical thinking into “11 explosive questions”—including the “priors question” (which challenges us to question assumptions), the “evidence question” (focusing on how to evaluate and weigh evidence), and the “humility question” (which reminds us that a critical thinker must be humble enough to consider the possibility of being wrong).

Warren Berger

Warren Berger is a longtime journalist and author of A More Beautiful Question.


How to Evaluate Sources Using Critical Thinking: A Concise Guide to Informed Research

In today’s world, the internet provides us with a wealth of information, but not all of it is trustworthy. Knowing how to evaluate sources is an essential skill, especially when conducting research or seeking reliable information. Critical thinking plays a vital role in this process, as it allows individuals to assess the credibility, relevance, and quality of sources.

When evaluating sources, it is crucial to identify the purpose and audience of the content, as well as the objectivity and potential biases that may be present. In addition, it is important to assess the relevance and value of the information provided, and consider its currency and accuracy. Recognizing any potential errors or limitations in the sources can help eliminate unreliable information from your research or decision-making process.

Key Takeaways

  • Critical thinking is essential for evaluating the credibility, relevance, and quality of sources.
  • Consideration of purpose, audience, and objectivity helps in assessing the reliability of information.
  • Evaluating currency, accuracy, and potential limitations leads to well-informed decision making.

Identifying Reliable Sources

When conducting research, it is essential to use credible sources to ensure the accuracy and validity of your work. To identify reliable sources, it’s crucial to employ critical thinking skills and assess each source’s authority, publisher, credentials, and affiliations.

A credible source is typically authored by a person with relevant expertise in the field. Assessing the author’s credentials, such as their degrees, certifications, or professional experience, can provide insight into their authority on the subject. Additionally, considering the author’s affiliations and potential biases can help determine the reliability of the information presented.

Publisher reputation is another important factor when evaluating sources. Reputable publishers, such as peer-reviewed journals, academic institutions, or respected news organizations, undergo rigorous editorial processes to ensure the accuracy of their content. Checking the publisher’s guidelines and standards can provide further assurance of a source’s reliability.

When analyzing the content itself, the presence of citations and references adds credibility, as it demonstrates that the author’s claims are based on existing research. Well-reasoned and balanced arguments, supported by evidence, are also indicators of a reputable source.

In some cases, it may be useful to compare multiple sources to validate the information and identify potential discrepancies. This is particularly helpful in situations where not all sources may be equally trustworthy.

Applying these critical thinking skills can help ensure that the sources you choose for your research are reliable and contribute to a well-founded and accurate final product.

Assessing Source Credibility

Evaluating sources using critical thinking is essential for determining their credibility. Assessing source credibility involves examining various aspects of the information and the source itself to ensure its accuracy, relevance, and trustworthiness.

The first step in assessing credibility is to examine the relevance of the source to the research topic. The information provided should directly contribute to understanding, answering, or supporting the subject in question. Irrelevant sources can lead to inaccurate conclusions and detract from the validity of the research.

Credibility is another important factor to consider. This involves evaluating whether the source is authored by experts in the field, affiliated with a reputable institution, or published in a well-known and respected journal or platform. A credible source will have expertise or qualifications that lend authority to the presented information.

Determining the trustworthiness of a source can be achieved by considering the accuracy of the information and data provided. Cross-referencing the mentioned facts with other reputable sources may reveal inconsistencies that suggest unreliable information. Trustworthy sources often cite their sources and provide enough detail to verify their claims.

Bias and potential biases also play a role in assessing source credibility. This involves identifying any underlying perspectives, personal or institutional, that may affect the reliability of the information. A biased source may present skewed data or manipulate the facts to support a specific agenda. To avoid falling for biased information, consider searching for opposing views and alternate explanations that challenge the original source’s claims.

In summary, evaluating source credibility requires critical thinking applied to various aspects of the information and its source. Considering factors such as relevance, credibility, trustworthiness, and potential biases enables researchers to confidently select sources that contribute to the accuracy and validity of their work.

The Role of Critical Thinking Skills

Critical thinking skills play a crucial role in evaluating sources for their credibility, relevance, and accuracy. These skills involve the use of logic, analysis, and reflection to ensure that the information gathered from various sources is trustworthy and reliable. By applying critical thinking, individuals can make informed decisions and develop well-founded arguments, both in academic and professional settings.

One aspect of critical thinking involves questioning the information presented by a source. This includes assessing the author’s credentials, the publication date, and the source’s purpose. By considering these factors, individuals can determine if the source is biased, outdated, or unreliable. For example, examining the author’s expertise in the subject matter helps to establish whether the information provided can be trusted.

In addition to questioning the source, critical thinking skills also involve analyzing the evidence and arguments presented. This process requires a careful examination of the logic, consistency, and coherence of the information. By looking for gaps in reasoning or identifying unsupported claims, individuals can assess the strength of the source’s argument and its relevance to their own research or project.

Experience also plays a significant role in developing and applying critical thinking skills. As individuals encounter various sources and engage in different research projects, they can become more adept at identifying trustworthy information. Experience helps refine their ability to discern between credible and unreliable sources, ensuring that the evidence used in their work is accurate and well-founded.

In conclusion, critical thinking skills are vital for evaluating sources and determining their credibility, relevance, and accuracy. By utilizing logic, analysis, questioning, and experience, individuals can ensure that the information they gather is reliable, unbiased, and valuable for their purposes.

Understanding the Purpose and Audience

Evaluating sources using critical thinking involves understanding the purpose and audience of a piece of information. This allows readers to assess the credibility and relevance of a source more effectively.

One of the first steps in evaluating a source is to identify its purpose. The purpose may be to inform, persuade, entertain, or express an opinion. Knowing the purpose behind the content helps readers determine whether the information provided aligns with their own goals and research interests. For example, an academic article seeking to inform would differ in tone and depth from a blog post expressing a personal opinion.

Considering the audience also plays a significant role in critically evaluating sources. Different sources might be aimed at different audiences, such as experts, general readers, or specific demographic groups. Assessing the intended audience assists in determining the suitability of a source for one’s research. A source intended for experts might be more in-depth and technical, while a general audience source could provide a broader overview.

The scope and depth of a source should also be analyzed during the evaluation process. Scope refers to the range of topics covered, whereas depth refers to the level of detail provided. Readers should ensure that the source adequately covers the subject matter they are researching. A source with greater depth and a narrow focus might be more suitable for specialized research, while a broader source could be useful for general understanding.

Tone is an important aspect of a source that can reveal the author’s perspective and potential biases. A neutral tone indicates an objective approach, while passionate or persuasive tones could indicate bias or an attempt to sway readers’ opinions. Recognizing the tone helps readers better understand the author’s intent and the reliability of the information provided.

By confidently assessing the purpose, audience, scope, depth, and tone of a source, readers can better evaluate the credibility and relevance of the information provided. This critical thinking approach supports well-informed research and decision-making.

Evaluating the Quality of Information

Evaluating the quality of information is an essential aspect of critical thinking and information literacy. As researchers, we must be able to determine the credibility and relevance of the sources we use for valid and accurate decision-making. To achieve this, apply consistent standards when evaluating sources, and be aware of your own biases and assumptions when encountering information.

The first step in evaluating the quality of information is to examine its source. Consider factors such as the author’s qualifications, the publication date, and the publisher’s reputation. Sources should be both recent and reputable to ensure that the data and facts presented are accurate and up-to-date.

In addition to the source, the content of the information itself should be scrutinized. Look for well-structured arguments, reliable evidence, and comprehensive data. Information should be easily verifiable, and any claims made by the author should be supported by appropriate facts or data. It’s also crucial to identify any limitations or potential biases present in the information.

Compare the information you’ve found with other sources, looking for consistency and agreement among multiple sources. This process not only helps to corroborate the credibility of the information but also highlights areas where further research may be needed.

Another useful approach to evaluating information quality is the application of information literacy. This involves understanding the purpose of the information, its intended audience, and any potential consequences of using the information. By keeping these factors in mind, you can better assess the suitability of the information for your specific research needs.

In summary, when evaluating the quality of information, be sure to consider factors such as the source’s credibility, the content’s relevance and accuracy, and the application of information literacy. By maintaining a confident, knowledgeable, and neutral approach in your assessment, you can ensure the information you use is of high quality and supports your research effectively.

Determining the Relevance and Value of Sources

Evaluating sources using critical thinking involves assessing their relevance and value to your research topic. Relevance refers to how closely the source’s information aligns with your research question, while value indicates the contribution it makes to your understanding of the topic.

To determine the relevance of a source, first consider whether its content directly addresses your research question or provides information that is applicable to your topic. Analyze the main arguments and conclusions of the source to see if they align with your research goals. Also, take note of any biases or opinions the author may have that could affect the source’s relevance.

In addition, it is essential to assess the value of a source by carefully examining its arguments and the evidence supporting them. A valuable source will present well-reasoned, logical arguments backed by appropriate evidence. Check for any claims that appear to be exaggerated, misleading, or false as these can harm the credibility of the source.

When evaluating the value of sources, it is useful to consider the following factors:

  • Authority: Assess the author’s credentials and expertise to determine if they hold the necessary qualifications to speak on the subject matter. Experts in a particular field are likely to provide more valuable and reliable information.
  • Accuracy: Ensuring the source contains accurate information is crucial. Verify that the data and facts presented in the source are accurate and compare them with other credible sources, if necessary.
  • Timeliness: The currency of the information is another essential factor to consider. In some fields, especially rapidly changing ones, older information may be outdated, and newer sources are more valuable.
  • Objectivity: A valuable source should present a balanced perspective on the topic. Be cautious of sources that promote a single viewpoint or are heavily influenced by the author’s personal opinions. Objective sources are more likely to provide value to your research.

By considering these factors while evaluating sources using critical thinking, you can effectively determine their relevance and value, ensuring the quality and credibility of your research.

Investigating the Currency and Timing of Sources

When evaluating sources, it is crucial to investigate the currency and timing of the information presented. This involves assessing whether the content is up-to-date, relevant, and appropriate for the topic being researched. Evaluating the currency of a source helps to determine its overall reliability and credibility for the research.

Firstly, consider the publication or posting date of the information. Determine if it’s recent and whether the content has been revised or updated since it was initially published. A source that is current and up-to-date indicates that the author is actively maintaining the information, which could lead to more reliable conclusions in the research. Keep in mind that the importance of currency may vary depending on the topic and the discipline. In some fields, like technology and medicine, current sources are crucial, while in others, like history or literature, older sources may still be relevant.

To further assess the currency of a source, examine its references and citations. Do they include recent research and publications? Are there any discrepancies between the cited sources and the content of the material being assessed? If the references consist mainly of older publications, consider whether the author has overlooked recent or updated research that could impact their conclusions. In addition, take note of any data or statistics used, ensuring that they come from reputable sources and maintain relevance to the topic at hand.

Another aspect to consider is the agreement among experts in the field. If the majority of experts in the area of research concur on the information presented by the source, this could be an indicator of the source’s currency and reliability. On the other hand, if there seems to be significant disagreement about the content, it might be worth exploring more current sources to see if any new data, research or updates have emerged that could impact the credibility of the information.

In sum, investigating the currency and timing of sources is an essential step in evaluating their validity for a research project. By examining factors such as publication dates, revisions, references, and expert agreement, researchers can ensure that the sources they use contribute to a well-informed, relevant, and current understanding of their topic.

Recognizing Bias and Objectivity

When evaluating sources for credibility, it is essential to recognize bias and objectivity. Bias refers to an inclination or perspective that affects a person’s judgment or evaluation, often leading to distorted or one-sided opinions. On the other hand, objectivity is presenting information in a fair and balanced manner, acknowledging counterarguments and alternative perspectives.

To recognize bias in a source, one must pay close attention to the tone, language, and style of the writing. Biased sources often exhibit strong emotional language, subjective terms, and a lack of evidence to support their claims. Additionally, biased authors may disregard counterarguments or dismiss them without proper consideration.

Objectivity in a source can be identified by the presence of a balanced presentation of information, which acknowledges various perspectives and counterarguments. Objective sources will provide evidence and cite credible sources to support their claims. They maintain a neutral tone and avoid using emotionally charged language.

In order to evaluate sources effectively, one should remain aware of their own biases and assumptions. This can be achieved by considering alternative viewpoints and testing hypotheses against relevant criteria. Applying consistent standards when evaluating sources helps to ensure a fair assessment of the information.

Ultimately, evaluating sources using critical thinking involves recognizing bias and objectivity, while also considering one’s own perspective. By doing so, one can make well-informed decisions based on credible, balanced, and reliable information.

Identifying Potential Errors and Limitations

When evaluating sources using critical thinking, it is crucial to identify potential errors and limitations present in the information provided. This helps ensure the accuracy and reliability of the data. Errors can occur at any point in the research process, while limitations are inherent weaknesses or constraints that affect the findings’ overall validity.

One common error in research is sampling bias, which occurs when the sample is not representative of the whole population. This can lead to skewed conclusions and weaken the overall strength of the research. To avoid this, carefully consider the data collection methods, sample size, and sampling techniques involved in the research.

Another factor to consider is the presence of any logical fallacies or cognitive biases. Errors in reasoning can seriously undermine the credibility of a source. Examples include the ad hominem fallacy, where the focus is on attacking the person rather than the argument, and confirmation bias, where people tend to favor information that confirms their pre-existing beliefs. Be aware of these common fallacies and biases when evaluating arguments.

Furthermore, take note of the methodology used in the research. A study with rigorous methodology, which meticulously controls various factors and variables, has a higher degree of reliability. Conversely, poor methodology can introduce errors and weaken the research’s reliability. When examining the methodology, look for a clear statement of the research question, an explanation of the research design and data collection procedures, and a transparent presentation of results.

Limitations in research are often unavoidable and may affect the study’s generalizability, validity, or reliability. When examining sources, discern the limitations acknowledged by the authors themselves. These may include a small sample size, lack of control over external variables, or potential issues with the data collection method. Identifying these limitations allows for a better understanding of the study’s weaknesses and context.

Lastly, evaluate the relevance of the source to your particular research question or problem. A source may have strong methodology, robust findings, and few errors, but if it does not address the issue you are investigating, it might not contribute meaningfully to your research.

In summary, when using critical thinking to evaluate sources, it is essential to identify potential errors and limitations. This will enable you to assess the credibility, reliability, and overall quality of the information, ensuring that you base your conclusions on sound evidence.

The CRAAP Test for Source Evaluation

The CRAAP Test is a method designed to evaluate sources using critical thinking skills, specifically focusing on five criteria: Currency, Relevance, Authority, Accuracy, and Purpose. By applying these criteria, researchers and readers can determine the credibility of a source and its suitability for their particular needs.

Currency refers to the timeliness of the information. This criterion demands attention to the publication date, any revisions or updates, and whether the source has current or outdated information. Currency is crucial because it ensures that the information used in the research is up-to-date and accurate at the time of use.

Relevance is the degree to which the information relates to the topic being researched. To assess relevance, consider the scope, depth, and target audience of the source. A relevant source should be in alignment with the research question and offer insight or evidence to support the argument being made.

Authority concerns the credibility of the author, organization, or publication responsible for the source. The assessment of authority includes reviewing the author’s credentials, expertise, and affiliation, as well as verifying the reputation of the organization or publication. Established and reputable sources are more likely to produce reliable information.

Accuracy evaluates the validity and reliability of the information in the source. Accuracy can be verified by checking for factual errors, examining the methodology used to collect data, and assessing whether the information is supported by evidence. A source should be free from significant errors and demonstrate that proper research methods were employed.

Purpose is the goal or objective behind the information in the source. Analyzing the purpose helps to identify any potential biases or underlying motives. To evaluate purpose, consider the author’s intent, the target audience, and whether the information is presented objectively or with an ulterior agenda.

Implementing the CRAAP Test as a guideline for source evaluation ensures that the information used in research is credible, relevant, authoritative, accurate, and purposeful. Applying these criteria will strengthen arguments and improve the overall quality of research.
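
For readers who like to make the checklist explicit, the following is a minimal sketch of how the CRAAP criteria could be recorded as a simple scoring aid. It is only an illustration: the Python names, the 0 to 2 rating scale, and the cutoff are assumptions made for this example, not part of the CRAAP Test itself.

    # Illustrative only: a small CRAAP-style checklist scorer.
    # The 0-2 rating scale, the names, and the cutoff used here are
    # assumptions made for this sketch, not a standard implementation.
    from dataclasses import dataclass

    CRITERIA = ("currency", "relevance", "authority", "accuracy", "purpose")

    @dataclass
    class SourceRating:
        title: str
        currency: int   # 0 = outdated, 1 = unclear, 2 = current enough for the topic
        relevance: int  # does it directly address the research question?
        authority: int  # credentials and reputation of the author and publisher
        accuracy: int   # evidence-based, verifiable, free of obvious errors
        purpose: int    # objective rather than promotional or one-sided

        def total(self) -> int:
            return sum(getattr(self, name) for name in CRITERIA)

        def verdict(self, cutoff: int = 7) -> str:
            # A zero on any single criterion deserves a second look,
            # whatever the total score happens to be.
            weak = [name for name in CRITERIA if getattr(self, name) == 0]
            if weak:
                return "re-check: weak on " + ", ".join(weak)
            return "looks usable" if self.total() >= cutoff else "treat with caution"

    # Example: a recent, well-sourced article that is only loosely on-topic.
    article = SourceRating("Peer-reviewed review article", 2, 1, 2, 2, 2)
    print(article.total(), "-", article.verdict())

Writing the criteria down this way simply forces every factor to be considered for every source; the actual evaluation still rests on the reader's critical thinking rather than on the arithmetic.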

Applying Source Evaluation in Different Contexts

Evaluating sources using critical thinking is not limited to academic research; it plays a vital role in various contexts, such as personal life, college, humanities, and organizations. By applying source evaluation, individuals gain the ability to assess the credibility and relevance of information, leading to informed decisions and well-constructed arguments.

In personal life, individuals often encounter various sources of information, such as news articles, social media, and online resources. Evaluating these sources helps in distinguishing reliable information from misinformation or biased perspectives. For instance, when making major life decisions, such as choosing a career or purchasing a house, individuals must critically assess the credibility of financial advice, job market trends, and real estate listings to make informed choices.

Within the college setting, students must develop essential critical thinking skills to evaluate sources for their academic assignments and research projects. Evaluating sources can significantly impact their grades, as well as their ability to develop strong arguments and contribute to scholarly discussions. Students need to employ research methods that prioritize reliability and relevance when selecting sources, such as peer-reviewed articles, authoritative reports, and primary materials related to their areas of study.

In humanities disciplines, scholars often analyze historical documents, pieces of literature, and other cultural artifacts. These fields require nuanced evaluation methods due to the subjective and interpretive nature of the subjects. Critical thinking enables scholars to consider the context in which a source was produced, assess the intentions and biases of the author, and compare sources with different perspectives to form a comprehensive understanding of the subject matter.

Research methods play a crucial role in evaluating sources, especially in academic and professional settings. A systematic approach to source evaluation, such as following established criteria like the CRAAP test (Currency, Relevance, Authority, Accuracy, and Purpose), ensures that the chosen sources align with research objectives and contribute to robust arguments. Additionally, researchers must evaluate the methodologies used in the sources to determine the quality and reliability of the research findings.

In an organization, source evaluation holds significance for decision-making processes, innovation, and strategic planning. Organizations rely on accurate and reliable data to make well-informed decisions that can impact their growth and success. Business professionals must critically analyze sources like industry reports, market trends, financial data, and other relevant materials to develop effective strategies grounded in verifiable information.

In summary, applying source evaluation using critical thinking is crucial in various contexts for making informed decisions, engaging in scholarly discussions, and developing well-grounded arguments. By employing appropriate research methods, evaluating sources based on their credibility and relevance, and considering context-specific nuances, individuals can effectively assess and utilize information in their personal lives, academia, and professional settings.

Consequences of Misleading Information

Misleading information can have significant consequences in various aspects of society. When people encounter false information or propaganda, they may unknowingly make decisions based on inaccurate or incomplete data.

One consequence of misleading information is that it can perpetuate false beliefs and can reinforce an individual’s existing views. As a result, people might take actions based on these beliefs, which could lead to unintended and potentially harmful outcomes. For example, misinformation regarding medical treatments might cause people to ignore scientific advice, leading to negative health consequences.

Misleading information can also distort public discourse and hinder society’s ability to address pressing issues effectively. When misinformation is used to promote a hidden agenda, people may be swayed by emotional appeals or other non-rational factors, which can exacerbate existing divisions and generate conflicts.

In some cases, misleading information can serve as a tool for political manipulation. When political actors use false or distorted information to shape public opinion, they can gain power or maintain control. This can threaten democratic processes, making it more challenging for citizens to hold their representatives accountable.

To mitigate the consequences of misleading information, it is essential to apply critical thinking skills when evaluating sources. By considering factors such as credibility, accuracy, and potential biases, individuals can distinguish between reliable and untrustworthy information, reducing the impact of hidden agendas or false claims.

Evaluating sources is an essential aspect of critical thinking that helps ensure knowledge rests on a secure foundation. Through a rigorous evaluation process, the value of information can be assessed accurately, leading to better decision making and a higher quality of work. By applying the principles of critical thinking, individuals can gain a deeper understanding of the subject matter and develop a well-rounded perspective based on reliable sources.

Various methods can assist in evaluating the credibility of sources, such as the CRAAP test, whose criteria are Currency, Relevance, Authority, Accuracy, and Purpose. By considering each of these factors, individuals can effectively distinguish between trustworthy and unreliable sources, thereby improving their overall knowledge and experience when conducting research.

The importance of consistent evaluation standards cannot be overstated. Recognizing personal biases and assumptions during the process bolsters the value of critical thinking and ensures a more balanced perspective, leading to stronger, more robust arguments. By remaining vigilant and applying these principles consistently, the quality of the information consumed is significantly enhanced.

In conclusion, honing one’s critical thinking skills and evaluating sources rigorously enables individuals to achieve a heightened sense of knowledge and understanding. By carefully considering the quality, relevance, and authority of each source encountered, a more reliable foundation of information is established, leading to better decision making and ultimately enhancing the overall quality of work and personal experience.


These 2 internal biases cause us to fall for misinformation - here's why

Alex Edmans

Everyone is subject to internal biases. But we are not powerless to stop them.

  • Confirmation bias is the temptation to accept evidence uncritically if it confirms what one would like to be true.
  • Black-and-white thinking is another form of bias that entails viewing the world in binary terms.
  • We can overcome these biases by asking simple questions and thinking critically.

“Check the facts.” “Examine the evidence.” “Correlation is not causation.”

We’ve heard these phrases enough times that they should be in our DNA. If they really were, misinformation would never get out of the starting blocks. Yet examples abound of misinformation spreading like wildfire.

This is because our internal, often subconscious, biases cause us to accept incorrect statements at face value. Nobel Laureate Daniel Kahneman refers to our rational, slow thought process — which has mastered the above three phrases — as System 2, and our impulsive, fast thought process — distorted by our biases — as System 1. In the cold light of day, we know that we shouldn’t take claims at face value, but when our System 1 is in overdrive, the red mist of anger clouds our vision.

Confirmation bias

One culprit is confirmation bias – the temptation to accept evidence uncritically if it confirms what we’d like to be true, and to reject a claim out of hand if it clashes with our worldview. Importantly, these biases can be subtle; they’re not limited to topics such as immigration or gun control where emotions run high. It’s widely claimed that breastfeeding increases child IQ, even though the evidence is correlational: parental factors likely drive both. But, because many of us would trust natural breastmilk over the artificial formula of a giant corporation, we lap this claim up.

Confirmation bias is hard to shake. In a study, three neuroscientists took students with liberal political views and hooked them up to a functional magnetic resonance imaging scanner. The researchers read out statements the participants previously said they agreed with, then gave contradictory evidence and measured the students’ brain activity. There was no effect when non-political claims were challenged, but countering political positions triggered their amygdala. That’s the same part of the brain that’s activated when a tiger attacks you, inducing a ‘fight-or-flight’ response. The amygdala drives our System 1, and drowns out the prefrontal cortex, which operates our System 2.

Confirmation bias looms large for issues where we have a pre-existing opinion. But for many topics, we have no prior view. If there’s nothing to confirm, there’s no confirmation bias, so we’d hope we can approach these issues with a clear head.

Black-and-white thinking

Unfortunately, another bias can kick in: black-and-white thinking. This bias means that we view the world in binary terms. Something is either always good or always bad, with no shades of grey.

To pen a bestseller, Atkins didn’t need to be right. He just needed to be extreme.

The bestselling weight-loss book in history, Dr Atkins’ New Diet Revolution, benefited from this bias. Before Atkins, people may not have had strong views on whether carbs were good or bad. But as long as they think it has to be one or the other, with no middle ground, they’ll latch onto a one-way recommendation. That’s what the Atkins diet did. It had one rule: Avoid all carbs. Not just refined sugar, not just simple carbs, but all carbs. You can decide whether to eat something by looking at the “Carbohydrate” line on the nutrition label, without worrying whether the carbs are complex or simple, natural or processed. This simple rule played into black-and-white thinking and made it easy to follow.

Overcoming our biases

So, what do we do about it? The first step is to recognize our own biases. If a statement sparks our emotions and we’re raring to share or trash it, or if it’s extreme and gives a one-size-fits-all prescription, we need to proceed with caution.

The second step is to ask questions, particularly if it’s a claim we’re eager to accept. One is to “consider the opposite”. If a study had reached the opposite conclusion, what holes would you poke in it? Then, ask yourself whether these concerns still apply even though it gives you the results you want.

Take the plethora of studies claiming that sustainability improves company performance. What if a paper had found that sustainability worsens performance? Sustainability supporters would throw up a host of objections. First, how did the researchers actually measure sustainability? Was it a company’s sustainability claims rather than its actual delivery? Second, how large a sample did they analyze? If it was a handful of firms over just one year, the underperformance could be due to randomness; there’s not enough data to draw strong conclusions. Third, is it causation or just correlation? Perhaps high sustainability doesn’t cause low performance, but something else, such as heavy regulation, drives both. Now that you’ve opened your eyes to potential problems, ask yourselves if they plague the study you’re eager to trumpet.

A second question is to “consider the authors”. Think about who wrote the study and what their incentives are to make the claim that they did. Many reports are produced by organizations whose goal is advocacy rather than scientific inquiry. Ask “would the authors have published the paper if it had found the opposite result?” — if not, they may have cherry-picked their data or methodology.

In addition to bias, another key attribute is the authors’ expertise in conducting scientific research. Leading CEOs and investors have substantial experience, and there’s nobody more qualified to write an account of the companies they’ve run or the investments they’ve made. However, some move beyond telling war stories to proclaiming a universal set of rules for success – but without scientific research we don’t know whether these principles work in general. A simple question is “If the same study was written by the same authors, with the same credentials, but found the opposite results, would you still believe it?”

Today, anyone can make a claim, start a conspiracy theory or post a statistic. If people want it to be true it will go viral. But we have the tools to combat it. We know how to show discernment, ask questions and conduct due diligence if we don’t like a finding. The trick is to tame our biases and exercise the same scrutiny when we see something we’re raring to accept.

This article is adapted from May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases – and What We Can Do About It (Penguin Random House, 2024).




COMMENTS

  1. Recognizing Bias: A Problem Solving and Critical Thinking Skills Guide

    Sources of Bias. Recognizing bias is an essential part of problem solving and critical thinking. It is important to be aware of potential sources of bias, such as personal opinions, values, or preconceived notions. Bias can have a profound effect on decisions, leading to outcomes that are not based on facts or evidence.

  2. Cognitive Bias Is the Loose Screw in Critical Thinking

    Learning these biases, and being on the alert for them when you make a decision to accept a belief or opinion, will help you become more effective at critical thinking. Wikipedia lists 197 ...

  3. Critical thinking

    Teaching bias and critical thinking skills. By following this step-by-step process, I believe we can talk about bias with our students and increase the chances of them incorporating critical thinking skills into their lives. 1) Choose a bias. Search for a list of biases and read the basic definitions. 2) Learn about it.

  4. How to Identify Cognitive Bias: 12 Examples of Cognitive Bias

    Identifying the biases you experience and purport in your everyday interactions is the first step to understanding how our mental processes work, which can help us make better, more informed decisions. Cognitive biases are inherent in the way we think, and many of them are unconscious.

  5. 2.2 Overcoming Cognitive Biases and Engaging in Critical ...

    Classify and describe cognitive biases. Apply critical reflection strategies to resist cognitive biases. To resist the potential pitfalls of cognitive biases, we have taken some time to recognize why we fall prey to them. Now we need to understand how to resist easy, automatic, and error-prone thinking in favor of more reflective, critical ...

  6. Bias

    Recognizing and considering the implication of bias in a source is an important component in critical thinking. "Bias is a natural inclination for or against an idea, object, group, or individual. It is often learned and is highly dependent on variables like a person's socioeconomic status, race, ethnicity, educational background, etc.

  7. Bias

    Wittebols (2019) defines it as a "tendency to be psychologically invested in the familiar and what we believe and less receptive to information that contradicts what we believe" (p. 211). Quite simply, we may reject information that doesn't support our existing thinking. This can manifest in a number of ways with Hahn and Harris (2014 ...

  8. LibGuides: Critical Thinking About Sources: Recognizing Bias

    A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world.

  9. Cognitive Biases and Their Influence on Critical Thinking and

    Researchers have discovered 200 cognitive biases that result in inaccurate or irrational judgments and decisions, ranging from actor-observer to zero risk bias.

  10. Contextual Debiasing and Critical Thinking: Reasons for Optimism

    In this article I argue that most biases in argumentation and decision-making can and should be counteracted. Although biases can prove beneficial in certain contexts, I contend that they are generally maladaptive and need correction. Yet critical thinking alone seems insufficient to mitigate biases in everyday contexts. I develop a contextualist approach, according to which cognitive ...

  11. BUS403 (2016.A.01): Recognizing Stereotypes and Bias

    Critical thinking can help us acquire knowledge, improve our theories, and strengthen arguments. We can use critical thinking to enhance work processes and improve social institutions. Good critical thinking might be seen as the foundation of science and liberal democratic society. Science requires the critical use of reason in experimentation ...

  12. Are You Aware of Your Biases?

    by. Carmen Acton. February 04, 2022. Getty Images/Carol Yepes. Summary. Often, it's easy to "call out" people when we notice their microaggressions or biased behaviors. But it can be equally ...

  13. LibGuides: Critical Thinking & Evaluating Information: Bias

    Confirmation Bias - "Originating in the field of psychology; the tendency to seek or favour new information which supports one's existing theories or beliefs, while avoiding or rejecting that which disrupts them." Addition of definition to the Oxford Dictionary in 2019. "confirmation, n." OED Online, Oxford University Press, December 2020 ...

  14. Overcoming Personal Biases to Enhance Critical Thinking

    Rigidity in thought can perpetuate biases, hindering your critical thinking growth. Overcoming personal biases is an ongoing journey that requires self-awareness, empathy, and an open mind. By ...

  15. Bias and Critical Thinking

    Here, we show three different perspective in order to enable a more reflexive understanding of bias. The first is the understanding how different forms of biases relate to design criteria of scientific methods. The second is the question which stage in the application of methods - data gathering, data analysis, and interpretation of results ...

  16. 2.2: Overcoming Cognitive Biases and Engaging in Critical Reflection

    To promote good critical thinking, put yourself in a frame of mind that allows critical reflection. ... Recall from the previous section that rational thinking requires effort and takes longer. However, it will likely result in more accurate thinking and decision-making. ... (1960) conducted early experiments identifying this kind of bias. He ...

  17. What Are Critical Thinking Skills and Why Are They Important?

    Critical thinking skills differ from individual to individual and are utilized in various ways. Examples of common critical thinking skills include: Identification of biases: Identifying biases means knowing there are certain people or things that may have an unfair prejudice or influence on the ...

  18. Leadership Decision-Making: Overcoming Biases with Critical Thinking

    Learn how to recognize and counteract biases in leadership decision-making with critical thinking strategies for fairer outcomes.

  19. 10 Common Biases and How to Overcome Them

    Engage in critical thinking and ask yourself if you may be overlooking some crucial information. Challenge your assumptions and be open to revising your beliefs in light of new evidence. ...

  20. Bridging critical thinking and transformative learning: The role of

    In recent decades, approaches to critical thinking have generally taken a practical turn, pivoting away from more abstract accounts - such as emphasizing the logical relations that hold between statements (Ennis, 1964) - and moving toward an emphasis on belief and action. According to the definition that Robert Ennis (2018) has been advocating for the last few decades, critical thinking is ...

  21. What About Assumptions?

    Assumptions are ideas that are accepted as true without proof or evidence and are used to support reasoning. This lack of verification can create bias when thinking critically. Like any human activity, the practice of critical thinking requires several basic assumptions to make sense. For people who don't share these assumptions ...

  22. Critical Thinking, Intelligence, and Unsubstantiated Beliefs: An

    Critical thinking questions are often situated in real-world examples or in simulations of them, which are designed to detect thinking errors and bias. As described in Halpern and Butler (2018), an item like one on the "Halpern Critical Thinking Assessment" (HCTA) provides respondents with a mock newspaper story about research showing that ...

  23. A Crash Course in Critical Thinking

    Neil Browne, author of the seminal Asking the Right Questions: A Guide to Critical Thinking, has been a pioneer in presenting critical thinking as a question-based approach to making sense of the ...

  24. How to Evaluate Sources Using Critical Thinking: A Concise Guide to

    Recognizing personal biases and assumptions during the process bolsters the value of critical thinking and ensures a more balanced perspective, leading to stronger, more robust arguments. By remaining vigilant and applying these principles consistently, the quality of the information consumed is significantly enhanced.

  25. The Power Of Critical Thinking: Enhancing Decision-Making And ...

    Critical thinking is a powerful cognitive tool that empowers individuals to navigate the complexities of the modern world. ... recognizing biases and considering multiple perspectives. It requires ...

  26. These 2 internal biases cause us to fall for misinformation

    Confirmation bias is the temptation to accept evidence uncritically if it confirms what one would like to be true. Black-and-white thinking is another form of bias that entails viewing the world in binary terms. We can overcome these biases by asking simple questions and thinking critically.