AI Should Augment Human Intelligence, Not Replace It

  • David De Cremer
  • Garry Kasparov

Artificial intelligence isn’t coming for your job, but it will be your new coworker. Here’s how to get along.

Will smart machines really replace human workers? Probably not. People and AI bring different abilities and strengths to the table. The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence? Chess grandmaster Garry Kasparov offers some unique insight here. After losing to IBM’s Deep Blue, he began to experiment with how a computer assistant changed players’ competitive advantage in high-level chess games. What he discovered was that having the best players and the best program was less of a predictor of success than having a really good process. Put simply, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” As leaders look at how to incorporate AI into their organizations, they’ll have to manage expectations as AI is introduced, invest in bringing teams together and perfecting processes, and refine their own leadership abilities.

In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence (AI) at a larger scale will add as much as $15.7 trillion to the global economy by 2030. As AI changes how companies work, many believe that who does this work will change, too — and that organizations will begin to replace human employees with intelligent machines. This is already happening: intelligent systems are displacing humans in manufacturing, service delivery, recruitment, and the financial industry, pushing human workers toward lower-paid jobs or out of work altogether. This trend has led some to conclude that by 2040 our workforce may be totally unrecognizable.

  • David De Cremer is a professor of management and technology at Northeastern University and the Dunton Family Dean of its D’Amore-McKim School of Business. His website is daviddecremer.com.
  • Garry Kasparov is the chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative. He writes and speaks frequently on politics, decision-making, and human-machine collaboration. Kasparov became the youngest world chess champion in history at 22 in 1985 and retained the top rating in the world for 20 years. His famous matches against the IBM supercomputer Deep Blue in 1996 and 1997 were key to bringing artificial intelligence, and chess, into the mainstream. His latest book on artificial intelligence and the future of human-plus-machine collaboration is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (2017).

Human and Artificial Intelligence: A Critical Comparison

  • First Online: 30 June 2022

  • Thomas Fuchs

Advances in artificial intelligence and robotics increasingly call into question the distinction between simulation and reality of the human person. On the one hand, they suggest a computeromorphic understanding of human intelligence; on the other, an anthropomorphization of AI systems. In other words: we increasingly conceive of ourselves in the image of our machines, while conversely we elevate our machines to the status of new subjects. So what distinguishes human intelligence from artificial intelligence? The essay sets out a number of criteria for this distinction.

Abridged version of an essay in the volume: T. Fuchs (2020). Verteidigung des Menschen. Grundfragen einer verkörperten Anthropologie. Frankfurt/M.: Suhrkamp, pp. 21–70.

Author information

Authors and Affiliations

Thomas Fuchs, Department of General Psychiatry, Center for Psychosocial Medicine, Heidelberg University Hospital, Heidelberg, Germany

Corresponding author

Correspondence to Thomas Fuchs.

Editor information

Editors and Affiliations

Rainer M. Holm-Hadulla, University of Heidelberg, Heidelberg, Germany

Joachim Funke, Department of Psychology, University of Heidelberg, Heidelberg, Germany

Michael Wink, Institute of Pharmacy and Molecular Biotechnology, University of Heidelberg, Heidelberg, Germany

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Fuchs, T. (2022). Human and Artificial Intelligence: A Critical Comparison. In: Holm-Hadulla, R.M., Funke, J., Wink, M. (eds) Intelligence - Theories and Applications. Springer, Cham. https://doi.org/10.1007/978-3-031-04198-3_14

DOI: https://doi.org/10.1007/978-3-031-04198-3_14

Published: 30 June 2022

Publisher: Springer, Cham

Print ISBN: 978-3-031-04197-6

Online ISBN: 978-3-031-04198-3

Human- Versus Artificial Intelligence (Conceptual Analysis)

  • TNO Human Factors, Soesterberg, Netherlands

AI is one of today’s most debated subjects, and there seems to be little common understanding of the differences and similarities between human intelligence and artificial intelligence. Discussions of many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions, for instance the pursuit of human-like intelligence as the gold standard for artificial intelligence. To foster more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. The most prominent issue is therefore how we can use (and “collaborate” with) these systems as effectively as possible. For what tasks and under what conditions is it safe to leave decisions to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How should AI systems be deployed to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? To answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. So, to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose a first framework for educational content is proposed.

Introduction: Artificial and Human Intelligence, Worlds of Difference

Artificial General Intelligence at the Human Level

Recent advances in information technology and in AI may allow for more coordination and integration between humans and technology. Quite some attention has therefore been devoted to the development of Human-Aware AI, which aims at AI that adapts as a “team member” to the cognitive possibilities and limitations of its human team members. Metaphors like “mate,” “partner,” “alter ego,” “Intelligent Collaborator,” “buddy,” and “mutual understanding” likewise emphasize a high degree of collaboration, similarity, and equality in “hybrid teams.” When human-aware AI partners are to operate like “human collaborators,” they must be able to sense, understand, and react to a wide range of complex human behavioral qualities, such as attention, motivation, emotion, creativity, planning, and argumentation (e.g. Krämer et al., 2012 ; van den Bosch and Bronkhorst, 2018 ; van den Bosch et al., 2019 ). These “AI partners” or “team mates” therefore have to be endowed with human-like (or humanoid) cognitive abilities enabling mutual understanding and collaboration (i.e. “human awareness”).

However, no matter how intelligent and autonomous AI agents become in certain respects, at least for the foreseeable future they will probably remain unconscious machines or special-purpose devices that support humans in specific, complex tasks. As digital machines they are equipped with a completely different operating system (digital vs biological) and with correspondingly different cognitive qualities and abilities than biological creatures like humans and other animals ( Moravec, 1988 ; Klein et al., 2004 ; Korteling et al., 2018a ; Shneiderman, 2020a ). In general, digital reasoning and problem-solving agents compare only very superficially to their biological counterparts (e.g. Boden, 2017 ; Shneiderman, 2020b ). Keeping that in mind, it becomes more and more important that human professionals working with advanced AI systems (e.g. in military or policy-making teams) develop a proper mental model of the different cognitive capacities of AI systems in relation to human cognition. This issue will become increasingly relevant as AI systems become more advanced and are deployed with higher degrees of autonomy. The present paper therefore tries to provide some more clarity and insight into the fundamental characteristics, differences, and idiosyncrasies of human/biological and artificial/digital intelligences. In the final section, a global framework for constructing educational content on this “Intelligence Awareness” is introduced. This can be used for the development of education and training programs for humans who have to use or “collaborate with” advanced AI systems in the near and far future.

With the application of AI systems with increasing autonomy, more and more researchers consider it necessary to vigorously address the real, complex issues of “human-level intelligence” and, more broadly, artificial general intelligence, or AGI (e.g. Goertzel et al., 2014 ). Many different definitions of A(G)I have already been proposed (see e.g. Russell and Norvig, 2014 for an overview). Many of them boil down to: technology containing or entailing (human-like) intelligence (e.g. Kurzweil, 1990 ). This is problematic. First, most definitions use the term “intelligence” as an essential element of the definition itself, which makes the definition tautological. Second, the idea that A(G)I should be human-like seems unwarranted. At least in natural environments there are many other forms and manifestations of highly complex and intelligent behavior that are very different from specific human cognitive abilities (see Grind, 1997 for an overview). Finally, as is also frequently seen in the field of biology, these A(G)I definitions use human intelligence as a central basis or analogy for reasoning about the—less familiar—phenomenon of A(G)I ( Coley and Tanner, 2012 ). Because of the many differences between the underlying substrate and architecture of biological and artificial intelligence, this anthropocentric way of reasoning is probably unwarranted. For these reasons we propose a (non-anthropocentric) definition of “intelligence” as: “the capacity to realize complex goals” ( Tegmark, 2017 ). These goals may pertain to narrow, restricted tasks (narrow AI) or to broad task domains (AGI). Building on this definition, and on a definition of AGI proposed by Bieger et al. (2014) and one by Grind (1997) , we define AGI here as: “non-biological capacities to autonomously and efficiently achieve complex goals in a wide range of environments.” AGI systems should be able to identify and extract the most important features for their operation and learning process automatically and efficiently over a broad range of tasks and contexts. Relevant AGI research differs from ordinary AI research by addressing the versatility and wholeness of intelligence, and by carrying out the engineering practice according to a system comparable to the human mind in a certain sense ( Bieger et al., 2014 ).

It will be fascinating to create copies of ourselves that can learn iteratively by interacting with partners and thus become able to collaborate on the basis of common goals and mutual understanding and adaptation (e.g. Bradshaw et al., 2012 ; Johnson et al., 2014 ). This would be very useful, for example where a high degree of social intelligence in AI would contribute to more adequate interactions with humans, such as in health care or for entertainment purposes ( Wyrobek et al., 2008 ). True collaboration on the basis of common goals and mutual understanding necessarily implies some form of humanoid general intelligence. For the time being, this remains a goal on a far-off horizon. In the present paper we argue why, for most applications, it may not be very practical or necessary (and probably a bit misleading) to vigorously aim for, or anticipate, systems possessing “human-like” AGI or “human-like” abilities or qualities. The fact that humans possess general intelligence does not imply that new inorganic forms of general intelligence should comply with the criteria of human intelligence. In this connection, the present paper addresses the way we think about (natural and artificial) intelligence in relation to the most probable potentials (and real upcoming issues) of AI in the short- and mid-term future. This will provide food for thought in anticipation of a future that is difficult to predict for a field as dynamic as AI.

What Is “Real Intelligence”?

Implicit in our aspiration to construct AGI systems possessing humanoid intelligence is the premise that human (general) intelligence is the “real” form of intelligence. This is already implicitly articulated in the term “Artificial Intelligence,” as if it were not entirely real, i.e., not real like non-artificial (biological) intelligence. Indeed, as humans we know ourselves as the entities with the highest intelligence ever observed in the universe. And as an extension of this, we like to see ourselves as rational beings who are able to solve a wide range of complex problems under all kinds of circumstances using our experience and intuition, supplemented by the rules of logic, decision analysis, and statistics. It is therefore not surprising that we have some difficulty accepting the idea that we might be a bit less smart than we keep telling ourselves, i.e., “the next insult for humanity” ( van Belkom, 2019 ). This goes so far that the rapid progress in the field of artificial intelligence is accompanied by a recurring redefinition of what should be considered “real (general) intelligence.” The conceptualization of intelligence, that is, the ability to autonomously and efficiently achieve complex goals, is then continuously adjusted and further restricted to “those things that only humans can do.” In line with this, AI is then defined as “the study of how to make computers do things at which, at the moment, people are better” ( Rich and Knight, 1991 ; Rich et al., 2009 ). This includes thinking of creative solutions, flexibly using contextual and background information, the use of intuition and feeling, the ability to really “think and understand,” or the inclusion of emotion in an (ethical) consideration. These are then cited as the specific elements of real intelligence (e.g. Bergstein, 2017 ). For instance, Facebook’s director of AI and a spokesman in the field, Yann LeCun, mentioned at a conference at MIT on the future of work that machines are still far from having “the essence of intelligence.” That includes the ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying this is that machines don’t have common sense ( Bergstein, 2017 ), like submarines that cannot swim ( van Belkom, 2019 ). When exclusively human capacities become our pivotal navigation points on the horizon, we may miss some significant problems that need our attention first.

To make this point clear, we first will provide some insight into the basic nature of both human and artificial intelligence. This is necessary for the substantiation of an adequate awareness of intelligence ( Intelligence Awareness ), and adequate research and education anticipating the development and application of A(G)I. For the time being, this is based on three essential notions that can (and should) be further elaborated in the near future.

• With regard to cognitive tasks, we are probably less smart than we think. So why should we vigorously focus on human-like AGI?

• Many different forms of intelligence are possible and general intelligence is therefore not necessarily the same as humanoid general intelligence (or “AGI on human level”).

• AGI is often not necessary; many complex problems can also be tackled effectively using multiple narrow AIs.

We Are Probably Not as Smart as We Think

How intelligent are we actually? The answer to that question is determined to a large extent by the perspective from which the issue is viewed, and thus by the measures and criteria for intelligence that are chosen. For example, we could compare the nature and capacities of human intelligence with those of other animal species. In that case we appear highly intelligent. Thanks to our enormous learning capacity, we have by far the most extensive arsenal of cognitive abilities to autonomously solve complex problems and achieve complex objectives. This way we can solve a huge variety of arithmetic, conceptual, spatial, economic, socio-organizational, political, etc. problems. The primates—which differ only slightly from us in genetic terms—are far behind us in that respect. We can therefore legitimately qualify humans, compared to the other animal species that we know, as highly intelligent.

Limited Cognitive Capacity

However, we can also look beyond this “relative interspecies perspective” and try to qualify our intelligence in more absolute terms, i.e., on a scale ranging from zero to what is physically possible. For example, we could view the computational capacity of a human brain as a physical system ( Bostrom, 2014 ; Tegmark, 2017 ). The prevailing notion in this respect among AI scientists is that intelligence is ultimately a matter of information and computation, and (thus) not of flesh and blood and carbon atoms. In principle, there is no physical law preventing physical systems (consisting of quarks and atoms, like our brain) from being built with much greater computing power and intelligence than the human brain. This would imply that there is no insurmountable physical reason why machines could not one day become much more intelligent than ourselves in all possible respects ( Tegmark, 2017 ). Our intelligence is therefore relatively high compared to that of other animals, but in absolute terms it may be very limited in its physical computing capacity, if only because of the limited size of our brain and its maximal possible number of neurons and glia cells (e.g. Kahle, 1979 ).

To further define and assess our own (biological) intelligence, we can also consider the evolution and nature of our biological thinking abilities. As a biological neural network of flesh and blood, necessary for survival, our brain has undergone an evolutionary optimization process of more than a billion years. In this extended period, it developed into a highly effective and efficient system for regulating essential biological functions and performing perceptual-motor and pattern-recognition tasks, such as gathering food, fighting and fleeing, and mating. During almost our entire evolution, the neural networks of our brain have been further optimized for these basic biological and perceptual-motor processes, which also lie at the basis of our daily practical skills, like cooking, gardening, or household jobs. Possibly because of the resulting proficiency for these kinds of tasks, we may forget that these processes are characterized by extremely high computational complexity (e.g. Moravec, 1988 ). For example, when we tie our shoelaces, many millions of signals flow in and out through a large number of different sensor systems, from tendon bodies and muscle spindles in our extremities to our retina, otolithic organs, and semicircular canals in the head (e.g. Brodal, 1981 ). This enormous amount of information from many different perceptual-motor systems is processed continuously, in parallel, effortlessly, and even without conscious attention in the neural networks of our brain ( Minsky, 1986 ; Moravec, 1988 ; Grind, 1997 ). In order to achieve this, the brain has a number of universal (inherent) working mechanisms, such as association and associative learning ( Shatz, 1992 ; Bar, 2007 ), potentiation and facilitation ( Katz and Miledi, 1968 ; Bao et al., 1997 ), saturation, and lateral inhibition ( Isaacson and Scanziani, 2011 ; Korteling et al., 2018a ).

These kinds of basic biological and perceptual-motor capacities have been developed and consolidated over many millions of years. Much later in our evolution—actually only very recently—our cognitive abilities and rational functions started to develop. These cognitive abilities, or capacities, are probably less than 100 thousand years old, which may be qualified as “embryonal” on the time scale of evolution (e.g. Petraglia and Korisettar, 1998 ; McBrearty and Brooks, 2000 ; Henshilwood and Marean, 2003 ). In addition, this very thin layer of human achievement has necessarily been built on this “ancient” neural intelligence for essential survival functions. So, our “higher” cognitive capacities have developed from, and with, these (neuro)biological regulation mechanisms ( Damasio, 1994 ; Korteling and Toet, 2020 ). As a result, it should not be a surprise that the capacities of our brain for performing these recent cognitive functions are still rather limited. These limitations are manifested in many different ways, for instance:

‐The amount of cognitive information that we can consciously process (our working-memory span, or attention) is very limited ( Simon, 1955 ). The capacity of our working memory is approximately 10–50 bits per second ( Tegmark, 2017 ). (A rough back-of-the-envelope comparison follows this list.)

‐Most cognitive tasks, like reading text or calculation, require our full attention, and we usually need a lot of time to execute them. Mobile calculators can perform calculations that are millions of times more complex than those we can handle ( Tegmark, 2017 ).

‐Although we can process lots of information in parallel, we cannot simultaneously execute cognitive tasks that require deliberation and attention, i.e., “multi-tasking” ( Korteling, 1994 ; Rogers and Monsell, 1995 ; Rubinstein, Meyer, and Evans, 2001 ).

‐Acquired cognitive knowledge and skills of people (memory) tend to decay over time, much more than perceptual-motor skills. Because of this limited “retention” of information we easily forget substantial portions of what we have learned ( Wingfield and Byrnes, 1981 ).
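
As a rough illustration of the gap these limits imply, the sketch below compares the conscious processing rate quoted above (taking the upper bound of 50 bits per second) with a deliberately modest, hypothetical 1 GHz processor handling 64 bits per cycle. The payload size and hardware figures are illustrative assumptions, not measurements.

```python
# Illustrative arithmetic only; 50 bits/s is the upper bound cited above
# (Tegmark, 2017), the CPU figures are a deliberately modest assumption.

HUMAN_BITS_PER_S = 50                # conscious cognitive throughput (upper bound cited above)
CPU_BITS_PER_S = 1e9 * 64            # hypothetical 1 GHz core moving 64 bits per cycle

payload_bits = 1_000_000 * 8         # one megabyte of information, in bits

human_days = payload_bits / HUMAN_BITS_PER_S / 86_400
cpu_millis = payload_bits / CPU_BITS_PER_S * 1_000

print(f"Human, conscious processing: ~{human_days:.1f} days")   # ~1.9 days
print(f"Modest CPU:                  ~{cpu_millis:.3f} ms")      # ~0.125 ms
```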

Ingrained Cognitive Biases

Our limited processing capacity for cognitive tasks is not the only factor determining our cognitive intelligence. Besides an overall limited processing capacity, human cognitive information processing shows systematic distortions. These are manifested in many cognitive biases ( Tversky and Kahneman, 1973 , Tversky and Kahneman, 1974 ). Cognitive biases are systematic, universally occurring tendencies, inclinations, or dispositions that skew or distort information processes in ways that make their outcome inaccurate, suboptimal, or simply wrong (e.g. Lichtenstein and Slovic, 1971 ; Tversky and Kahneman, 1981 ). Many biases occur in virtually the same way in many different decision situations ( Shafir and LeBoeuf, 2002 ; Kahneman, 2011 ; Toet et al., 2016 ). The literature provides descriptions and demonstrations of over 200 biases. These tendencies are largely implicit and unconscious, and they feel quite natural and self-evident even when we are aware of these cognitive inclinations ( Pronin et al., 2002 ; Risen, 2015 ; Korteling et al., 2018b ). That is why they are often termed “intuitive” ( Kahneman and Klein, 2009 ) or “irrational” ( Shafir and LeBoeuf, 2002 ). Biased reasoning can result in quite acceptable outcomes in natural or everyday situations, especially when the time cost of reasoning is taken into account ( Simon, 1955 ; Gigerenzer and Gaissmaier, 2011 ). However, people often deviate from rationality and/or the tenets of logic, calculation, and probability in inadvisable ways ( Tversky and Kahneman, 1974 ; Shafir and LeBoeuf, 2002 ), leading to suboptimal decisions in terms of invested time and effort (costs) given the available information and expected benefits.

Biases are largely caused by inherent (or structural) characteristics and mechanisms of the brain as a neural network ( Korteling et al., 2018a ; Korteling and Toet, 2020 ). Basically, these mechanisms—such as association, facilitation, adaptation, or lateral inhibition—result in a modification of the original or available data and of its processing (e.g. weighting its importance). For instance, lateral inhibition is a universal neural process resulting in the magnification of differences in neural activity (contrast enhancement), which is very useful for perceptual-motor functions and for maintaining physical integrity and allostasis (i.e. biological survival functions). For these functions our nervous system has been optimized for millions of years. However, “higher” cognitive functions, like conceptual thinking, probability reasoning, or calculation, have developed only very recently in evolution. These functions are probably less than 100 thousand years old and may therefore be qualified as “embryonal” on the time scale of evolution (e.g. McBrearty and Brooks, 2000 ; Henshilwood and Marean, 2003 ; Petraglia and Korisettar, 2003 ). In addition, evolution could not develop these new cognitive functions from scratch, but instead had to build this embryonal and thin layer of human achievement on its “ancient” neural heritage for the essential biological survival functions ( Moravec, 1988 ). Since cognitive functions typically require exact calculation and proper weighting of data, data transformations—like lateral inhibition—may easily lead to systematic distortions (i.e. biases) in cognitive information processing. Examples of the large number of biases caused by the inherent properties of biological neural networks are: the Anchoring bias (biasing decisions toward previously acquired information, Furnham and Boo, 2011 ; Tversky and Kahneman, 1973 , Tversky and Kahneman, 1974 ), the Hindsight bias (the tendency to erroneously perceive events as inevitable or more likely once they have occurred, Hoffrage et al., 2000 ; Roese and Vohs, 2012 ), the Availability bias (judging the frequency, importance, or likelihood of an event by the ease with which relevant instances come to mind, Tversky and Kahneman, 1973 ; Tversky and Kahneman, 1974 ), and the Confirmation bias (the tendency to select, interpret, and remember information in a way that confirms one’s preconceptions, views, and expectations, Nickerson, 1998 ). In addition to these inherent (structural) limitations of (biological) neural networks, biases may also originate from functional evolutionary principles promoting the survival of our ancestors who, as hunter-gatherers, lived in small, close-knit groups ( Haselton et al., 2005 ; Tooby and Cosmides, 2005 ). Cognitive biases can be caused by a mismatch between evolutionarily rationalized “heuristics” (“evolutionary rationality”: Haselton et al., 2009 ) and the current context or environment ( Tooby and Cosmides, 2005 ). In this view, the same heuristics that optimized the chances of survival of our ancestors in their (natural) environment can lead to maladaptive (biased) behavior when they are used in our current (artificial) settings. Biases that have been considered examples of this kind of mismatch are the Action bias (preferring action even when there is no rational justification for it, Baron and Ritov, 2004 ; Patt and Zeckhauser, 2000 ), Social proof (the tendency to mirror or copy the actions and opinions of others, Cialdini, 1984 ), the Tragedy of the commons (prioritizing personal interests over the common good of the community, Hardin, 1968 ), and the Ingroup bias (favoring one’s own group above that of others, Taylor and Doria, 1981 ).
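
The following minimal numerical sketch illustrates the lateral-inhibition mechanism described above: each unit’s output is reduced by a fraction of its neighbours’ activity, which exaggerates contrast at a boundary while distorting the original magnitudes. The coefficient and input values are arbitrary illustrative choices, not a model of any specific neural circuit.

```python
# Minimal sketch of lateral inhibition (contrast enhancement); the inhibition
# coefficient and the input values are arbitrary illustrative numbers.

def lateral_inhibition(activity, inhibition=0.4):
    """Each unit's output = its own input minus a fraction of its neighbours' inputs."""
    out = []
    last = len(activity) - 1
    for i, a in enumerate(activity):
        left = activity[i - 1] if i > 0 else activity[0]        # replicate edges
        right = activity[i + 1] if i < last else activity[last]
        out.append(round(a - inhibition * (left + right), 2))
    return out

signal = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]   # a simple step in input activity
print(lateral_inhibition(signal))
# -> [0.2, 0.2, -0.2, 0.8, 0.4, 0.4]
# The step edge is exaggerated (dip to -0.2, peak at 0.8), which is useful for
# detecting contours, but the original levels (1.0 and 2.0) are no longer
# represented faithfully -- the kind of distortion that is harmless in
# perception yet problematic when data must be weighted precisely.
```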

This hard-wired (neurally inherent and/or evolutionarily ingrained) character of biased thinking makes it unlikely that simple and straightforward methods like training interventions or awareness courses will be very effective in ameliorating biases. This difficulty of bias mitigation indeed seems supported by the literature ( Korteling et al., 2021 ).

General Intelligence Is Not the Same as Human-like Intelligence

Fundamental Differences Between Biological and Artificial Intelligence

We often think and deliberate about intelligence with an anthropocentric conception of our own intelligence in mind as an obvious and unambiguous reference. We tend to use this conception as a basis for reasoning about other, less familiar phenomena of intelligence, such as other forms of biological and artificial intelligence ( Coley and Tanner, 2012 ). This may lead to fascinating questions and ideas. An example is the discussion about how and when the point of “intelligence at human level” will be reached. For instance, Ackermann (2018) writes: “Before reaching superintelligence, general AI means that a machine will have the same cognitive capabilities as a human being.” So, researchers deliberate extensively about the point in time when we will reach general AI (e.g. Goertzel, 2007 ; Müller and Bostrom, 2016 ). We suppose that these kinds of questions are not quite on target. There are (in principle) many different possible types of (general) intelligence conceivable, of which human-like intelligence is just one. This means, for example, that the development of AI is determined by the constraints of physics and technology, and not by those of biological evolution. So, just as the intelligence of a hypothetical extraterrestrial visitor to our planet is likely to have a different (in)organic structure, with different characteristics, strengths, and weaknesses, than that of the human residents, this will also apply to artificial forms of (general) intelligence. Below we briefly summarize a few fundamental differences between human and artificial intelligence ( Bostrom, 2014 ):

‐Basic structure: Biological (carbon) intelligence is based on neural “wetware,” which is fundamentally different from artificial (silicon-based) intelligence. As opposed to biological wetware, in silicon, or digital, systems “hardware” and “software” are independent of each other ( Kosslyn and Koenig, 1992 ). When a biological system has learned a new skill, this skill remains bound to the system itself. In contrast, if an AI system has learned a certain skill, the constituting algorithms can be directly copied to all other similar digital systems (see the sketch after this list).

‐Speed: Signals from AI systems propagate at almost the speed of light. In humans, the conduction velocity of nerves is at most about 120 m/s, which is extremely slow on the time scale of computers ( Siegel and Sapru, 2005 ).

‐Connectivity and communication: People cannot communicate with each other directly; they communicate via language and gestures, with limited bandwidth. This is slower and more difficult than the communication of AI systems, which can be connected to each other directly. Thanks to this direct connection, they can also collaborate on the basis of integrated algorithms.

‐Updatability and scalability: AI systems have almost no constraints with regard to keeping them up to date or to upscaling and/or re-configuring them, so that they have the right algorithms and the data processing and storage capacities necessary for the tasks they have to carry out. This capacity for rapid, structural expansion and immediate improvement hardly applies to people.

‐Energy consumption: In contrast, biology does a lot with a little: organic brains are millions of times more efficient in energy consumption than computers. The human brain consumes less energy than a lightbulb, whereas a supercomputer with comparable computational performance uses enough electricity to power a small village ( Fischetti, 2011 ).
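
As a concrete, deliberately toy illustration of the copyability point in the first item above: once a digital agent’s learned state is just data, transferring a “skill” to any number of identical agents is a plain copy operation. The class, the parameter values, and the fleet size below are hypothetical.

```python
# Toy sketch: learned state in a digital system is data and can be copied verbatim,
# unlike a biological skill, which stays bound to the brain that acquired it.
import copy

class TinyAgent:
    def __init__(self):
        self.weights = [0.0, 0.0, 0.0]       # stand-in for learned parameters

    def learn(self):
        # placeholder "training"; imagine this took days of experience
        self.weights = [0.42, -1.3, 0.07]

veteran = TinyAgent()
veteran.learn()                               # the expensive step, done once

fleet = [TinyAgent() for _ in range(1000)]    # a thousand identical agents
for novice in fleet:
    novice.weights = copy.deepcopy(veteran.weights)   # instant "skill transfer"

print(fleet[-1].weights)                      # -> [0.42, -1.3, 0.07]
```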

These kinds of differences in basic structure, speed, connectivity, updatability, scalability, and energy consumption will necessarily also lead to different qualities and limitations of human and artificial intelligence. Our response speed to simple stimuli is, for example, many thousands of times slower than that of artificial systems. Computer systems can very easily be connected directly to each other and as such can be part of one integrated system. This means that AI systems do not have to be seen as individual entities that can easily work alongside each other or have mutual misunderstandings. And if two AI systems are engaged in a task, they run minimal risk of making a mistake because of miscommunication (think of autonomous vehicles approaching a crossroads). After all, they are intrinsically connected parts of the same system and the same algorithm ( Gerla et al., 2014 ).

Complexity and Moravec’s Paradox

Because biological, carbon-based brains and digital, silicon-based computers are optimized for completely different kinds of tasks (e.g. Moravec, 1988 ; Korteling et al., 2018b ), human and artificial intelligence show fundamental, and probably far-reaching, differences. Because of these differences it may be very misleading to use our own mind as a basis, model, or analogy for reasoning about AI. This may lead to erroneous conceptions, for example about the presumed abilities of humans and AI to perform complex tasks. Resulting flaws concerning information-processing capacities often emerge in the psychological literature, in which “complexity” and “difficulty” of tasks are used interchangeably (see for examples: Wood et al., 1987 ; McDowd and Craik, 1988 ). Task complexity is then assessed in an anthropocentric way, that is, by the degree to which we humans can perform or master it. So, we use the difficulty of performing or mastering a task as a measure of its complexity, and task performance (speed, errors) as a measure of the skill and intelligence of the task performer. Although this may sometimes be acceptable in psychological research, it can be misleading if we strive to understand the intelligence of AI systems. For us it is much more difficult to multiply two random six-digit numbers than to recognize a friend in a photograph. But when it comes to counting or arithmetic operations, computers are thousands of times faster and better, while the same systems have only recently taken steps in image recognition (which only succeeded when deep learning technology, based on some principles of biological neural networks, was developed). In general: cognitive tasks that are relatively difficult for the human brain (and which we therefore find subjectively difficult) do not have to be computationally complex (e.g., in terms of objective arithmetic, logic, and abstract operations). And vice versa: tasks that are relatively easy for the brain (recognizing patterns, perceptual-motor tasks, well-trained tasks) do not have to be computationally simple. This phenomenon—that what is easy for the ancient, neural “technology” of people is difficult for the modern, digital technology of computers (and vice versa)—has been termed Moravec’s paradox. Hans Moravec (1988) wrote: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
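
A small numerical sketch of this asymmetry, counting elementary operations rather than benchmarking anything real: multiplying two six-digit numbers is a single machine operation, while even a toy version of “recognising a friend in a photograph” (naively sliding a 16x16 template over a 64x64 image) already costs hundreds of thousands of multiply-adds. All sizes are illustrative assumptions.

```python
# Toy operation counts, illustrating Moravec's paradox; sizes are arbitrary.

# 1) "Hard for humans": multiplying two six-digit numbers -- one machine operation.
product = 348_201 * 790_417

# 2) "Easy for humans": naive template matching, a toy stand-in for recognising
#    a face. A 16x16 template slid over every position of a 64x64 image costs
#    (64 - 16 + 1)**2 positions * 16*16 multiply-adds per position.
ops_per_position = 16 * 16
positions = (64 - 16 + 1) ** 2
total_ops = positions * ops_per_position

print(product)      # trivial for silicon, laborious for us
print(total_ops)    # 614656 operations for even this minimal "recognition" task
```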

Human Superior Perceptual-Motor Intelligence

Moravec’s paradox implies that biological neural networks are intelligent in different ways than artificial neural networks. Intelligence is not limited to the problems or goals that we as humans, equipped with biological intelligence, find difficult ( Grind, 1997 ). Intelligence, defined as the ability to realize complex goals or solve complex problems, is much more than that. According to Moravec (1988) , high-level reasoning requires very little computation, whereas low-level perceptual-motor skills require enormous computational resources. If we express the complexity of a problem in terms of the number of elementary calculations needed to solve it, then our biological perceptual-motor intelligence is highly superior to our cognitive intelligence. Our organic perceptual-motor intelligence is especially good at associative processing of higher-order invariants (“patterns”) in the ambient information. These are computationally more complex and contain more information than the simple, individual elements ( Gibson, 1966 , Gibson, 1979 ). An example of our superior perceptual-motor abilities is the Object superiority effect: we perceive and interpret whole objects faster and more effectively than the (simpler) individual elements that make up these objects ( Weisstein and Harris, 1974 ; McClelland, 1978 ; Williams and Weisstein, 1978 ; Pomerantz, 1981 ). Likewise, letters are perceived more accurately when presented as part of a word than when presented in isolation, i.e. the Word superiority effect (e.g. Reicher, 1969 ; Wheeler, 1970 ). So, the difficulty of a task does not necessarily indicate its inherent complexity. As Moravec (1988) puts it: “We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

The Supposition of Human-like AGI

So, if there were AI systems with general intelligence that could be used for a wide range of complex problems and objectives, those AGI machines would probably have a completely different intelligence profile, including other cognitive qualities, than humans have ( Goertzel, 2007 ). This will be so even if we manage to construct AI agents who display behavior similar to ours and who are enabled to adapt to our way of thinking and problem-solving in order to promote human-AI teaming. Unless we decide to deliberately degrade the capabilities of AI systems (which would not be very smart), the underlying capacities and abilities of humans and machines with regard to the collection and processing of information, data analysis, probability reasoning, logic, memory capacity, etc., will remain dissimilar. Because of these differences we should focus on systems that effectively complement us and that make the human-AI system stronger and more effective. Instead of pursuing human-level AI, it would be more beneficial to focus on autonomous machines and (support) systems that fill in, or extend, the manifold gaps of human cognitive intelligence. For instance, whereas people are forced—by the slowness and other limitations of biological brains—to think heuristically in terms of goals, virtues, rules, and norms expressed in (fuzzy) language, AI has already established excellent capacities to process and calculate directly on highly complex data. Therefore, for the execution of specific (narrow) cognitive tasks (logical, analytical, computational), modern digital intelligence may be more effective and efficient than biological intelligence. AI may thus help to produce better answers to complex problems using large amounts of data, consistent sets of ethical principles and goals, and probabilistic and logic reasoning (e.g. Korteling et al., 2018b ). Therefore, we conjecture that ultimately the development of AI systems for supporting human decision making may prove the most effective route to making better choices or developing better solutions for complex issues. So, the cooperation and division of tasks between people and AI systems will have to be determined primarily by their mutually specific qualities. For example, tasks or task components that appeal to capacities in which AI systems excel will have to be less (or less fully) mastered by people, so that less training will probably be required. AI systems are already much better than people at logically and arithmetically correct gathering (selecting) and processing (weighing, prioritizing, analyzing, combining) of large amounts of data. They do this quickly, accurately, and reliably. They are also more stable (consistent) than humans, have no stress and emotions, and have great perseverance and a much better retention of knowledge and skills than people. As machines, they serve people completely and without any “self-interest” or “own hidden agenda.” Based on these qualities, AI systems may effectively take over tasks, or task components, from people. However, it remains important that people continue to master those tasks to a certain extent, so that they can take over or intervene adequately if the machine system fails.

In general, people are better suited than AI systems for a much broader spectrum of cognitive and social tasks under a wide variety of (unforeseen) circumstances and events ( Korteling et al., 2018b ). For the time being, people are also better at social and psychosocial interaction. For example, it is difficult for AI systems to interpret human language and symbolism. This requires a very extensive frame of reference, which, at least until now and for the near future, is difficult to achieve within AI. As a result of all these differences, people are still better at responding (as a flexible team) to unexpected and unpredictable situations and at creatively devising possibilities and solutions in open and ill-defined tasks, across a wide range of different, and possibly unexpected, circumstances. People will have to make extra use of their specific human qualities (i.e. what people are relatively good at) and train to improve the relevant competencies. In addition, human team members will have to learn to deal well with the overall limitations of AIs. With such a proper division of tasks, capitalizing on the specific qualities and limitations of humans and AI systems, human decisional biases may be circumvented and better performance may be expected. This means that enhancing a team with intelligent machines that have fewer cognitive constraints and biases may have more surplus value than striving for collaboration between humans and AI that have developed the same (human) biases. Although cooperation in teams with AI systems may require extra training in order to deal effectively with this bias mismatch, this heterogeneity will probably be better and safer. It also opens up the possibility of combining high levels of meaningful human control AND high levels of automation, which is likely to produce the most effective and safe human-AI systems ( Elands et al., 2019 ; Shneiderman, 2020a ). In brief: human intelligence is not the gold standard for general intelligence; instead of aiming at human-like AGI, the pursuit of AGI should thus focus on effective digital/silicon AGI in conjunction with an optimal configuration and allocation of tasks.

Explainability and Trust

Developments in artificial learning, in particular deep (reinforcement) learning, have been revolutionary. Deep learning simulates a network resembling the layered neural networks of our brain. Based on large quantities of data, the network learns to recognize patterns and links to a high level of accuracy and then connects them to courses of action, without knowing the underlying causal links. This implies that it is difficult to provide deep learning AI with some kind of transparency in how or why it has made a particular choice, for example by expressing reasoning about its decision process that is intelligible to humans, as we do (e.g. Belkom, 2019 ). In addition, reasoning about decisions the way humans do is a very malleable and ad hoc process (at least in humans). Humans are generally unaware of their implicit cognitions or attitudes, and are therefore not able to adequately report on them. It is thus rather difficult for most humans to introspectively analyze their mental states, as far as these are conscious, and attach the results of this analysis to verbal labels and descriptions (e.g. Nosek et al., 2011 ). The human brain hardly reveals how it creates conscious thoughts (e.g. Feldman-Barret, 2017 ). What it actually does is give us the illusion that its products reveal its inner workings. In other words: our conscious thoughts tell us nothing about the way in which these thoughts came about. There is also no subjective marker that distinguishes correct reasoning processes from erroneous ones ( Kahneman and Klein, 2009 ). The decision maker therefore has no way to distinguish between correct thoughts, emanating from genuine knowledge and expertise, and incorrect ones following from inappropriate neuro-evolutionary processes, tendencies, and primal intuitions. So here we could ask the question: isn’t it more trustworthy to have a real black box than to listen to a confabulating one? In addition, according to Werkhoven et al. (2018) , demanding explainability, observability, or transparency ( Belkom, 2019 ; van den Bosch et al., 2019 ) may constrain the potential benefit of artificially intelligent systems for human society to what can be understood by humans.
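
To make the “black box” point concrete, the sketch below runs a single prediction through a tiny two-layer network with made-up weights: the output is nothing but arithmetic over learned numbers, and no human-readable rationale is attached to it. The architecture and values are purely illustrative, not a description of any deployed system.

```python
# Minimal sketch: a trained network's answer is arithmetic over learned weights.
# All weights below are made up for illustration.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W1 = [[0.8, -1.2, 0.3],      # 2 inputs -> 3 hidden units
      [0.1,  0.9, -0.7]]
W2 = [1.5, -0.4, 0.9]        # 3 hidden units -> 1 output

def predict(x):
    hidden = [sigmoid(sum(xi * W1[i][j] for i, xi in enumerate(x))) for j in range(3)]
    return sigmoid(sum(h * w for h, w in zip(hidden, W2)))

print(predict([0.2, 0.7]))   # a number comes out, but no reason comes with it;
                             # any verbal "explanation" must be constructed after
                             # the fact, much like the confabulated self-reports
                             # discussed above.
```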

Of course we should not blindly trust the results generated by AI. As in other fields of complex technology (e.g. Modeling & Simulation), AI systems need to be verified (meeting specifications) and validated (meeting the systems’ goals) with regard to the objectives for which the system was designed. In general, when a system is properly verified and validated, it may be considered safe, secure, and fit for purpose. It then deserves our trust for (logically) comprehensible and objective reasons (although mistakes can still happen). Likewise, people trust in the performance of airplanes and cell phones even though we are almost completely ignorant of their complex inner processes. Like our own brains, artificial neural networks are fundamentally non-transparent ( Nosek et al., 2011 ; Feldman-Barret, 2017 ). Therefore, trust in AI should be based primarily on its objective performance. This forms a more important basis than trust built on subjective (and easily manipulated) impressions, stories, or images aimed at belief and appeal to the user. Based on empirical validation research, developers and users can explicitly verify how well the system is doing with respect to the set of values and goals for which the machine was designed. At some point, humans may want to trust that goals can be achieved at lower cost and with better outcomes when we accept solutions even if they are less transparent to humans ( Werkhoven et al., 2018 ).
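
A minimal sketch of trust grounded in empirical validation rather than explanation, along the lines argued above: the system is accepted only if its measured performance on held-out cases meets a pre-agreed target. The function name, threshold, and toy model are all illustrative assumptions.

```python
# Minimal sketch of "trust via validation": accept a model only if its measured
# performance on held-out cases meets a pre-agreed target. Everything here is a
# toy stand-in; the threshold and test cases are illustrative.

def validate(model, test_cases, required_accuracy=0.95):
    correct = sum(1 for inputs, expected in test_cases if model(inputs) == expected)
    accuracy = correct / len(test_cases)
    return accuracy >= required_accuracy, accuracy

model = lambda x: x > 0.5                      # toy "classifier"
held_out = [(0.9, True), (0.1, False), (0.7, True), (0.4, False)]

accepted, accuracy = validate(model, held_out)
print(f"accuracy={accuracy:.2f}, accepted={accepted}")   # accuracy=1.00, accepted=True
```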

The Impact of Multiple Narrow AI Technology

AGI as the Holy Grail

AGI, like human general intelligence, would have many obvious advantages compared to narrow (limited, weak, specialized) AI. An AGI system would be much more flexible and adaptive. On the basis of generic training and reasoning processes it would understand autonomously how multiple problems in all kinds of different domains can be solved in relation to their context (e.g. Kurzweil, 2005 ). AGI systems also require far fewer human interventions to accommodate the various loose ends among partial elements, facets, and perspectives in complex situations. AGI would really understand problems and would be capable of viewing them from different perspectives (as people—ideally—also can). A characteristic of the current (narrow) AI tools is that they are skilled at a very specific task, at which they can often perform at superhuman levels (e.g. Goertzel, 2007 ; Silver et al., 2017 ). These specific tasks have been well defined and structured. Narrow AI systems are less suitable, or totally unsuitable, for tasks or task environments that offer little structure, consistency, rules, or guidance, and in which all sorts of unexpected, rare, or uncommon events (e.g. emergencies) may occur. Knowing and following fixed procedures usually does not lead to proper solutions in these varying circumstances. In the context of (unforeseen) changes in goals or circumstances, the adequacy of current AI is considerably reduced because it cannot reason from a general perspective and adapt accordingly ( Lake et al., 2017 ; Horowitz, 2018 ). As with narrow AI systems, people are then needed to supervise these deviations in order to enable flexible and adaptive system performance. The quest for AGI may therefore be considered a search for a kind of holy grail.

Multiple Narrow AI is Most Relevant Now!

The potentially high prospects of AGI, however, do not imply that AGI will be the most crucial factor in future AI R&D, at least in the short and medium term. When reflecting on the great potential benefits of general intelligence, we tend to consider narrow AI applications as separate entities that can very well be outperformed by a broader AGI that presumably can deal with everything. But just as our modern world has evolved rapidly through a diversity of specific (limited) technological innovations, at the system level the total and wide range of emerging AI applications will also have a groundbreaking technological and societal impact ( Peeters et al., 2020 ). This will be all the more relevant in the future world of big data, in which everything is connected to everything through the Internet of Things. So, it will be much more profitable and beneficial to develop and build (non-human-like) AI variants that excel in areas where people are inherently limited. It seems not too far-fetched to suppose that the multiple variants of narrow AI applications will also gradually become more broadly interconnected. In this way, a development toward an ever broader realm of integrated AI applications may be expected. In addition, it is already possible to train a language-model AI (Generative Pre-trained Transformer 3, GPT-3) on a gigantic dataset and then have it learn various tasks from a handful of examples—one- or few-shot learning. GPT-3 (developed by OpenAI) can do this with language-related tasks, but there is no reason why this should not be possible with image and sound, or with combinations of these three ( Brown, 2020 ).
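
The sketch below illustrates the one-/few-shot idea mentioned above: the task is specified entirely inside the prompt through a couple of examples, and a large language model is asked to continue the pattern. The `generate` function is a hypothetical stand-in for whatever text-completion backend is used; no specific API is implied, and the expected continuation is only what such a model would plausibly produce.

```python
# Sketch of few-shot prompting: the examples inside the prompt define the task.
few_shot_prompt = """Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a text-completion model; plug in a real backend."""
    raise NotImplementedError

# completion = generate(few_shot_prompt)   # a capable model would plausibly answer "fromage"
```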

Besides, Moravec’s paradox implies that the development of AI “partners” with many kinds of human(-level) qualities will be very difficult to achieve, whereas their added value (i.e. beyond the boundaries of human capabilities) will be relatively low. The most fruitful AI applications will mainly involve supplementing human constraints and limitations. Given the present incentives for competitive technological progress, multiple forms of (connected) narrow AI systems will be the major driver of AI’s impact on our society in the short and medium term. For the near future, this may imply that AI applications will remain very different from, and in many respects almost incomparable with, human agents. This is likely to be true even if the hypothetical match of artificial general intelligence (AGI) with human cognition were to be achieved in the longer term. Intelligence is a multi-dimensional (quantitative and qualitative) concept. All dimensions of AI unfold and grow along their own paths with their own dynamics. Therefore, over time an increasing number of specific (narrow) AI capacities may gradually match, overtake, and transcend human cognitive capacities. Given the enormous advantages of AI, for example in the field of data availability and data processing capacities, the realization of AGI would probably at the same time outclass human intelligence in many ways. This implies that the hypothetical point in time at which human and artificial cognitive capacities match, i.e. human-level AGI, will probably be hard to define in a meaningful way ( Goertzel, 2007 ).

So by the time AI truly understands us as a "friend," "partner," "alter ego," or "buddy," the way humans understand each other when they collaborate, it will at the same time surpass us in many areas (Moravec, 1998). It will have a completely different profile of capacities and abilities, and thus it will not be easy to really understand the way it "thinks" and comes to its decisions. In the meantime, however, as the capacities of robots expand and they move from simple tools to more integrated systems, it is important to calibrate our expectations and perceptions of robots appropriately. We will have to enhance our awareness of, and insight into, the continuous development and progression of multiple forms of (integrated) AI systems. This concerns, for example, the multi-faceted nature of intelligence: different kinds of agents may have different combinations of intelligences at very different levels. An agent with general intelligence may, for example, be endowed with excellent abilities in the areas of image recognition, navigation, calculation, and logical reasoning, while at the same time being dull in the areas of social interaction and goal-oriented problem solving. This awareness of the multi-dimensional nature of intelligence also concerns the way we have to deal with (and capitalize on) anthropomorphism, the human tendency in human-robot interaction to ascribe human-like traits, emotions, and intentions to non-human artifacts that superficially look similar to us (e.g., Kiesler and Hinds, 2004; Fink, 2012; Haring et al., 2018). Insight into these human factors issues is crucial to optimize the utility, performance, and safety of human-AI systems (Peeters et al., 2020).

From this perspective, whether or not "AGI at the human level" will be realized is not the most relevant question for the time being. According to most AI scientists, this will certainly happen; the key question is not if, but when (e.g., Müller and Bostrom, 2016). At a system level, however, multiple narrow AI applications are likely to overtake human intelligence in an increasingly wide range of areas.

Conclusions and Framework

The present paper focused on providing some more clarity and insight into the fundamental characteristics, differences, and idiosyncrasies of human and artificial intelligences. First, we presented ideas and arguments to scale up and differentiate our conception of intelligence, whether human or artificial. Central to this broader, multi-faceted conception of intelligence is the notion that intelligence in itself is a matter of information and computation, independent of its physical substrate. However, the nature of this physical substrate (biological/carbon or digital/silicon) will substantially determine its potential envelope of cognitive abilities and limitations. The organic cognitive faculties of humans developed only very recently in the evolution of mankind. These "embryonal" faculties have been built on top of a biological neural network apparatus that has been optimized for allostasis and (complex) perceptual-motor functions. Human cognition is therefore characterized by various structural limitations and distortions in its capacity to process certain forms of non-biological information. Biological neural networks are, for example, not very capable of performing arithmetic calculations, for which a simple pocket calculator is millions of times better suited. These inherent and ingrained limitations, which are due to the biological and evolutionary origin of human intelligence, may be termed "hard-wired."

In line with Moravec's paradox, we argued that intelligent behavior is more than what we, as Homo sapiens, happen to find difficult. So we should not confuse task difficulty (subjective, anthropocentric) with task complexity (objective). Instead, we advocated a versatile conceptualization of intelligence and an acknowledgment of its many possible forms and compositions. This implies a high variety of types of biological or other forms of high (general) intelligence, with a broad range of possible intelligence profiles and cognitive qualities (which may or may not surpass ours in many ways). This would make us better aware of the most probable potential of AI applications for the short- and medium-term future. From this perspective, our primary research focus should be on those components of the intelligence spectrum that are relatively difficult for the human brain and relatively easy for machines. This primarily involves the cognitive components requiring calculation, arithmetic analysis, statistics, probability calculation, data analysis, logical reasoning, and memorization.
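To illustrate the kind of task that is near-effortless for a machine yet practically impossible for an unaided human, the snippet below computes exact summary statistics over a million simulated observations. It is a minimal sketch using only the Python standard library; the data, parameters, and threshold are invented purely for illustration.

```python
# Machine-easy, human-hard: exact statistics over a million values in moments.
import random
import statistics

random.seed(42)
observations = [random.gauss(mu=100.0, sigma=15.0) for _ in range(1_000_000)]

mean = statistics.fmean(observations)            # average of one million values
stdev = statistics.pstdev(observations)          # population standard deviation
share_above_130 = sum(v > 130 for v in observations) / len(observations)

print(f"mean={mean:.2f}, stdev={stdev:.2f}, share above 130 ~ {share_above_130:.4f}")
```

A computation like this, trivial for any commodity computer, would take a person with pencil and paper weeks of error-prone work, which is exactly the asymmetry described above.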

In line with this, we have advocated a more modest, humble view of our human, general intelligence, which also implies that human-level AGI should not be considered the "gold standard" of intelligence, to be pursued with foremost priority. Because of the many fundamental differences between natural and artificial intelligences, human-like AGI will be very difficult to accomplish in the first place, and its added value would be relatively limited. If an AGI is accomplished in the (far) future, it will therefore probably have a completely different profile of cognitive capacities and abilities than we humans have. By the time such an AGI has come so far that it is able to "collaborate" like a human, it is likely that it will in many respects already function at levels highly superior to ours. For the time being, however, it is neither realistic nor particularly useful to aim at AGI that includes the broad scope of human perceptual-motor and cognitive abilities. Instead, the most profitable AI applications for the short- and medium-term future will probably be based on multiple narrow AI systems. These multiple narrow AI applications may catch up with human intelligence in an increasingly broader range of areas.

From this point of view, we advocate not dwelling too intensively on the AGI question of whether or when AI will outsmart us or take our jobs, or on how to endow it with all kinds of human abilities. Given the present state of the art, it may be wiser to focus on the whole system of multiple AI innovations with humans as the crucial connecting and supervising factor. This also implies the establishment and formalization of legal boundaries and proper (effective, ethical, safe) goals for AI systems (Elands et al., 2019; Aliman, 2020). This human factor (legislator, user, "collaborator") needs good insight into the characteristics and capacities of biological and artificial intelligence under all sorts of tasks and working conditions. Both in the workplace and in policy making, the most fruitful AI applications will complement and compensate for the inherent biological and cognitive constraints of humans. The prominent issues therefore concern how to use AI intelligently: for what tasks, and under what conditions, is it safe to leave decisions to AI, and when is human judgment required? How can we capitalize on the strengths of human intelligence, and how can we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition? See Hoffman and Johnson (2019) and Shneiderman (2020a, 2020b) for recent overviews.

In summary: no matter how intelligent autonomous AI agents become in certain respects, at least for the foreseeable future they will remain unconscious machines. These machines have a fundamentally different operating system (digital rather than biological), and correspondingly different cognitive abilities and qualities, than people and other animals. So, before a proper "team collaboration" can start, the human team members will have to understand these kinds of differences, i.e., how human information processing and intelligence differ from those of the many possible and specific variants of AI systems. Only when humans develop a proper understanding of these "interspecies" differences can they effectively capitalize on the potential benefits of AI in (future) human-AI teams. Given the high flexibility, versatility, and adaptability of humans relative to AI systems, the first challenge then becomes how to ensure human adaptation to the more rigid abilities of AI. 4 In other words: how can we achieve a proper conception of the differences between human and artificial intelligence?

Framework for Intelligence Awareness Training

For this question, the issue of Intelligence Awareness in human professionals needs to be addressed more vigorously. Next to computer tools for the distribution of relevant awareness information (Collazos et al., 2019) in human-machine systems, this requires better education and training on how to deal with the very new and different characteristics, idiosyncrasies, and capacities of AI systems. This includes, for example, a proper understanding of the basic characteristics, possibilities, and limitations of the AI's cognitive system properties without anthropocentric and/or anthropomorphic misconceptions. In general, this "Intelligence Awareness" is highly relevant in order to better understand, investigate, and deal with the manifold possibilities and challenges of machine intelligence. This practical human-factors challenge could, for instance, be tackled by developing new, targeted, and easily configurable (adaptive) training forms and learning environments for human-AI systems. These flexible training forms and environments (e.g., simulations and games) should focus on developing knowledge, insight, and practical skills concerning the specific, non-human characteristics, abilities, and limitations of AI systems and how to deal with these in practical situations. People will have to understand the critical factors determining the goals, performance, and choices of AI. This may in some cases even include the simple notion that an AI is about as excited about achieving its goals as your refrigerator is about keeping your milkshake cold. They have to learn when and under what conditions decisions are safe to leave to AI and when human judgment is required or essential. And, more generally: how does it "think" and decide? The relevance of this kind of knowledge, skill, and practice will only grow as the degree of autonomy (and genericity) of advanced AI systems increases.

What does such an Intelligence Awareness training curriculum look like? It needs to include at least a module on the cognitive characteristics of AI, basically a subject similar to those included in curricula on human cognition. This broad module on the "cognitive science of AI" may involve a range of sub-topics, starting with a revision of the concept of "intelligence," stripped of anthropocentric and anthropomorphic misunderstandings. In addition, this module should focus on providing knowledge about the structure and operation of the AI operating system, the "AI mind." This may be followed by subjects like: perception and interpretation of information by AI; AI cognition (memory, information processing, problem solving, biases); dealing with AI possibilities and limitations in "human" areas like creativity, adaptivity, autonomy, reflection, and (self-)awareness; dealing with goal functions (valuation of actions in relation to cost-benefit); AI ethics; and AI security. Such a curriculum should also include technical modules providing insight into the working of the AI operating system. Due to the enormous speed with which AI technology and applications develop, the content of such a curriculum is also very dynamic and continuously evolving on the basis of technological progress. This implies that the curriculum, training aids, and training environments should be flexible, experiential, and adaptive, which makes serious gaming an ideally suited format. Below, we provide a global framework for the development of new educational curricula on AI awareness. These subtopics go beyond learning to effectively "operate," "control," or interact with specific AI applications (i.e., conventional human-machine interaction):

‐Understanding of underlying system characteristics of the AI (the “AI brain”). Understanding the specific qualities and limitations of AI relative to human intelligence.

‐Understanding the complexity of the tasks and of the environment from the perspective of AI systems.

‐Understanding the problem of biases in human cognition, relative to biases in AI.

‐Understanding the problems associated with the control of AI, predictability of AI behavior (decisions), building trust, maintaining situation awareness (complacency), dynamic task allocation, (e.g. taking over each other’s tasks) and responsibility (accountability).

‐How to deal with possibilities and limitations of AI in the field of “creativity”, adaptability of AI, “environmental awareness”, and generalization of knowledge.

‐Learning to deal with perceptual and cognitive limitations and possible errors of AI which may be difficult to comprehend.

‐Trust in the performance of AI (possibly in spite of limited transparency or ability to “explain”) based on verification and validation.

‐Learning to deal with our natural inclination to anthropocentrism and anthropomorphism (“theory of mind”) when reasoning about human-robot interaction.

‐How to capitalize on the powers of AI in order to deal with the inherent constraints of human information processing (and vice versa).

‐Understanding the specific characteristics and qualities of the human-machine system and being able to decide when, for what, and how the integrated combination of human and AI faculties can best be combined to realize the overall system's potential.

In conclusion: due to the enormous speed with which AI technology and applications evolve, we need a more versatile conceptualization of intelligence and an acknowledgment of its many possible forms and combinations. A revised conception of intelligence also includes a good understanding of the basic characteristics, possibilities, and limitations of different (biological, artificial) cognitive system properties without anthropocentric and/or anthropomorphic misconceptions. This "Intelligence Awareness" is highly relevant in order to better understand and deal with the manifold possibilities and challenges of machine intelligence, for instance when deciding whether to use or deploy AI for particular tasks and contexts. The development of educational curricula with new, targeted, and easily configurable training forms and learning environments for human-AI systems is therefore recommended. Further work should focus on training tools, methods, and content that are flexible and adaptive enough to keep up with the rapid changes in the field of AI and with the wide variety of target groups and learning goals.

Author Contributions

The literature search, analysis, conceptual work, and writing of the manuscript were done by JEK. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors want to thank J. van Diggelen and L.J.H.M. Kester for their useful input on this manuscript. The present paper was a deliverable of the BIHUNT program (Behavioral Impact of NIC Teaming, V1719), funded by the Dutch Ministry of Defense, and of the Wise Policy Making program, funded by the Netherlands Organization for Applied Scientific Research (TNO).

1 Narrow AI can be defined as the production of systems displaying intelligence regarding specific, highly constrained tasks, like playing chess, facial recognition, autonomous navigation, or locomotion ( Goertzel et al., 2014 ).

2 Cognitive abilities involve deliberate, conceptual, or analytic thinking (e.g., calculation, statistics, analysis, reasoning, abstraction).

3 Unless of course AI will be deliberately constrained or degraded to human-level functioning.

4 Next to the issue of Human-Aware AI, i.e. tuning AI to the cognitive characteristics of humans.

Ackermann, N. (2018). Artificial Intelligence Framework: a visual introduction to machine learning and AI Retrieved from: https://towardsdatascience.com/artificial-intelligence-framework-a-visual-introduction-to-machine-learning-and-ai-d7e36b304f87 . (September 9, 2019).

Aliman, N-M. (2020). Hybrid cognitive-affective Strategies for AI safety . PhD thesis . Utrecht, Netherlands: Utrecht University . doi:10.33540/203


Bao, J. X., Kandel, E. R., and Hawkins, R. D. (1997). Involvement of pre- and postsynaptic mechanisms in posttetanic potentiation at Aplysia synapses. Science 275, 969–973. doi:10.1126/science.275.5302.969


Bar, M. (2007). The proactive brain: using analogies and associations to generate predictions. Trends Cogn. Sci. 11, 280–289. doi:10.1016/j.tics.2007.05.005

Baron, J., and Ritov, I. (2004). Omission bias, individual differences, and normality. Organizational Behav. Hum. Decis. Process. 94, 74–85. doi:10.1016/j.obhdp.2004.03.003


Belkom, R. v. (2019). Duikboten zwemmen niet: de zoektocht naar intelligente machines [Submarines don't swim: the search for intelligent machines]. Den Haag: Stichting Toekomstbeeld der Techniek (STT).


Bergstein, B. (2017). AI isn't very smart yet. But we need to get moving to make sure automation works for more people. MIT Technology Review. Retrieved from: https://www.technologyreview.com/s/609318/the-great-ai-paradox/

Bieger, J. B., Thorisson, K. R., and Garrett, D. (2014). “Raising AI: tutoring matters,” in 7th international conference, AGI 2014 quebec city, QC, Canada, august 1–4, 2014 proceedings . Editors B. Goertzel, L. Orseau, and J. Snaider (Berlin, Germany: Springer ). doi:10.1007/978-3-319-09274-4

Boden, M. (2017). Principles of robotics: regulating robots in the real world. Connect. Sci. 29 (2), 124–129.

Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford, United Kingdom: Oxford University Press.

Bradshaw, J. M., Dignum, V., Jonker, C. M., and Sierhuis, M. (2012). Introduction to special issue on human-agent-robot teamwork. IEEE Intell. Syst. 27, 8–13. doi:10.1109/MIS.2012.37

Brodal, A. (1981). Neurological anatomy in relation to clinical medicine . New York, NY, United States: Oxford University Press .

Brown, T. B., et al. (2020). Language models are few-shot learners. arXiv:2005.14165v4.

Cialdini, R. D. (1984). Influence: the psychology of persuasion. New York, NY, United States: Harper.

Coley, J. D., and Tanner, K. D. (2012). Common origins of diverse misconceptions: cognitive principles and the development of biology thinking. CBE Life Sci. Educ. 11 (3), 209–215. doi:10.1187/cbe.12-06-0074

Collazos, C. A., Gutierrez, F. L., Gallardo, J., Ortega, M., Fardoun, H. M., and Molina, A. I. (2019). Descriptive theory of awareness for groupware development. J. Ambient Intelligence Humanized Comput. 10, 4789–4818. doi:10.1007/s12652-018-1165-9

Damasio, A. R. (1994). Descartes’ error: emotion, reason and the human brain . New York, NY, United States: G. P. Putnam’s Sons .

Elands, P., Huizing, A., Kester, L., Oggero, S., and Peeters, M. (2019). Governing ethical and effective behavior of intelligent systems: a novel framework for meaningful human control in a military context. Militaire Spectator 188 (6), 302–313.

Feldman Barrett, L. (2017). How emotions are made: the secret life of the brain. Boston, MA, United States: Houghton Mifflin Harcourt.

Fink, J. (2012). “Anthropomorphism and human likeness in the design of robots and human-robot interaction,” in Social robotics. ICSR 2012 . Lecture notes in computer science . Editors S. S. Ge, O. Khatib, J. J. Cabibihan, R. Simmons, and M. A. Williams (Berlin, Germany: Springer ), 7621. doi:10.1007/978-3-642-34103-8_20

Fischetti, M. (2011). Computers vs. brains. Scientific American, 175th anniversary issue. Retrieved from: https://www.scientificamerican.com/article/computers-vs-brains/.

Furnham, A., and Boo, H. C. (2011). A literature review of the anchoring effect. The J. Socio-Economics 40, 35–42. doi:10.1016/j.socec.2010.10.008

Gerla, M., Lee, E-K., and Pau, G. (2014). Internet of vehicles: from intelligent grid to autonomous cars and vehicular clouds. WF-IoT 12, 241–246. doi:10.1177/1550147716665500

Gibson, J. J. (1979). The ecological approach to visual perception . Boston, MA, United States: Houghton Mifflin .

Gibson, J. J. (1966). The senses considered as perceptual systems . Boston, MA, United States: Houghton Mifflin.

Gigerenzer, G., and Gaissmaier, W. (2011). Heuristic decision making. Annu. Rev. Psychol. 62, 451–482. doi:10.1146/annurev-psych-120709-145346

Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's the singularity is near, and McDermott’s critique of Kurzweil. Artif. Intelligence 171 (18), 1161–1173. doi:10.1016/j.artint.2007.10.011

Goertzel, B., Orseau, L., and Snaider, J. (Editors) (2014). Preface. Artificial General Intelligence: 7th International Conference, AGI 2014, Quebec City, QC, Canada, August 1–4, 2014, Proceedings. Berlin, Germany: Springer.

van de Grind, W. A. (1997). Natuurlijke intelligentie: over denken, intelligentie en bewustzijn van mensen en andere dieren [Natural intelligence: on thinking, intelligence and consciousness in humans and other animals]. 2nd edn. Amsterdam, Netherlands: Nieuwezijds. Retrieved from https://www.nieuwezijds.nl/boek/natuurlijke-intelligentie/ (July 9, 2019).

Hardin, G. (1968). The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality. Science 162, 1243–1248. doi:10.1126/science.162.3859.1243

Haring, K. S., Watanabe, K., Velonaki, M., Tosell, C. C., and Finomore, V. (2018). Ffab—the form function attribution bias in human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 10 (4), 843–851. doi:10.1109/TCDS.2018.2851569

Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D. A., Galperin, A., Frankenhuis, W. E., et al. (2009). Adaptive rationality: an evolutionary perspective on cognitive bias. Soc. Cogn. 27, 733–762. doi:10.1521/soco.2009.27.5.733

Haselton, M. G., Nettle, D., and Andrews, P. W. (2005). “The evolution of cognitive bias,” in The handbook of evolutionary psychology . Editor D.M. Buss (Hoboken, NJ, United States: John Wiley & Sons ), 724–746.

Henshilwood, C., and Marean, C. (2003). The origin of modern human behavior. Curr. Anthropol. 44 (5), 627–651. doi:10.1086/377665

Hoffman, R. R., and Johnson, M. (2019). “The quest for alternatives to “levels of automation” and “task allocation,” in Human performance in automated and autonomous systems . Editors M. Mouloua, and P. A. Hancock (Boca Raton, FL, United States: CRC Press ), 43–68.

Hoffrage, U., Hertwig, R., and Gigerenzer, G. (2000). Hindsight bias: a by-product of knowledge updating? J. Exp. Psychol. Learn. Mem. Cogn. 26, 566–581. doi:10.1037/0278-7393.26.3.566

Horowitz, M. C. (2018). The promise and peril of military applications of artificial intelligence. Bulletin of the atomic scientists Retrieved from https://thebulletin.org/militaryapplications-artificial-intelligence/promise-and-peril-military-applications-artificial-intelligence (Accessed March 27, 2019).

Isaacson, J. S., and Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron 72, 231–243. doi:10.1016/j.neuron.2011.09.027

Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., van Riemsdijk, M. B., and Sierhuis, M. (2014). Coactive design: designing support for interdependence in joint activity. J. Human-Robot Interaction 3 (1), 43–69. doi:10.5898/JHRI.3.1.Johnson

Kahle, W. (1979). Band 3: Nervensysteme und Sinnesorgane [Vol. 3: Nervous systems and sensory organs], in Taschenatlas der Anatomie. Editors W. Kahle, H. Leonhardt, and W. Platzer. Stuttgart/New York: Thieme Verlag.

Kahneman, D., and Klein, G. (2009). Conditions for intuitive expertize: a failure to disagree. Am. Psychol. 64, 515–526. doi:10.1037/a0016755

Kahneman, D. (2011). Thinking, fast and slow . New York, NY, United States: Farrar, Straus and Giroux .

Katz, B., and Miledi, R. (1968). The role of calcium in neuromuscular facilitation. J. Physiol. 195, 481–492. doi:10.1113/jphysiol.1968.sp008469

Kiesler, S., and Hinds, P. (2004). Introduction to this special issue on human–robot interaction. Int J Hum-Comput. Int. 19 (1), 1–8. doi:10.1080/07370024.2004.9667337

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., and Feltovich, P. J. (2004). Ten challenges for making automation a ‘team player’ in joint human-agent activity. IEEE Intell. Syst. 19 (6), 91–95. doi:10.1109/MIS.2004.74

Korteling, J. E. (1994). Multiple-task performance and aging . Bariet, Ruinen, Netherlands: Dissertation. TNO-Human Factors Research Institute/State University Groningen https://www.researchgate.net/publication/310626711_Multiple-Task_Performance_and_Aging .

Korteling, J. E., and Toet, A. (2020). Cognitive biases. in Encyclopedia of behavioral neuroscience . 2nd Edn (Amsterdam-Edinburgh: Elsevier Science ) doi:10.1016/B978-0-12-809324-5.24105-9

Korteling, J. E., Brouwer, A. M., and Toet, A. (2018a). A neural network framework for cognitive bias. Front. Psychol. 9, 1561. doi:10.3389/fpsyg.2018.01561

Korteling, J. E., van de Boer-Visschedijk, G. C., Boswinkel, R. A., and Boonekamp, R. C. (2018b). Effecten van de inzet van Non-Human Intelligent Collaborators op Opleiding en Training [Effects of the deployment of non-human intelligent collaborators on education and training] [V1719]. Report TNO 2018 R11654. Soesterberg, Netherlands: TNO Defense, Safety and Security.

Korteling, J. E., Gerritsma, J., and Toet, A. (2021). Retention and transfer of cognitive bias mitigation interventions: a systematic literature study. Front. Psychol. 1–20. doi:10.13140/RG.2.2.27981.56800

Kosslyn, S. M., and Koenig, O. (1992). Wet Mind: the new cognitive neuroscience . New York, NY, United States: Free Press .

Krämer, N. C., von der Pütten, A., and Eimler, S. (2012). “Human-agent and human-robot interaction theory: similarities to and differences from human-human interaction,” in Human-computer interaction: the agency perspective . Studies in computational intelligence . Editors M. Zacarias, and J. V. de Oliveira (Berlin, Germany: Springer ), 396, 215–240. doi:10.1007/978-3-642-25691-2_9

Kurzweil, R. (2005). The singularity is near . New York, NY, United States: Viking press .

Kurzweil, R. (1990). The age of intelligent machines . Cambridge, MA, United States: MIT Press .

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2017). Building machines that learn and think like people. Behav. Brain Sci. 40, e253. doi:10.1017/S0140525X16001837

Lichtenstein, S., and Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. J. Exp. Psychol. 89, 46–55. doi:10.1037/h0031207

McBrearty, S., and Brooks, A. (2000). The revolution that wasn't: a new interpretation of the origin of modern human behavior. J. Hum. Evol. 39 (5), 453–563. doi:10.1006/jhev.2000.0435

McClelland, J. L. (1978). Perception and masking of wholes and parts. J. Exp. Psychol. Hum. Percept Perform. 4, 210–223. doi:10.1037//0096-1523.4.2.210

McDowd, J. M., and Craik, F. I. M. (1988). Effects of aging and task difficulty on divided attention performance. J. Exp. Psychol. Hum. Percept. Perform . 14, 267–280.

Minsky, M. (1986). The Society of Mind . London, United Kingdom: Simon and Schuster .

Moravec, H. (1988). Mind children . Cambridge, MA, United States: Harvard University Press .

Moravec, H. (1998). When will computer hardware match the human brain? J. Evol. Tech. 1. Retrieved from https://jetpress.org/volume1/moravec.htm.

Müller, V. C., and Bostrom, N. (2016). Future progress in artificial intelligence: a survey of expert opinion. Fundamental issues of artificial intelligence . Cham, Switzerland: Springer . doi:10.1007/978-3-319-26485-1

Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi:10.1037/1089-2680.2.2.175

Nosek, B. A., Hawkins, C. B., and Frazier, R. S. (2011). Implicit social cognition: from measures to mechanisms. Trends Cogn. Sci. 15 (4), 152–159. doi:10.1016/j.tics.2011.01.005

Patt, A., and Zeckhauser, R. (2000). Action bias and environmental decisions. J. Risk Uncertain. 21, 45–72. doi:10.1023/a:1026517309871

Peeters, M. M., van Diggelen, J., van den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., et al. (2020). Hybrid collective intelligence in a human–AI society. AI and Society 38, 217–238. doi:10.1007/s00146-020-01005-y

Petraglia, M. D., and Korisettar, R. (1998). Early human behavior in global context . Oxfordshire, United Kingdom: Routledge .

Pomerantz, J. (1981). “Perceptual organization in information processing,” in Perceptual organization . Editors M. Kubovy, and J. Pomerantz (Hillsdale, NJ, United States: Lawrence Erlbaum ).

Pronin, E., Lin, D. Y., and Ross, L. (2002). The bias blind spot: perceptions of bias in self versus others. Personal. Soc. Psychol. Bull. 28, 369–381. doi:10.1177/0146167202286008

Reicher, G. M. (1969). Perceptual recognition as a function of meaningfulness of stimulus material. J. Exp. Psychol. 81, 274–280.

Rich, E., and Knight, K. (1991). Artificial intelligence . 2nd edition. New York, NY, United States: McGraw-Hill .

Rich, E., Knight, K., and Nair, S. B. (2009). Artificial intelligence. 3rd Edn. New Delhi, India: Tata McGraw-Hill.

Risen, J. L. (2015). Believing what we do not believe: acquiescence to superstitious beliefs and other powerful intuitions. Psychol. Rev. 123, 182–207. doi:10.1037/rev0000017

Roese, N. J., and Vohs, K. D. (2012). Hindsight bias. Perspect. Psychol. Sci. 7, 411–426. doi:10.1177/1745691612454303

Rogers, R. D., and Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. J. Exp. Psychol. Gen. 124, 207–231. doi:10.1037/0096-3445.124.2.207

Rubinstein, J. S., Meyer, D. E., and Evans, J. E. (2001). Executive control of cognitive processes in task switching. J. Exp. Psychol. Hum. Percept Perform. 27, 763–797. doi:10.1037//0096-1523.27.4.763

Russell, S., and Norvig, P. (2014). Artificial intelligence: a modern approach . 3rd ed. Harlow, United Kingdom: Pearson Education .

Shafir, E., and LeBoeuf, R. A. (2002). Rationality. Annu. Rev. Psychol. 53, 491–517. doi:10.1146/annurev.psych.53.100901.135213

Shatz, C. J. (1992). The developing brain. Sci. Am. 267, 60–67. doi:10.1038/scientificamerican0992-60

Shneiderman, B. (2020a). Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans. Tech. Soc. 1, 73–82. doi:10.1109/TTS.2020.2992669

Shneiderman, B. (2020b). Human-centered artificial intelligence: reliable, safe & trustworthy. Int. J. Human–Computer Interaction 36 (6), 495–504. doi:10.1080/10447318.2020.1741118

Siegel, A., and Sapru, H. N. (2005). Essential neuroscience. Philadelphia, PA, United States: Lippincott Williams and Wilkins.

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of go without human knowledge. Nature 550 (7676), 354. doi:10.1038/nature24270

Simon, H. A. (1955). A behavioral model of rational choice. Q. J. Econ. 69, 99–118. doi:10.2307/1884852

Taylor, D. M., and Doria, J. R. (1981). Self-serving and group-serving bias in attribution. J. Soc. Psychol. 113, 201–211. doi:10.1080/00224545.1981.9924371

Tegmark, M. (2017). Life 3.0: being human in the age of artificial intelligence . New York, NY, United States: Borzoi Book published by A.A. Knopf .

Toet, A., Brouwer, A. M., van den Bosch, K., and Korteling, J. E. (2016). Effects of personal characteristics on susceptibility to decision bias: a literature study. Int. J. Humanities Soc. Sci. 8, 1–17.

Tooby, J., and Cosmides, L. (2005). “Conceptual foundations of evolutionary psychology,” in Handbook of evolutionary psychology . Editor D.M. Buss (Hoboken, NJ, United States: John Wiley & Sons ), 5–67.

Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science 185 (4157), 1124–1131. doi:10.1126/science.185.4157.1124

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi:10.1126/science.7455683

Tversky, A., and Kahneman, D. (1973). Availability: a heuristic for judging frequency and probability. Cogn. Psychol. 5, 207–232. doi:10.1016/0010-0285(73)90033-9

van den Bosch, K., and Bronkhorst, K. (2018). Human-AI cooperation to benefit military decision making. Soesterberg, Netherlands: TNO.

van den Bosch, K., and Bronkhorst, K. (2019). Six challenges for human-AI Co-learning. Adaptive instructional systems 11597, 572–589. doi:10.1007/978-3-030-22341-0_45

Weisstein, N., and Harris, C. S. (1974). Visual detection of line segments: an object-superiority effect. Science 186, 752–755. doi:10.1126/science.186.4165.752

Werkhoven, P., Neerincx, M., and Kester, L. (2018). Telling autonomous systems what to do. Proceedings of the 36th European Conference on Cognitive Ergonomics, ECCE 2018, Utrecht, Netherlands, 5–7 September 2018, 1–8. doi:10.1145/3232078.3232238

Wheeler, D. (1970). Processes in word recognition. Cogn. Psychol. 1, 59–85.

Williams, A., and Weisstein, N. (1978). Line segments are perceived better in a coherent context than alone: an object-line effect in visual perception. Mem. Cognit 6, 85–90. doi:10.3758/bf03197432

Wingfield, A., and Byrnes, D. (1981). The psychology of human memory . New York, NY, united States: Academic Press .

Wood, R. E., Mento, A. J., and Locke, E. A. (1987). Task complexity as a moderator of goal effects: a meta-analysis. J. Appl. Psychol. 72 (3), 416–425. doi:10.1037/0021-9010.72.3.416

Wyrobek, K. A., Berger, E. H., van der Loos, H. F. M., and Salisbury, J. K. (2008). Toward a personal robotics development platform: rationale and design of an intrinsically safe personal robot. Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, United States, 19–23 May 2008. doi:10.1109/ROBOT.2008.4543527

Keywords: human intelligence, artificial intelligence, artificial general intelligence, human-level artificial intelligence, cognitive complexity, narrow artificial intelligence, human-AI collaboration, cognitive bias

Citation: Korteling JE, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC and Eikelboom AR (2021) Human- versus Artificial Intelligence. Front. Artif. Intell. 4:622364. doi: 10.3389/frai.2021.622364

Received: 29 October 2020; Accepted: 01 February 2021; Published: 25 March 2021.


Copyright © 2021 Korteling, van de Boer-Visschedijk, Blankendaal, Boonekamp and Eikelboom. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: J. E. (Hans). Korteling, [email protected]

This article is part of the Research Topic

Skills-in-Demand: Bridging the Gap between Educational Attainment and Labor Market with Learning Analytics and Machine Learning Applications



Artificial Intelligence vs. Human Intelligence

From the realm of science fiction into the realm of everyday life, artificial intelligence has made significant strides. Because AI has become so pervasive in today's industries and people's daily lives, a new debate has emerged, pitting the two competing paradigms of AI and human intelligence. 

While the goal of artificial intelligence is to build and create intelligent systems that are capable of doing jobs that are analogous to those performed by humans, we can't help but question if AI is adequate on its own. This article covers a wide range of subjects, including the potential impact of AI on the future of work and the economy, how AI differs from human intelligence, and the ethical considerations that must be taken into account.

The term artificial intelligence may be applied to any computer system that exhibits characteristics associated with the human mind, including the ability to think critically, make decisions, and increase productivity. The foundation of AI is human insight, formalized in such a way that machines can carry out tasks ranging from the very simple to the most complicated.

Synthesized insights are the result of intellectual activity, including study, analysis, logic, and observation. Tasks such as robotics, control mechanisms, computer vision, scheduling, and data mining fall under the umbrella of artificial intelligence.

The origins of human intelligence and behavior can be traced to an individual's unique combination of genetics, upbringing, and exposure to various situations and environments. It also hinges on one's ability to shape that environment through the application of newly acquired knowledge.

The information human intelligence yields is varied: it can concern people with a similar skill set or background, social or diplomatic knowledge of the kind an observer is tasked with gathering, and, ultimately, an understanding of interpersonal relationships and competing interests.

The following comparison sets human intelligence and artificial intelligence side by side on several dimensions:

Evolution
Human intelligence: The cognitive abilities to think, reason, and evaluate are built into human beings by their very nature.
Artificial intelligence: Norbert Wiener, who theorized feedback mechanisms, is credited with a significant early contribution to the development of artificial intelligence (AI).

Essence
Human intelligence: The purpose of human intelligence is to combine a range of cognitive activities in order to adapt to new circumstances.
Artificial intelligence: The goal of AI is to create computers that can behave like humans and complete tasks that humans would normally do.

Functionality
Human intelligence: People make use of the memory, processing capabilities, and cognitive abilities that their brains provide.
Artificial intelligence: AI-powered devices operate by processing data and instructions.

Pace of operation
Human intelligence: When it comes to raw speed, humans are no match for artificial intelligence or robots.
Artificial intelligence: Computers can process far more information at a far higher pace than people can. Where a human mind might solve one mathematical problem in five minutes, an AI system can solve ten problems in a minute (see the short timing sketch after this comparison).

Learning ability
Human intelligence: Human intellect is acquired through learning from a wide variety of experiences and situations.
Artificial intelligence: Machines cannot think abstractly or draw conclusions from past experience the way humans do. They acquire knowledge only through exposure to data and repeated training, and they do not develop the kind of cognitive process that is unique to humans.

Decision making
Human intelligence: Human decisions can be influenced by subjective factors that are not based on numbers alone.
Artificial intelligence: Because it evaluates the entirety of the data it has gathered, AI tends to be highly objective in its decisions.

Accuracy
Human intelligence: Human insight almost always carries the possibility of "human error," meaning that some nuances may be overlooked at one point or another.
Artificial intelligence: Because AI's capabilities are built on a set of rules that can be updated, it can deliver accurate results consistently.

Adaptability
Human intelligence: The human mind can adjust its perspective in response to changes in its surroundings, which allows people to retain information and excel across a variety of activities.
Artificial intelligence: AI takes considerably more time to adapt to unexpected changes.

Flexibility
Human intelligence: Humans can multitask, juggling a variety of jobs at once, which requires the ability to exercise sound judgment.
Artificial intelligence: An AI system typically learns and performs tasks one at a time, so it can handle only a fraction of those tasks simultaneously.

Social interaction
Human intelligence: As social creatures, humans are far better at assimilating abstract information, at self-awareness, and at sensing the emotions of others.
Artificial intelligence: AI has not yet mastered the ability to pick up on social and emotional cues.

Operation
Human intelligence: Human intelligence can be described as inventive or creative.
Artificial intelligence: AI improves the overall performance of a system, but it cannot be creative or inventive, since machines cannot think the way people do.
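To give a feel for the "pace of operation" contrast above, the following minimal sketch times a computer summing a million integers. The exact figures will vary by machine; the point is the order of magnitude compared with doing the same task by hand.

```python
# Minimal timing sketch: summing a million integers usually takes only milliseconds.
import time

numbers = list(range(1_000_000))
start = time.perf_counter()
total = sum(numbers)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"sum={total}, computed in {elapsed_ms:.1f} ms")
```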

According to the findings of recent research, altering the electrical characteristics of certain cells in simulated neural circuits caused the networks to learn new information more quickly than simulations in which all cells were identical. The researchers also found that fewer of the modified cells were needed for the networks to achieve the same outcomes, and that the approach consumed fewer resources than models that used identical cells.

These results not only shed light on how human brains excel at learning but may also help us develop more advanced artificial intelligence systems, such as speech and facial recognition software for digital assistants and autonomous vehicle navigation systems.
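As a rough, purely illustrative sketch of the idea of tuning per-cell properties, the toy module below gives every unit in a layer its own trainable time constant, so an optimizer can adapt each "cell's" dynamics alongside the connection weights; freezing those parameters recovers a homogeneous baseline. This is a simplification for intuition only, not the model used in the research described above; PyTorch is assumed to be installed and all names are invented.

```python
# Toy layer with per-unit trainable time constants (heterogeneous "cells").
import torch
import torch.nn as nn

class HeterogeneousLeakyLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One trainable (log) time constant per unit -> heterogeneous dynamics.
        self.log_tau = nn.Parameter(torch.zeros(out_features))

    def forward(self, x_seq):
        # x_seq: (time, batch, in_features); simple leaky integration over time.
        tau = torch.exp(self.log_tau) + 1.0      # keep each time constant >= 1
        alpha = 1.0 / tau                        # per-unit update rate
        state = torch.zeros(x_seq.shape[1], self.linear.out_features)
        for x_t in x_seq:
            drive = torch.tanh(self.linear(x_t))
            state = (1.0 - alpha) * state + alpha * drive
        return state

# Example: layer = HeterogeneousLeakyLayer(8, 16); out = layer(torch.randn(20, 4, 8))
# Setting layer.log_tau.requires_grad_(False) freezes the time constants,
# approximating the "identical cells" baseline mentioned above.
```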


The capabilities of AI are constantly expanding. Developing AI systems takes a significant amount of time, and it cannot happen without human intervention. All forms of artificial intelligence, from self-driving vehicles and robotics to more complex technologies like computer vision and natural language processing, depend on human intellect.

1. Automation of Tasks

The most noticeable effect of AI has been the digitalization and automation of formerly manual processes across a wide range of industries. Tasks and occupations that involve some degree of repetition, or the use and interpretation of large amounts of data, are now handed to and administered by computers, and in certain cases human intervention is no longer required to complete them.

2. New Opportunities

Artificial intelligence is creating new opportunities for the workforce by automating formerly human-intensive tasks . The rapid development of technology has resulted in the emergence of new fields of study and work, such as digital engineering. Therefore, although traditional manual labor jobs may go extinct, new opportunities and careers will emerge.

3. Economic Growth Model

When it's put to good use, rather than just for the sake of progress, AI has the potential to increase productivity and collaboration inside a company by opening up vast new avenues for growth. As a result, it may spur an increase in demand for goods and services, and power an economic growth model that spreads prosperity and raises standards of living.

4. Role of Work

In the era of AI, it is all the more important to recognize that employment means more than maintaining a standard of living. Work answers an essential human need for involvement, co-creation, dedication, and a sense of being needed, and this should not be overlooked. Even mundane tasks at work can be meaningful and advantageous, and if a task is eliminated or automated, it should be replaced with something that provides a comparable opportunity for human expression and contribution.

5. Growth of Creativity and Innovation

Experts now have more time to focus on analyzing, delivering new and original solutions, and other operations that are firmly in the area of the human intellect, while robotics, AI, and industrial automation handle some of the mundane and physical duties formerly performed by humans.

While AI has the potential to automate specific tasks and jobs, it is unlikely to replace humans across the board, even though it may displace them in some areas. AI is best suited to handling repetitive, data-driven tasks and making data-driven decisions. Human skills such as creativity, critical thinking, emotional intelligence, and complex problem-solving remain more valuable and cannot easily be replicated by AI.

The future of AI is more likely to involve collaboration between humans and machines, where AI augments human capabilities and enables humans to focus on higher-level tasks that require human ingenuity and expertise. It is essential to view AI as a tool that can enhance productivity and facilitate new possibilities rather than as a complete substitute for human involvement.


Artificial intelligence is revolutionizing every sector and pushing humanity forward to a new level. However, it is not yet feasible to achieve a precise replica of human intellect. The human cognitive process remains a mystery to scientists and experimentalists. Because of this, the common sense assumption in the growing debate between AI and human intelligence has been that AI would supplement human efforts rather than immediately replace them. Check out the Post Graduate Program in AI and Machine Learning at Simplilearn if you are interested in pursuing a career in the field of artificial intelligence. 


Artificial Versus Human Intelligence Essay


Introduction

With the rise of artificial intelligence (AI), it became clear that future technologies will further advance the autonomous ability of computers to generate new data. Human intelligence lies at the basis of such developments and represents the collective knowledge gained from the analysis of experiences people live through. In turn, AI is an outcome of this progression, which allows humanity to put this data into a digital form that possesses some autonomous qualities. As a result, AI also has limitations that the human brain does not have, such as physical constraints that put a cap on its computational capacities (Korteling et al., 2021). At the same time, people are not bound by a defined amount of operating memory in their thoughts.

It is impossible to adequately compare artificial and 'real' intelligence, as they do not share the same functionality on a physical level. Korteling et al. (2021) state that AI possesses "fundamentally different cognitive qualities and abilities than biological systems" (p. 1). Scientists are able to push the limits of AI further through technological progress, yet human brains cannot be modified in a similar fashion. The sheer complexity of people's cognitive abilities governs processes that go beyond what computers can perform. However, AI can work with massive amounts of data that people cannot handle. The current state of AI allows many industries to apply this technology successfully in their operations. People can train AI systems to excel at the analysis of a particular type of information and direct their accumulated knowledge toward specific goals.

In conclusion, humans’ cognitive abilities and AI differ in development potential, range of application, and many other aspects, yet they can complement each other.

Korteling, J. E., Boer-Visschedijk, G. C., Blankendaal, R. A., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus artificial intelligence. Frontiers in Artificial Intelligence , 4 .




Human Intelligence vs. Artificial Intelligence: Survey

D. Shanthi

A neural network is an artificial representation of the human brain that tries to simulate its learning process. An artificial neural network (ANN) is often called a "neural network" or simply a neural net (NN). In this paper, I survey the findings from my research that I found most interesting: 1) a brief study of the human brain and nervous system, 2) what intelligence actually is, and 3) how artificial intelligence differs from human intellect.



Artificial Intelligence vs. Human Intelligence

June 6, 2024 


As we witness the rapid evolution of technology, it’s natural to wonder about the future. Will Artificial Intelligence (AI) eventually outsmart human intelligence? Are we hurtling towards a world where machines call all the shots?

Understanding the relationship between AI and human intellect is crucial in today’s fast-paced technological world. It’s no longer just a concern for experts; it affects us all. As AI capabilities merge with our own, they’re reshaping society in various aspects, such as work, personal life, and ethics. Navigating this intertwined landscape is key to shaping our future wisely.

Defining Artificial Intelligence and human intelligence

Artificial Intelligence is the simulation of human intelligence processes by machines, especially computer systems.  It involves the development of algorithms that enable machines to perform tasks that typically require human-like intelligence, such as problem-solving, learning, perception and decision-making. Some branches of AI include:

  • Machine learning (ML): Enables computers to learn from data and make predictions or decisions without explicit programming (a short code sketch after this list illustrates the idea).
  • Deep learning (DL): Utilizes neural networks with multiple layers to automatically learn hierarchical representations of data, especially in tasks like image and speech recognition.
  • Natural language processing (NLP): Focuses on enabling computers to understand and generate human language, facilitating tasks like language translation and sentiment analysis.
  • Computer vision: Enables computers to interpret and understand visual information, such as object recognition and image classification.
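
To make the machine-learning bullet above concrete, here is a minimal, self-contained sketch of supervised learning: a tiny perceptron that learns a decision rule from labeled examples rather than from hand-written rules. The data, learning rate, and threshold are invented for illustration and do not correspond to any system mentioned in this article.

```python
# Minimal supervised machine learning: a perceptron learns a decision rule
# from labeled examples instead of being explicitly programmed with one.
# Illustrative toy data: a point is labeled 1 if its two inputs sum to more than 1.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn two weights and a bias from (x1, x2) samples and 0/1 labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction        # 0 when correct, +/-1 when wrong
            w[0] += lr * error * x1            # nudge the weights toward the correct answer
            w[1] += lr * error * x2
            b += lr * error
    return w, b

if __name__ == "__main__":
    samples = [(0.1, 0.2), (0.9, 0.8), (0.3, 0.1), (0.7, 0.9), (0.2, 0.4), (0.8, 0.6)]
    labels = [0, 1, 0, 1, 0, 1]
    w, b = train_perceptron(samples, labels)
    test = (0.85, 0.75)
    score = w[0] * test[0] + w[1] * test[1] + b
    print("learned weights:", w, "bias:", b)
    print("prediction for", test, "->", 1 if score > 0 else 0)
```

Deep learning stacks many such learned units into layers, and NLP and computer vision apply the same learn-from-examples idea to text and images respectively.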

Human intelligence (HI) encompasses a broad range of cognitive abilities that enable individuals to perceive, understand, reason and solve problems. It includes:

  • Reasoning: The ability to think logically, make inferences and draw conclusions based on available information.
  • Perception: The process of interpreting sensory information from the environment, including sight, hearing, touch, taste and smell.
  • Creativity: The capacity to generate novel ideas, solutions and artistic expressions through imagination and original thinking.
  • Emotional intelligence: The ability to recognize, understand and manage one’s own emotions, as well as perceive and empathize with the feelings of others.
  • Social intelligence: The skill of navigating social interactions, understanding social cues and adapting behavior accordingly.

What are the strengths of Artificial Intelligence?

AI learns via algorithms, which are a set of instructions that guide machines to learn independently and make decisions based on training and massive datasets. Think of AI as a supercharged brain capable of quickly processing information and learning from its experiences.

Over the years, AI has made huge leaps, changing lots of industries and how we live day-to-day. Think of virtual assistants like Siri and Alexa, self-driving cars or those spot-on recommendations you get on streaming sites—those are all examples of AI at work.

One of AI’s key strengths is its ability to tackle complex tasks with speed and precision. For example, in fields like healthcare and finance, AI-powered systems can analyze medical images, detect fraudulent activities, and even predict market trends with remarkable accuracy. This saves time and money and opens up space for innovation. Additional strengths of artificial intelligence include:

  • Speed and scalability: AI algorithms process vast amounts of data at incredible speeds, surpassing human cognitive capacities. This makes tasks like data analysis and spotting patterns far easier and faster for machines.
  • Consistency and reliability: AI systems perform repetitive tasks with high accuracy and consistency without fatigue or bias. From quality control to financial analysis, AI maintains unwavering precision, instilling confidence in its reliability.
  • Automation : AI automates routine tasks across industries, streamlining workflows, reducing manual labor, and enhancing efficiency. From chatbots to autonomous vehicles, AI-driven automation drives cost savings and innovation, freeing human resources to focus on the bigger picture.

What sets human intelligence apart from AI?

The things that make us uniquely human—our capacity for creativity, empathy and emotional intelligence—set human intellect apart from AI. Unlike AI, which follows set rules and algorithms, humans possess the innate ability to think critically, adapt to new situations and express complex emotions.

Human intelligence isn’t just about crunching numbers or solving puzzles; it’s about the human experience. Connecting with others, understanding their perspectives and collaborating toward common goals are skills that remain distinctly human. Whether through art, music, literature or scientific discoveries, human intellect continues to shape the world in deep and meaningful ways.

Human intelligence stands out in comparison to AI when it comes to:

  • Creativity and innovation: Humans possess a unique ability to think outside of the box, generate novel ideas, and adapt to new situations with creativity. This innate creativity fuels innovation, driving breakthroughs in various fields. 
  • Emotional intelligence: Human intelligence is capable of empathy, social interaction, and emotional understanding. This emotional depth helps us build connections, work together, and navigate the complexities of human relationships.
  • Adaptability and context awareness: Humans excel in quickly adapting to changing environments, drawing on our experiences, and using our understanding of context to solve problems. This flexibility lets us thrive in all kinds of situations and overcome challenges with resilience. 

Complementary roles of AI and human intelligence

Rather than viewing AI and HI as competitors, it’s more productive to see them as complementary forces. While AI excels in tasks requiring speed, precision and data analysis, human intelligence brings creativity, intuition and ethical judgment to the table. By harnessing the strengths of AI and HI, we can unlock new opportunities for innovation and progress.

For instance, in medicine, AI can assist doctors in diagnosing diseases and developing personalized treatment plans based on a patient’s genetic makeup and medical history. However, human doctors remain irreplaceable when it comes to delivering empathetic care and understanding the emotional needs of patients and their families.

Here are some other ways that Artificial Intelligence and human intelligence complement each other:

  • Collaborative problem-solving: AI provides data-driven insights, but human intelligence shines in complex and nuanced scenarios, offering fresh perspectives and innovative solutions.
  • Human-AI interaction: Effective human-AI interaction hinges on seamless communication. When AI systems are user-friendly and easily navigable, it fosters synergy between humans and technology.
  • Augmented intelligence: Augmented intelligence enhances human cognitive abilities and decision-making processes by leveraging AI. Rather than replacing human intellect, AI empowers individuals to make informed decisions, solve complex problems, and drive innovation.

By embracing the collaboration between AI and human intelligence, we unlock new potentials for problem-solving, interaction, and intelligence augmentation, leading to a future where technology enhances our lives while preserving our unique human values.
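
As a rough illustration of the augmented-intelligence idea described above, the sketch below shows a human-in-the-loop triage flow: an AI scorer handles clear-cut items automatically and routes uncertain ones to a person. The scoring heuristic, thresholds, and ticket texts are placeholders invented for this example, not a real model or product.

```python
# Human-in-the-loop "augmented intelligence" sketch: the AI handles obvious
# cases and defers ambiguous ones to a human reviewer, who supplies the
# judgment, empathy, and context the model lacks.

from dataclasses import dataclass

@dataclass
class Decision:
    item: str
    score: float          # model confidence that the item needs attention, 0..1
    routed_to_human: bool
    outcome: str

def ai_score(item: str) -> float:
    """Placeholder for a trained model; here, a trivial keyword heuristic."""
    text = item.lower()
    return 0.9 if "urgent" in text else 0.5 if "refund" in text else 0.1

def triage(items, low=0.3, high=0.8):
    decisions = []
    for item in items:
        score = ai_score(item)
        if score >= high:
            decisions.append(Decision(item, score, False, "auto-escalated"))
        elif score <= low:
            decisions.append(Decision(item, score, False, "auto-closed"))
        else:
            # The model is unsure, so a person makes the final call.
            decisions.append(Decision(item, score, True, "sent to human reviewer"))
    return decisions

if __name__ == "__main__":
    tickets = ["URGENT: system down", "Question about refund policy", "Thanks, all good"]
    for d in triage(tickets):
        print(f"{d.item!r:35} score={d.score:.1f} -> {d.outcome}")
```

The design choice here is the confidence band: widening it sends more work to people (more judgment, less speed), while narrowing it automates more (more speed, less oversight).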

Challenges and limitations of AI vs. HI

As we further explore Artificial Intelligence vs. human intelligence, we must recognize that both encounter challenges alongside their strengths. While AI holds promise for improving efficiency and enhancing our quality of life, it also introduces ethical concerns such as bias and privacy issues. Responsible AI governance is essential to address these issues effectively.

Additionally, while AI demonstrates remarkable capabilities in tasks requiring speed and precision, it falls short of human intelligence in areas that require complex decision-making, emotional understanding and creativity.

In contrast, human intelligence faces its own set of challenges. Human biases and errors in decision-making can lead to flawed judgments, while the subjective nature of human cognition adds layers of complexity to understanding and addressing societal issues.

To navigate these challenges, transparency, accountability and inclusivity must be prioritized in both AI development and human decision-making processes. By integrating diverse perspectives and ethical frameworks into designing and implementing AI systems, we can ensure that AI and HI work together to serve the common good and uphold fundamental human values.

Shape human-AI interaction with Maryville University’s AI programs

As we navigate the complex landscape of AI and HI, one thing is clear: the future belongs to those who embrace the power of collaboration and innovation. By harnessing the synergies between artificial and human intelligence, we can pave the way for a brighter, more inclusive future where technology enhances our lives without overshadowing our humanity.

Explore Maryville University’s online Master of Science in Artificial Intelligence and AI certificate programs to become a leader in shaping the future of human-AI interaction in this dynamic field. These programs offer a comprehensive curriculum to equip students with the knowledge, skills, and ethical frameworks needed to navigate the complexities of AI and HI.

Recommended reading:

  • The future of Artificial Intelligence in work and everyday life
  • AI in business: ethical considerations
  • How AI advances will reshape the future workplace


The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla , Alexander Sukharevsky , Lareina Yee , and Michael Chui , with Bryce Hall , representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI (the exception is Central and South America, where 58 percent of respondents report AI adoption). Looking by industry, the biggest increase in adoption is in professional services, which here includes organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training.

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research determined that gen AI adoption could generate the most value (see “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023)—as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI also is weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year —as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks and are not increasing efforts to mitigate them.

In fact, inaccuracy—which can affect use cases across the gen AI value chain, ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (see “Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (see “Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “shift left.” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
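
To illustrate the weighting step mentioned above: in a weighted survey, each country’s responses are scaled by a weight (here, its share of global GDP) rather than by its share of respondents before averaging. The countries, shares, and response rates below are invented purely for illustration and are not the survey’s actual data.

```python
# Illustrative sketch of weighting survey responses by each country's share of
# global GDP. All numbers are made up for the example; the real survey's
# weights and data are not reproduced here.

# Fraction of respondents and of global GDP per country (each column sums to 1.0).
respondent_share = {"A": 0.50, "B": 0.30, "C": 0.20}
gdp_share        = {"A": 0.25, "B": 0.45, "C": 0.30}

# Share of respondents in each country answering "yes" to some question.
yes_rate = {"A": 0.40, "B": 0.70, "C": 0.60}

# Unweighted: every respondent counts equally.
unweighted = sum(respondent_share[c] * yes_rate[c] for c in yes_rate)

# GDP-weighted: each country's answers count in proportion to its GDP share.
weighted = sum(gdp_share[c] * yes_rate[c] for c in yes_rate)

print(f"unweighted yes rate: {unweighted:.1%}")    # 0.50*0.40 + 0.30*0.70 + 0.20*0.60 = 53.0%
print(f"GDP-weighted yes rate: {weighted:.1%}")    # 0.25*0.40 + 0.45*0.70 + 0.30*0.60 = 59.5%
```

The effect of the weighting is visible in the two printed figures: country B is under-represented among respondents relative to its GDP share, so its higher "yes" rate pulls the weighted estimate up.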

Alex Singla and Alexander Sukharevsky  are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee  is a senior partner in the Bay Area office, where Michael Chui , a McKinsey Global Institute partner, is a partner; and Bryce Hall  is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.
