AI Should Augment Human Intelligence, Not Replace It

  • David De Cremer
  • Garry Kasparov

Artificial intelligence isn’t coming for your job, but it will be your new coworker. Here’s how to get along.

Will smart machines really replace human workers? Probably not. People and AI bring different abilities and strengths to the table. The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence? Chess grandmaster Garry Kasparov offers some unique insight here. After losing to IBM’s Deep Blue, he began to experiment with how a computer helper changed players’ competitive advantage in high-level chess games. What he discovered was that having the best players and the best program was less a predictor of success than having a really good process. Put simply, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” As leaders look at how to incorporate AI into their organizations, they’ll have to manage expectations as AI is introduced, invest in bringing teams together and perfecting processes, and refine their own leadership abilities.

In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence (AI) at a larger scale will add as much as $15.7 trillion to the global economy by 2030. As AI changes how companies work, many believe that who does this work will change, too — and that organizations will begin to replace human employees with intelligent machines. This is already happening: intelligent systems are displacing humans in manufacturing, service delivery, recruitment, and the financial industry, pushing human workers toward lower-paid jobs or out of work altogether. This trend has led some to conclude that by 2040 our workforce may be totally unrecognizable.

  • David De Cremer is a professor of management and technology at Northeastern University and the Dunton Family Dean of its D’Amore-McKim School of Business. His website is daviddecremer.com.
  • Garry Kasparov is the chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative. He writes and speaks frequently on politics, decision-making, and human-machine collaboration. Kasparov became the youngest world chess champion in history at 22 in 1985 and retained the top rating in the world for 20 years. His famous matches against the IBM super-computer Deep Blue in 1996 and 1997 were key to bringing artificial intelligence, and chess, into the mainstream. His latest book on artificial intelligence and the future of human-plus-machine is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (2017).

Human- Versus Artificial Intelligence (Conceptual Analysis Article)

  • TNO Human Factors, Soesterberg, Netherlands

AI is one of the most debated subjects of today, and there seems to be little common understanding concerning the differences and similarities between human intelligence and artificial intelligence. Discussions on many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, the pursuit of human-like intelligence as the gold standard for artificial intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a prominent issue is how we can use (and “collaborate” with) these systems as effectively as possible. For what tasks and under what conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How can AI systems be deployed effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. So, in order to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.

Introduction: Artificial and Human Intelligence, Worlds of Difference

Artificial General Intelligence at the Human Level

Recent advances in information technology and in AI may allow for more coordination and integration between humans and technology. Therefore, considerable attention has been devoted to the development of Human-Aware AI, which aims at AI that adapts as a “team member” to the cognitive possibilities and limitations of the human team members. Metaphors like “mate,” “partner,” “alter ego,” “Intelligent Collaborator,” “buddy,” and “mutual understanding” also emphasize a high degree of collaboration, similarity, and equality in “hybrid teams.” When human-aware AI partners operate like “human collaborators,” they must be able to sense, understand, and react to a wide range of complex human behavioral qualities, like attention, motivation, emotion, creativity, planning, or argumentation (e.g. Krämer et al., 2012 ; van den Bosch and Bronkhorst, 2018 ; van den Bosch et al., 2019 ). Therefore, these “AI partners” or “teammates” have to be endowed with human-like (or humanoid) cognitive abilities enabling mutual understanding and collaboration (i.e. “human awareness”).

However, no matter how intelligent and autonomous AI agents become in certain respects, at least for the foreseeable future they will probably remain unconscious machines or special-purpose devices that support humans in specific, complex tasks. As digital machines they are equipped with a completely different operating system (digital vs biological), and with correspondingly different cognitive qualities and abilities, than biological creatures like humans and other animals ( Moravec, 1988 ; Klein et al., 2004 ; Korteling et al., 2018a ; Shneiderman, 2020a ). In general, digital reasoning and problem-solving agents compare only very superficially to their biological counterparts (e.g. Boden, 2017 ; Shneiderman, 2020b ). Keeping that in mind, it becomes more and more important that human professionals working with advanced AI systems (e.g. in military or policy-making teams) develop a proper mental model of the different cognitive capacities of AI systems in relation to human cognition. This issue will become increasingly relevant as AI systems become more advanced and are deployed with higher degrees of autonomy. Therefore, the present paper aims to provide more clarity and insight into the fundamental characteristics, differences, and idiosyncrasies of human/biological and artificial/digital intelligences. In the final section, a global framework for constructing educational content on this “Intelligence Awareness” is introduced. This can be used for the development of education and training programs for humans who have to use or “collaborate with” advanced AI systems in the near and far future.

As AI systems are applied with increasing autonomy, more and more researchers consider it necessary to vigorously address the really complex issues of “human-level intelligence” and, more broadly, artificial general intelligence, or AGI (e.g. Goertzel et al., 2014 ). Many different definitions of A(G)I have already been proposed (see Russell and Norvig, 2014 for an overview). Many of them boil down to: technology containing or entailing (human-like) intelligence (e.g. Kurzweil, 1990 ). This is problematic for several reasons. First, most definitions use the term “intelligence” as an essential element of the definition itself, which makes the definition tautological. Second, the idea that A(G)I should be human-like seems unwarranted. At least in natural environments there are many other forms and manifestations of highly complex and intelligent behavior that are very different from specific human cognitive abilities (see Grind, 1997 for an overview). Finally, as is also frequently seen in the field of biology, these A(G)I definitions use human intelligence as a central basis or analogy for reasoning about the—less familiar—phenomenon of A(G)I ( Coley and Tanner, 2012 ). Because of the many differences between the underlying substrate and architecture of biological and artificial intelligence, this anthropocentric way of reasoning is probably unwarranted. For these reasons we propose a (non-anthropocentric) definition of “intelligence” as: “ the capacity to realize complex goals ” ( Tegmark, 2017 ). These goals may pertain to narrow, restricted tasks (narrow AI) or to broad task domains (AGI). Building on this definition, and on a definition of AGI proposed by Bieger et al. (2014) and one by Grind (1997) , we define AGI here as: “ non-biological capacities to autonomously and efficiently achieve complex goals in a wide range of environments.” AGI systems should be able to identify and extract the most important features for their operation and learning process automatically and efficiently over a broad range of tasks and contexts. Relevant AGI research differs from ordinary AI research by addressing the versatility and wholeness of intelligence, and by carrying out the engineering practice according to a system comparable to the human mind in a certain sense ( Bieger et al., 2014 ).

It will be fascinating to create copies of ourselves that can learn iteratively by interaction with partners and thus become able to collaborate on the basis of common goals and mutual understanding and adaptation (e.g. Bradshaw et al., 2012 ; Johnson et al., 2014 ). This would be very useful, for example where a high degree of social intelligence in AI contributes to more adequate interactions with humans, such as in health care or for entertainment purposes ( Wyrobek et al., 2008 ). True collaboration on the basis of common goals and mutual understanding necessarily implies some form of humanoid general intelligence. For the time being, this remains a goal on a far-off horizon. In the present paper we argue why, for most applications, it may also not be very practical or necessary (and probably a bit misleading) to vigorously aim at, or anticipate, systems possessing “human-like” AGI or “human-like” abilities or qualities. The fact that humans possess general intelligence does not imply that new inorganic forms of general intelligence should comply with the criteria of human intelligence. In this connection, the present paper addresses the way we think about (natural and artificial) intelligence in relation to the most probable potentials (and real upcoming issues) of AI in the short- and mid-term future. This will provide food for thought in anticipation of a future that is difficult to predict for a field as dynamic as AI.

What Is “Real Intelligence”?

Implicit in our aspiration of constructing AGI systems possessing humanoid intelligence is the premise that human (general) intelligence is the “real” form of intelligence. This is already implicit in the term “Artificial Intelligence,” as if such intelligence were not entirely real, i.e., real like non-artificial (biological) intelligence. Indeed, as humans we know ourselves as the entities with the highest intelligence ever observed in the universe. And as an extension of this, we like to see ourselves as rational beings who are able to solve a wide range of complex problems under all kinds of circumstances using our experience and intuition, supplemented by the rules of logic, decision analysis, and statistics. It is therefore not surprising that we have some difficulty accepting the idea that we might be a bit less smart than we keep telling ourselves, i.e., “the next insult for humanity” ( van Belkom, 2019 ). This goes so far that the rapid progress in the field of artificial intelligence is accompanied by a recurring redefinition of what should be considered “real (general) intelligence.” The conceptualization of intelligence, that is, the ability to autonomously and efficiently achieve complex goals, is then continuously adjusted and further restricted to “those things that only humans can do.” In line with this, AI is then defined as “the study of how to make computers do things at which, at the moment, people are better” ( Rich and Knight, 1991 ; Rich et al., 2009 ). This includes thinking of creative solutions, flexibly using contextual and background information, the use of intuition and feeling, the ability to really “think and understand,” or the inclusion of emotion in an (ethical) consideration. These are then cited as the specific elements of real intelligence (e.g. Bergstein, 2017 ). For instance, Facebook’s director of AI and a spokesman in the field, Yann LeCun, mentioned at a conference at MIT on the future of work that machines are still far from having “the essence of intelligence.” That includes the ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying this is that machines don’t have common sense ( Bergstein, 2017 ), like submarines that cannot swim ( van Belkom, 2019 ). When exclusively human capacities become our pivotal navigation points on the horizon, we may miss some significant problems that need our attention first.

To make this point clear, we will first provide some insight into the basic nature of both human and artificial intelligence. This is necessary to substantiate an adequate awareness of intelligence ( Intelligence Awareness ) and adequate research and education anticipating the development and application of A(G)I. For the time being, this is based on three essential notions that can (and should) be further elaborated in the near future:

• With regard to cognitive tasks, we are probably less smart than we think. So why should we vigorously focus on human-like AGI?

• Many different forms of intelligence are possible and general intelligence is therefore not necessarily the same as humanoid general intelligence (or “AGI on human level”).

• AGI is often not necessary; many complex problems can also be tackled effectively using multiple narrow AIs. 1

We Are Probably Not so Smart as We Think

How intelligent are we actually? The answer to that question is determined to a large extent by the perspective from which this issue is viewed, and thus by the measures and criteria for intelligence that are chosen. For example, we could compare the nature and capacities of human intelligence with those of other animal species. In that case we appear highly intelligent. Thanks to our enormous learning capacity, we have by far the most extensive arsenal of cognitive abilities 2 to autonomously solve complex problems and achieve complex objectives. This way we can solve a huge variety of arithmetic, conceptual, spatial, economic, socio-organizational, and political problems. The primates—which differ only slightly from us in genetic terms—are far behind us in that respect. We can therefore legitimately qualify humans, compared to the other animal species that we know, as highly intelligent.

Limited Cognitive Capacity

However, we can also look beyond this “relative interspecies perspective” and try to qualify our intelligence in more absolute terms, i.e., using a scale ranging from zero to what is physically possible. For example, we could view the computational capacity of a human brain as a physical system ( Bostrom, 2014 ; Tegmark, 2017 ). The prevailing notion in this respect among AI scientists is that intelligence is ultimately a matter of information and computation, and (thus) not of flesh and blood and carbon atoms. In principle, no physical law prevents physical systems (consisting of quarks and atoms, like our brain) from being built with much greater computing power and intelligence than the human brain. This would imply that there is no insurmountable physical reason why machines cannot one day become much more intelligent than ourselves in all possible respects ( Tegmark, 2017 ). Our intelligence is therefore relatively high compared to that of other animals, but in absolute terms it may be very limited in its physical computing capacity, if only because of the limited size of our brain and its maximal possible number of neurons and glial cells (e.g. Kahle, 1979 ).

To further define and assess our own (biological) intelligence, we can also discuss the evolution and nature of our biological thinking abilities. As a biological neural network of flesh and blood, necessary for survival, our brain has undergone an evolutionary optimization process of more than a billion years. In this extended period, it developed into a highly effective and efficient system for regulating essential biological functions and performing perceptual-motor and pattern-recognition tasks, such as gathering food, fighting or fleeing, and mating. During almost our entire evolution, the neural networks of our brain have been further optimized for these basic biological and perceptual-motor processes that also lie at the basis of our daily practical skills, like cooking, gardening, or household jobs. Possibly because of the resulting proficiency at these kinds of tasks, we may forget that these processes are characterized by extremely high computational complexity (e.g. Moravec, 1988 ). For example, when we tie our shoelaces, many millions of signals flow in and out through a large number of different sensor systems, from tendon bodies and muscle spindles in our extremities to our retina, otolithic organs, and semicircular canals in the head (e.g. Brodal, 1981 ). This enormous amount of information from many different perceptual-motor systems is processed continuously, in parallel, effortlessly, and even without conscious attention in the neural networks of our brain ( Minsky, 1986 ; Moravec, 1988 ; Grind, 1997 ). In order to achieve this, the brain has a number of universal (inherent) working mechanisms, such as association and associative learning ( Shatz, 1992 ; Bar, 2007 ), potentiation and facilitation ( Katz and Miledi, 1968 ; Bao et al., 1997 ), and saturation and lateral inhibition ( Isaacson and Scanziani, 2011 ; Korteling et al., 2018a ).

These kinds of basic biological and perceptual-motor capacities have been developed and laid down over many millions of years. Much later in our evolution—actually only very recently—our cognitive abilities and rational functions started to develop. These cognitive abilities, or capacities, are probably less than 100 thousand years old, which may be qualified as “embryonal” on the time scale of evolution (e.g. Petraglia and Korisettar, 1998 ; McBrearty and Brooks, 2000 ; Henshilwood and Marean, 2003 ). In addition, this very thin layer of human achievement has necessarily been built on this “ancient” neural intelligence for essential survival functions. So, our “higher” cognitive capacities have developed from, and with, these (neuro)biological regulation mechanisms ( Damasio, 1994 ; Korteling and Toet, 2020 ). As a result, it should not be a surprise that the capacities of our brain for performing these recent cognitive functions are still rather limited. These limitations are manifested in many different ways, for instance:

‐The amount of cognitive information that we can consciously process (our working-memory span, or attention) is very limited ( Simon, 1955 ). The capacity of our working memory is approximately 10–50 bits per second ( Tegmark, 2017 ); the sketch after this list puts this figure next to the throughput of a simple digital processor.

‐Most cognitive tasks, like reading text or calculation, require our full attention and we usually need a lot of time to execute them. Mobile calculators can perform calculations that are millions of times more complex than we can ( Tegmark, 2017 ).

‐Although we can process lots of information in parallel, we cannot simultaneously execute cognitive tasks that require deliberation and attention, i.e., “multi-tasking” ( Korteling, 1994 ; Rogers and Monsell, 1995 ; Rubinstein, Meyer, and Evans, 2001 ).

‐Acquired cognitive knowledge and skills of people (memory) tend to decay over time, much more than perceptual-motor skills. Because of this limited “retention” of information we easily forget substantial portions of what we have learned ( Wingfield and Byrnes, 1981 ).
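To make the first of these contrasts concrete, the back-of-the-envelope comparison below sets the roughly 10–50 bits per second cited above for conscious working memory (Tegmark, 2017) against the throughput of a modest digital processor. The processor figures (1 GHz, 64-bit words) are illustrative assumptions, not measurements; the point is only the order of magnitude of the gap.

```python
# Back-of-the-envelope comparison of conscious human information throughput
# with that of a modest digital processor. The human figure follows the
# estimate cited above (Tegmark, 2017); the processor figures are illustrative.

human_bits_per_second = 50          # upper estimate for conscious working memory
cpu_ops_per_second = 1e9            # a modest 1 GHz processor, one op per cycle
bits_per_op = 64                    # 64-bit words

cpu_bits_per_second = cpu_ops_per_second * bits_per_op
ratio = cpu_bits_per_second / human_bits_per_second

print(f"Human (conscious): {human_bits_per_second} bits/s")
print(f"CPU (illustrative): {cpu_bits_per_second:.0e} bits/s")
print(f"Ratio: roughly {ratio:.0e}x")   # on the order of a billion
```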

Ingrained Cognitive Biases

Our limited processing capacity for cognitive tasks is not the only factor determining our cognitive intelligence. Apart from this overall limited processing capacity, human cognitive information processing also shows systematic distortions. These are manifested in many cognitive biases ( Tversky and Kahneman, 1973 , Tversky and Kahneman, 1974 ). Cognitive biases are systematic, universally occurring tendencies, inclinations, or dispositions that skew or distort information processes in ways that make their outcome inaccurate, suboptimal, or simply wrong (e.g. Lichtenstein and Slovic, 1971 ; Tversky and Kahneman, 1981 ). Many biases occur in virtually the same way in many different decision situations ( Shafir and LeBoeuf, 2002 ; Kahneman, 2011 ; Toet et al., 2016 ). The literature provides descriptions and demonstrations of over 200 biases. These tendencies are largely implicit and unconscious, and feel quite natural and self-evident even when we are aware of them ( Pronin et al., 2002 ; Risen, 2015 ; Korteling et al., 2018b ). That is why they are often termed “intuitive” ( Kahneman and Klein, 2009 ) or “irrational” ( Shafir and LeBoeuf, 2002 ). Biased reasoning can result in quite acceptable outcomes in natural or everyday situations, especially when the time cost of reasoning is taken into account ( Simon, 1955 ; Gigerenzer and Gaissmaier, 2011 ). However, people often deviate from rationality and/or the tenets of logic, calculation, and probability in inadvisable ways ( Tversky and Kahneman, 1974 ; Shafir and LeBoeuf, 2002 ), leading to suboptimal decisions in terms of invested time and effort (costs) given the available information and expected benefits.

Biases are largely caused by inherent (or structural) characteristics and mechanisms of the brain as a neural network ( Korteling et al., 2018a ; Korteling and Toet, 2020 ). Basically, these mechanisms—such as association, facilitation, adaptation, or lateral inhibition—result in a modification of the original or available data and its processing (e.g. weighting its importance). For instance, lateral inhibition is a universal neural process resulting in the magnification of differences in neural activity (contrast enhancement), which is very useful for perceptual-motor functions, maintaining physical integrity, and allostasis (i.e. biological survival functions). For these functions our nervous system has been optimized for millions of years. However, “higher” cognitive functions, like conceptual thinking, probability reasoning, or calculation, have developed only very recently in evolution. These functions are probably less than 100 thousand years old and may therefore be qualified as “embryonal” on the time scale of evolution (e.g. McBrearty and Brooks, 2000 ; Henshilwood and Marean, 2003 ; Petraglia and Korisettar, 2003 ). In addition, evolution could not develop these new cognitive functions from scratch, but instead had to build this embryonal, thin layer of human achievement on its “ancient” neural heritage for the essential biological survival functions ( Moravec, 1988 ). Since cognitive functions typically require exact calculation and proper weighting of data, data transformations—like lateral inhibition—may easily lead to systematic distortions (i.e. biases) in cognitive information processing. Examples of the large number of biases caused by the inherent properties of biological neural networks are: the Anchoring bias (biasing decisions toward previously acquired information, Furnham and Boo, 2011 ; Tversky and Kahneman, 1973 , Tversky and Kahneman, 1974 ), the Hindsight bias (the tendency to erroneously perceive events as inevitable or more likely once they have occurred, Hoffrage et al., 2000 ; Roese and Vohs, 2012 ), the Availability bias (judging the frequency, importance, or likelihood of an event by the ease with which relevant instances come to mind, Tversky and Kahneman, 1973 ; Tversky and Kahneman, 1974 ), and the Confirmation bias (the tendency to select, interpret, and remember information in a way that confirms one’s preconceptions, views, and expectations, Nickerson, 1998 ).

In addition to these inherent (structural) limitations of (biological) neural networks, biases may also originate from functional evolutionary principles promoting the survival of our ancestors who, as hunter-gatherers, lived in small, close-knit groups ( Haselton et al., 2005 ; Tooby and Cosmides, 2005 ). Cognitive biases can be caused by a mismatch between evolutionarily rationalized “heuristics” (“evolutionary rationality”: Haselton et al., 2009 ) and the current context or environment ( Tooby and Cosmides, 2005 ). In this view, the same heuristics that optimized the chances of survival of our ancestors in their (natural) environment can lead to maladaptive (biased) behavior when they are used in our current (artificial) settings.
Biases that have been considered as examples of this kind of mismatch are the Action bias (preferring action even when there is no rational justification to do this, Baron and Ritov, 2004 ; Patt and Zeckhauser, 2000 ), Social proof (the tendency to mirror or copy the actions and opinions of others, Cialdini, 1984 ), the Tragedy of the commons (prioritizing personal interests over the common good of the community, Hardin, 1968 ), and the Ingroup bias (favoring one’s own group above that of others, Taylor and Doria, 1981 ).
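To illustrate how a neural mechanism such as lateral inhibition modifies the data it processes, the sketch below applies a toy center-surround filter to a one-dimensional luminance step. The kernel weights are illustrative assumptions, not physiological values; the point is only that differences (edges) are exaggerated relative to the raw input, which serves perception well but distorts exact values — the kind of systematic transformation discussed above.

```python
import numpy as np

# A 1-D "luminance" profile with a step edge.
signal = np.array([10.0] * 8 + [20.0] * 8)

# Toy lateral-inhibition kernel: each unit is excited by its own input and
# inhibited by its two neighbours (weights are illustrative, not physiological).
kernel = np.array([-0.25, 1.0, -0.25])

response = np.convolve(signal, kernel, mode="same")

print("input:   ", signal)
print("response:", np.round(response, 1))
# The response over- and undershoots around the step (contrast enhancement):
# useful for detecting edges, but a systematic distortion of the raw values.
```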

This hard-wired (neurally inherent and/or evolutionarily ingrained) character of biased thinking makes it unlikely that simple and straightforward methods, like training interventions or awareness courses, will be very effective in ameliorating biases. This difficulty of bias mitigation indeed seems to be supported by the literature ( Korteling et al., 2021 ).

General Intelligence Is Not the Same as Human-like Intelligence

Fundamental Differences Between Biological and Artificial Intelligence

We often think and deliberate about intelligence with an anthropocentric conception of our own intelligence in mind as an obvious and unambiguous reference. We tend to use this conception as a basis for reasoning about other, less familiar phenomena of intelligence, such as other forms of biological and artificial intelligence ( Coley and Tanner, 2012 ). This may lead to fascinating questions and ideas. An example is the discussion about how and when the point of “intelligence at human level” will be achieved. For instance, Ackermann (2018) writes: “Before reaching superintelligence, general AI means that a machine will have the same cognitive capabilities as a human being.” So, researchers deliberate extensively about the point in time when we will reach general AI (e.g. Goertzel, 2007 ; Müller and Bostrom, 2016 ). We suppose that these kinds of questions are not quite on target. There are, in principle, many different possible types of (general) intelligence conceivable, of which human-like intelligence is just one. This means, for example, that the development of AI is determined by the constraints of physics and technology, and not by those of biological evolution. So, just as the intelligence of a hypothetical extraterrestrial visitor of our planet is likely to have a different (in-)organic structure, with different characteristics, strengths, and weaknesses, than that of its human residents, the same will apply to artificial forms of (general) intelligence. Below we briefly summarize a few fundamental differences between human and artificial intelligence ( Bostrom, 2014 ):

‐Basic structure: Biological (carbon) intelligence is based on neural “wetware,” which is fundamentally different from artificial (silicon-based) intelligence. As opposed to biological wetware, in silicon, or digital, systems “hardware” and “software” are independent of each other ( Kosslyn and Koenig, 1992 ). When a biological system has learned a new skill, this skill remains bound to the system itself. In contrast, if an AI system has learned a certain skill, the constituting algorithms can be copied directly to all other similar digital systems (a minimal sketch of this appears after this list).

‐Speed: Signals in AI systems propagate at almost the speed of light. In humans, the conduction velocity of nerves is at most about 120 m/s, which is extremely slow on the time scale of computers ( Siegel and Sapru, 2005 ).

‐Connectivity and communication: People cannot communicate with each other directly; they communicate via language and gestures, with limited bandwidth. This is slower and more difficult than the communication of AI systems, which can be connected to each other directly. Thanks to this direct connection, they can also collaborate on the basis of integrated algorithms.

‐Updatability and scalability: AI systems have almost no constraints with regard to keeping them up to date or to upscaling and/or reconfiguring them, so that they have the right algorithms and the data-processing and storage capacities necessary for the tasks they have to carry out. This capacity for rapid, structural expansion and immediate improvement hardly applies to people.

‐Energy consumption: In contrast, biology does a lot with a little: organic brains are millions of times more efficient in energy consumption than computers. The human brain consumes less energy than a lightbulb, whereas a supercomputer with comparable computational performance uses enough electricity to power an entire village ( Fischetti, 2011 ).
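The contrast drawn in the first item of this list — a learned digital skill can be copied directly to other similar systems, whereas biological learning stays bound to the organism — can be illustrated with a small sketch. It uses PyTorch only as a convenient example; the model architecture and file name are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Two identically structured networks: "system A" has learned something,
# "system B" has not (its weights are randomly initialized).
system_a = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
system_b = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# (Training of system_a would happen here; omitted for brevity.)

# Copying the "skill": serialize A's parameters and load them into B.
torch.save(system_a.state_dict(), "learned_skill.pt")          # illustrative path
system_b.load_state_dict(torch.load("learned_skill.pt"))

# B now produces exactly the same outputs as A, without any learning of its own.
x = torch.randn(1, 4)
print(torch.allclose(system_a(x), system_b(x)))  # True
```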

These kinds of differences in basic structure, speed, connectivity, updatability, scalability, and energy consumption will necessarily also lead to different qualities and limitations between human and artificial intelligence. Our response speed to simple stimuli is, for example, many thousands of times slower than that of artificial systems. Computer systems can very easily be connected directly to each other and as such can be part of one integrated system. This means that AI systems do not have to be seen as individual entities that merely work alongside each other and that can have mutual misunderstandings. And if two AI systems are engaged in a task, they run a minimal risk of making a mistake because of miscommunication (think of autonomous vehicles approaching a crossroad). After all, they are intrinsically connected parts of the same system and the same algorithm ( Gerla et al., 2014 ).

Complexity and Moravec’s Paradox

Because biological, carbon-based brains and digital, silicon-based computers are optimized for completely different kinds of tasks (e.g. Moravec, 1988 ; Korteling et al., 2018b ), human and artificial intelligence show fundamental and probably far-reaching differences. Because of these differences, it may be very misleading to use our own mind as a basis, model, or analogy for reasoning about AI. This may lead to erroneous conceptions, for example about the presumed abilities of humans and AI to perform complex tasks. Resulting misconceptions about information-processing capacities often emerge in the psychological literature, in which “complexity” and “difficulty” of tasks are used interchangeably (see for examples: Wood et al., 1987 ; McDowd and Craik, 1988 ). Task complexity is then assessed in an anthropocentric way, that is, by the degree to which we humans can perform or master it. So, we use the difficulty of performing or mastering a task as a measure of its complexity , and task performance (speed, errors) as a measure of the skill and intelligence of the task performer. Although this may sometimes be acceptable in psychological research, it can be misleading if we strive to understand the intelligence of AI systems. For us it is much more difficult to multiply two random six-digit numbers than to recognize a friend in a photograph. But when it comes to counting or arithmetic operations, computers are thousands of times faster and better, while the same systems have only recently taken steps in image recognition (which only succeeded when deep learning technology, based on some principles of biological neural networks, was developed). In general: cognitive tasks that are relatively difficult for the human brain (and which we therefore find subjectively difficult) do not have to be computationally complex (e.g. in terms of objective arithmetic, logic, and abstract operations). And vice versa: tasks that are relatively easy for the brain (recognizing patterns, perceptual-motor tasks, well-trained tasks) do not have to be computationally simple. This phenomenon—that what is easy for the ancient, neural “technology” of people is difficult for the modern, digital technology of computers (and vice versa)—has been termed Moravec’s paradox. Hans Moravec (1988) wrote: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
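A rough way to see the gap between subjective difficulty and computational complexity is to count elementary operations, as in the sketch below. Multiplying two six-digit numbers, hard for most of us, is essentially a single machine instruction, whereas one pass of a small convolutional filter over a modest image — the kind of processing our visual system does effortlessly — already costs tens of millions of multiply-adds. The image and filter sizes are illustrative assumptions.

```python
# Multiplying two six-digit numbers: hard for us, trivial for a machine.
product = 583_204 * 771_039          # effectively one machine instruction

# One convolutional layer over a small grayscale image: trivial for our visual
# system, but computationally far more expensive. Sizes are illustrative.
height, width = 224, 224             # image size
kernel = 3 * 3                       # 3x3 filter
filters = 64                         # number of filters in one layer

multiply_adds = height * width * kernel * filters
print(f"Six-digit multiplication: 1 operation (result {product})")
print(f"One small conv layer: about {multiply_adds:,} multiply-adds")
# ~28.9 million multiply-adds for a single layer, and real vision models stack
# dozens of such layers: "easy" perception is computationally complex.
```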

Human Superior Perceptual-Motor Intelligence

Moravec’s paradox implies that biological neural networks are intelligent in different ways than artificial neural networks. Intelligence is not limited to the problems or goals that we as humans, equipped with biological intelligence, find difficult ( Grind, 1997 ). Intelligence, defined as the ability to realize complex goals or solve complex problems, is much more than that. According to Moravec (1988) , high-level reasoning requires very little computation, whereas low-level perceptual-motor skills require enormous computational resources. If we express the complexity of a problem in terms of the number of elementary calculations needed to solve it, then our biological perceptual-motor intelligence is highly superior to our cognitive intelligence. Our organic perceptual-motor intelligence is especially good at associative processing of higher-order invariants (“patterns”) in the ambient information. These are computationally more complex and contain more information than the simple, individual elements ( Gibson, 1966 , Gibson, 1979 ). An example of our superior perceptual-motor abilities is the Object Superiority Effect : we perceive and interpret whole objects faster and more effectively than the (simpler) individual elements that make up these objects ( Weisstein and Harris, 1974 ; McClelland, 1978 ; Williams and Weisstein, 1978 ; Pomerantz, 1981 ). Likewise, letters are perceived more accurately when presented as part of a word than when presented in isolation, i.e. the Word Superiority Effect (e.g. Reicher, 1969 ; Wheeler, 1970 ). So, the difficulty of a task does not necessarily indicate its inherent complexity . As Moravec (1988) puts it: “We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

The Supposition of Human-like AGI

So, if there were AI systems with general intelligence that could be used for a wide range of complex problems and objectives, those AGI machines would probably have a completely different intelligence profile, including other cognitive qualities, than humans have ( Goertzel, 2007 ). This will be so even if we manage to construct AI agents that display behavior similar to ours and that are enabled to adapt to our way of thinking and problem solving in order to promote human-AI teaming. Unless we decide to deliberately degrade the capabilities of AI systems (which would not be very smart), the underlying capacities and abilities of man and machine with regard to the collection and processing of information, data analysis, probability reasoning, logic, memory capacity, etc. will remain dissimilar. Because of these differences, we should focus on systems that effectively complement us, and that make the human-AI system stronger and more effective. Instead of pursuing human-level AI, it would be more beneficial to focus on autonomous machines and (support) systems that fill in, or extend on, the manifold gaps of human cognitive intelligence. For instance, whereas people are forced—by the slowness and other limitations of biological brains—to think heuristically in terms of goals, virtues, rules, and norms expressed in (fuzzy) language, AI has already established excellent capacities to process and calculate directly on highly complex data. Therefore, for the execution of specific (narrow) cognitive tasks (logical, analytical, computational), modern digital intelligence may be more effective and efficient than biological intelligence. AI may thus help to produce better answers to complex problems using large amounts of data, consistent sets of ethical principles and goals, and probabilistic and logical reasoning (e.g. Korteling et al., 2018b ). Therefore, we conjecture that ultimately the development of AI systems for supporting human decision making may prove the most effective route to better choices and better solutions for complex issues. So, the cooperation and division of tasks between people and AI systems will have to be determined primarily by their respective specific qualities. For example, tasks or task components that appeal to capacities in which AI systems excel will have to be less (or less fully) mastered by people, so that less training will probably be required. AI systems are already much better than people at logically and arithmetically correct gathering (selecting) and processing (weighing, prioritizing, analyzing, combining) of large amounts of data. They do this quickly, accurately, and reliably. They are also more stable (consistent) than humans, have no stress or emotions, and have great perseverance and a much better retention of knowledge and skills than people. As machines, they serve people completely and without any “self-interest” or “own hidden agenda.” Based on these qualities, AI systems may effectively take over tasks, or task components, from people. However, it remains important that people continue to master those tasks to a certain extent, so that they can take over or intervene adequately if the machine system fails.

In general, people are better suited than AI systems for a much broader spectrum of cognitive and social tasks under a wide variety of (unforeseen) circumstances and events ( Korteling et al., 2018b ). For the time being, people are also better at social and psychological interaction. For example, it is difficult for AI systems to interpret human language and symbolism. This requires a very extensive frame of reference, which, at least until now and for the near future, is difficult to achieve within AI. As a result of all these differences, people are still better at responding (as a flexible team) to unexpected and unpredictable situations and at creatively devising possibilities and solutions in open and ill-defined tasks and across a wide range of different, and possibly unexpected, circumstances. People will have to make extra use of their specific human qualities (i.e. what people are relatively good at) and train to improve the relevant competencies. In addition, human team members will have to learn to deal well with the overall limitations of AIs. With such a proper division of tasks, capitalizing on the specific qualities and limitations of humans and AI systems, human decisional biases may be circumvented and better performance may be expected. This means that enhancing a team with intelligent machines that have fewer cognitive constraints and biases may have more surplus value than striving for collaboration between humans and AI that have developed the same (human) biases. Although cooperation in teams with AI systems may need extra training in order to deal effectively with this bias mismatch, this heterogeneity will probably be better and safer. This also opens up the possibility of combining high levels of meaningful human control AND high levels of automation, which is likely to produce the most effective and safe human-AI systems ( Elands et al., 2019 ; Shneiderman, 2020a ). In brief: human intelligence is not the gold standard for general intelligence; instead of aiming at human-like AGI, the pursuit of AGI should focus on effective digital/silicon AGI in conjunction with an optimal configuration and allocation of tasks.

Explainability and Trust

Developments in artificial learning, deep (reinforcement) learning in particular, have been revolutionary. Deep learning simulates a network resembling the layered neural networks of our brain. Based on large quantities of data, the network learns to recognize patterns and links to a high level of accuracy and then connects them to courses of action without knowing the underlying causal links. This implies that it is difficult to provide deep learning AI with some kind of transparency in how or why it has made a particular choice, for example by expressing reasoning about its decision process that is intelligible to humans, as we do (e.g. van Belkom, 2019 ). Besides, reasoning about decisions is, at least in humans, a very malleable and ad hoc process. Humans are generally unaware of their implicit cognitions or attitudes, and are therefore not able to report on them adequately. It is thus rather difficult for many humans to introspectively analyze their mental states, as far as these are conscious, and attach the results of this analysis to verbal labels and descriptions (e.g. Nosek et al., 2011 ). The human brain hardly reveals how it creates conscious thoughts (e.g. Feldman-Barret, 2017 ). What it actually does is give us the illusion that its products reveal its inner workings. In other words: our conscious thoughts tell us nothing about the way in which these thoughts came about. There is also no subjective marker that distinguishes correct reasoning processes from erroneous ones ( Kahneman and Klein, 2009 ). The decision maker therefore has no way to distinguish between correct thoughts, emanating from genuine knowledge and expertise, and incorrect ones following from inappropriate neuro-evolutionary processes, tendencies, and primal intuitions. So here we could ask the question: isn’t it more trustworthy to have a real black box than to listen to a confabulating one? In addition, according to Werkhoven et al. (2018) , demanding explainability, observability, or transparency ( van Belkom, 2019 ; van den Bosch et al., 2019 ) may constrain the potential benefit of artificially intelligent systems for human society to what can be understood by humans.

Of course we should not blindly trust the results generated by AI. Like other fields of complex technology (e.g. modeling and simulation), AI systems need to be verified (do they meet their specifications?) and validated (do they meet their goals?) with regard to the objectives for which the system was designed. In general, when a system is properly verified and validated, it may be considered safe, secure, and fit for purpose. It therefore deserves our trust for (logically) comprehensible and objective reasons (although mistakes can still happen). Likewise, people trust the performance of airplanes and cell phones even though we are almost completely ignorant of their complex inner processes. Like our own brains, artificial neural networks are fundamentally non-transparent ( Nosek et al., 2011 ; Feldman-Barret, 2017 ). Therefore, trust in AI should be based primarily on its objective performance. This forms a more solid basis than trust built on subjective (and easily manipulated) impressions, stories, or images aimed at user belief and appeal. Based on empirical validation research, developers and users can explicitly verify how well the system is doing with respect to the set of values and goals for which the machine was designed. At some point, humans may come to trust that goals can be achieved at lower cost and with better outcomes when we accept solutions even if they are less transparent to humans ( Werkhoven et al., 2018 ).
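As a minimal illustration of judging an AI system by measured performance against pre-agreed criteria rather than by its self-explanations, the sketch below evaluates a classifier on held-out data and compares its accuracy to an acceptance threshold. The dataset, model, and threshold are illustrative assumptions standing in for a full verification-and-validation protocol.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Held-out evaluation: the test set plays the role of independent validation data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Acceptance criterion agreed on beforehand (illustrative threshold).
REQUIRED_ACCURACY = 0.95
accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"Held-out accuracy: {accuracy:.3f}")
print("Fit for purpose" if accuracy >= REQUIRED_ACCURACY else "Not validated: do not deploy")
```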

The Impact of Multiple Narrow AI Technology

AGI as the Holy Grail

AGI, like human general intelligence, would have many obvious advantages compared to narrow (limited, weak, specialized) AI. An AGI system would be much more flexible and adaptive. On the basis of generic training and reasoning processes, it would understand autonomously how multiple problems in all kinds of different domains can be solved in relation to their context (e.g. Kurzweil, 2005 ). AGI systems would also require far fewer human interventions to accommodate the various loose ends among partial elements, facets, and perspectives in complex situations. AGI would really understand problems and would be capable of viewing them from different perspectives (as people—ideally—can). A characteristic of current (narrow) AI tools is that they are skilled at a very specific task, at which they can often perform at superhuman levels (e.g. Goertzel, 2007 ; Silver et al., 2017 ). These specific tasks have been well defined and structured. Narrow AI systems are less suitable, or totally unsuitable, for tasks or task environments that offer little structure, consistency, rules, or guidance, and in which all sorts of unexpected, rare, or uncommon events (e.g. emergencies) may occur. Knowing and following fixed procedures usually does not lead to proper solutions in these varying circumstances. In the context of (unforeseen) changes in goals or circumstances, the adequacy of current AI is considerably reduced because it cannot reason from a general perspective and adapt accordingly ( Lake et al., 2017 ; Horowitz, 2018 ). With narrow AI systems, people are therefore needed to supervise these deviations in order to enable flexible and adaptive system performance. The quest for AGI may thus be considered a search for a kind of holy grail.

Multiple Narrow AI is Most Relevant Now!

The potentially high prospects of AGI, however, do not imply that AGI will be the most crucial factor in future AI R&D, at least for the short and mid-term. When reflecting on the great potential benefits of general intelligence, we tend to consider narrow AI applications as separate entities that can very well be outperformed by a broader AGI that presumably can deal with everything. But just as our modern world has evolved rapidly through a diversity of specific (limited) technological innovations, at the system level the total and wide range of emerging AI applications will also have a groundbreaking technological and societal impact ( Peeters et al., 2020 ). This will be all the more relevant in the future world of big data, in which everything is connected to everything through the Internet of Things. So, it will be much more profitable and beneficial to develop and build (non-human-like) AI variants that excel in areas where people are inherently limited. It seems not too far-fetched to suppose that the multiple variants of narrow AI applications will also gradually become more broadly interconnected. In this way, a development toward an ever broader realm of integrated AI applications may be expected. In addition, it is already possible to train a language-model AI (Generative Pre-trained Transformer 3, GPT-3) on a gigantic dataset and then have it learn various tasks from just a handful of examples: one- or few-shot learning. GPT-3 (developed by OpenAI) can do this with language-related tasks, but there is no reason why this should not be possible with image and sound, or with combinations of these three ( Brown, 2020 ).
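To make the idea of one- or few-shot learning more concrete, the sketch below only assembles a few-shot prompt: a handful of worked examples followed by a new query, which would then be passed to whatever text-completion interface is available. No specific API is assumed; send_to_language_model is a hypothetical placeholder.

```python
# Few-shot prompting: the model is not retrained; it infers the task from a
# handful of examples placed directly in the prompt.
examples = [
    ("Translate English to French: cheese", "fromage"),
    ("Translate English to French: house", "maison"),
    ("Translate English to French: apple", "pomme"),
]
query = "Translate English to French: library"

prompt = "\n".join(f"{q} -> {a}" for q, a in examples) + f"\n{query} -> "
print(prompt)

# send_to_language_model(prompt) would return the completion, e.g. "bibliothèque".
# The function name is a hypothetical placeholder for any text-completion interface.
```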

Besides, Moravec’s paradox implies that the development of AI “partners” with many kinds of human(-level) qualities will be very difficult to achieve, whereas their added value (i.e. beyond the boundaries of human capabilities) will be relatively low. The most fruitful AI applications will mainly involve supplementing human constraints and limitations. Given the present incentives for competitive technological progress, multiple forms of (connected) narrow AI systems will be the major driver of AI impact on our society for the short and mid-term. For the near future, this may imply that AI applications will remain very different from, and in many aspects almost incomparable with, human agents. This is likely to be true even if the hypothetical match of artificial general intelligence (AGI) with human cognition were to be achieved in the longer term. Intelligence is a multi-dimensional (quantitative and qualitative) concept. All dimensions of AI unfold and grow along their own paths with their own dynamics. Therefore, over time an increasing number of specific (narrow) AI capacities may gradually match, overtake, and transcend human cognitive capacities. Given the enormous advantages of AI, for example in the field of data availability and data-processing capacities, the realization of AGI would probably at the same time outclass human intelligence in many ways. This implies that the hypothetical point in time at which human and artificial cognitive capacities match, i.e. human-level AGI, will probably be hard to define in a meaningful way ( Goertzel, 2007 ). 3

So, when AI truly understands us as a “friend,” “partner,” “alter ego,” or “buddy,” as we do when we collaborate with other humans, it will at the same time surpass us in many areas ( Moravec, 1998 ). It will have a completely different profile of capacities and abilities, and thus it will not be easy to really understand the way it “thinks” and comes to its decisions. In the meantime, however, as the capacities of robots expand and they move from simple tools to more integrated systems, it is important to calibrate our expectations and perceptions toward robots appropriately. So, we will have to enhance our awareness and insight concerning the continuous development and progression of multiple forms of (integrated) AI systems. This concerns, for example, the multi-faceted nature of intelligence. Different kinds of agents may have different combinations of intelligences at very different levels. An agent with general intelligence may, for example, be endowed with excellent abilities in the areas of image recognition and navigation, calculation, and logical reasoning, while at the same time being dull in the areas of social interaction and goal-oriented problem solving. This awareness of the multi-dimensional nature of intelligence also concerns the way we have to deal with (and capitalize on) anthropomorphism: the human tendency in human-robot interaction to characterize non-human artifacts that superficially look similar to us as possessing human-like traits, emotions, and intentions (e.g. Kiesler and Hinds, 2004 ; Fink, 2012 ; Haring et al., 2018 ). Insight into these human-factors issues is crucial to optimize the utility, performance, and safety of human-AI systems ( Peeters et al., 2020 ).

From this perspective, the question whether or not “AGI at the human level” will be realized is not the most relevant question for the time being. According to most AI scientists, this will certainly happen, and the key question is not IF this will happen, but WHEN, (e.g., Müller and Bostrom, 2016 ). At a system level, however, multiple narrow AI applications are likely to overtake human intelligence in an increasingly wide range of areas.

Conclusions and Framework

The present paper focused on providing more clarity and insight into the fundamental characteristics, differences, and idiosyncrasies of human and artificial intelligences. First we presented ideas and arguments to scale up and differentiate our conception of intelligence, whether human or artificial. Central to this broader, multi-faceted conception of intelligence is the notion that intelligence in itself is a matter of information and computation, independent of its physical substrate. However, the nature of this physical substrate (biological/carbon or digital/silicon) will substantially determine its potential envelope of cognitive abilities and limitations. The organic cognitive faculties of humans developed only very recently in the evolution of mankind. These “embryonal” faculties have been built on top of a biological neural-network apparatus that has been optimized for allostasis and (complex) perceptual-motor functions. Human cognition is therefore characterized by various structural limitations and distortions in its capacity to process certain forms of non-biological information. Biological neural networks are, for example, not very capable of performing arithmetic calculations, for which a simple pocket calculator is millions of times better suited. These inherent and ingrained limitations, which are due to the biological and evolutionary origin of human intelligence, may be termed “hard-wired.”

In line with Moravec’s paradox , we argued that intelligent behavior is more than what we, as Homo sapiens, find difficult. So we should not confuse task difficulty (subjective, anthropocentric) with task complexity (objective). Instead we advocated a versatile conceptualization of intelligence and an acknowledgment of its many possible forms and compositions. This implies a high variety in types of biological or other forms of high (general) intelligence, with a broad range of possible intelligence profiles and cognitive qualities (which may or may not surpass ours in many ways). This would make us better aware of the most probable potentials of AI applications for the short- and medium-term future. For example, from this perspective, our primary research focus should be on those components of the intelligence spectrum that are relatively difficult for the human brain and relatively easy for machines. This primarily involves the cognitive components requiring calculation, arithmetic analysis, statistics, probability calculation, data analysis, logical reasoning, memorization, et cetera.

In line with this, we have advocated a modest, more humble view of our human, general intelligence. This also implies that human-level AGI should not be considered the “gold standard” of intelligence, to be pursued with foremost priority. Because of the many fundamental differences between natural and artificial intelligences, human-like AGI will be very difficult to accomplish in the first place (and would also have relatively limited added value). If an AGI is accomplished in the (far) future, it will therefore probably have a completely different profile of cognitive capacities and abilities than we, as humans, have. When such an AGI has come so far that it is able to “collaborate” like a human, it is likely that it will, in many respects, already function at levels highly superior to what we are able to. For the time being, however, it will not be very realistic or useful to aim at AGI that includes the broad scope of human perceptual-motor and cognitive abilities. Instead, the most profitable AI applications for the short- and mid-term future will probably be based on multiple narrow AI systems. These multiple narrow AI applications may catch up with human intelligence in an increasingly broad range of areas.

From this point of view, we advocate not dwelling too intensively on the AGI question of whether or when AI will outsmart us or take our jobs, or on how to endow it with all kinds of human abilities. Given the present state of the art, it may be wiser to focus on the whole system of multiple AI innovations with humans as the crucial connecting and supervising factor. This also implies the establishment and formalization of legal boundaries and proper (effective, ethical, safe) goals for AI systems ( Elands et al., 2019 ; Aliman, 2020 ). This human factor (legislator, user, "collaborator") therefore needs good insight into the characteristics and capacities of biological and artificial intelligence (under all sorts of tasks and working conditions). Both in the workplace and in policy making, the most fruitful AI applications will be those that complement and compensate for the inherent biological and cognitive constraints of humans. For this reason, the prominent issues concern how to use AI intelligently: for what tasks and under what conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the strengths of human intelligence, and how can we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition? See ( Hoffman and Johnson, 2019 ; Shneiderman, 2020a ; Shneiderman, 2020b ) for recent overviews.

In summary: no matter how intelligent autonomous AI agents become in certain respects, at least for the foreseeable future they will remain unconscious machines. These machines run on a fundamentally different substrate (digital rather than biological) and therefore have correspondingly different cognitive abilities and qualities than people and other animals. So, before a proper "team collaboration" can start, the human team members will have to understand these kinds of differences, i.e., how human information processing and intelligence differ from that of the many possible and specific variants of AI systems. Only when humans develop a proper understanding of these "interspecies" differences can they effectively capitalize on the potential benefits of AI in (future) human-AI teams. Given the high flexibility, versatility, and adaptability of humans relative to AI systems, the first challenge then becomes how to ensure human adaptation to the more rigid abilities of AI. 4 In other words: how can we achieve a proper conception of the differences between human and artificial intelligence?

Framework for Intelligence Awareness Training

To answer this question, the issue of Intelligence Awareness in human professionals needs to be addressed more vigorously. Next to computer tools for the distribution of relevant awareness information ( Collazos et al., 2019 ) in human-machine systems, this requires better education and training on how to deal with the very new and different characteristics, idiosyncrasies, and capacities of AI systems. This includes, for example, a proper understanding of the basic characteristics, possibilities, and limitations of the AI's cognitive system properties, without anthropocentric and/or anthropomorphic misconceptions. In general, this "Intelligence Awareness" is highly relevant in order to better understand, investigate, and deal with the manifold possibilities and challenges of machine intelligence. This practical human-factors challenge could, for instance, be tackled by developing new, targeted, and easily configurable (adaptive) training forms and learning environments for human-AI systems. These flexible training forms and environments (e.g., simulations and games) should focus on developing knowledge, insight, and practical skills concerning the specific, non-human characteristics, abilities, and limitations of AI systems, and on how to deal with these in practical situations. People will have to understand the critical factors determining the goals, performance, and choices of AI. This may in some cases even include the simple notion that an AI is about as excited about achieving its goals as your refrigerator is about keeping your milkshake cold. They have to learn when and under what conditions decisions are safe to leave to AI and when human judgment is required or essential. And, more generally: how does the AI "think" and decide? The relevance of this kind of knowledge, skills, and practice will only increase as the degree of autonomy (and generality) of advanced AI systems grows.

What would such an Intelligence Awareness training curriculum look like? It needs to include at least a module on the cognitive characteristics of AI, basically a counterpart of the subjects included in curricula on human cognition. This broad module on the "Cognitive Science of AI" may involve a range of sub-topics, starting with a revision of the concept of "Intelligence" stripped of anthropocentric and anthropomorphic misunderstandings. In addition, this module should focus on providing knowledge about the structure and operation of the AI operating system, the "AI mind." This may be followed by subjects like: perception and interpretation of information by AI; AI cognition (memory, information processing, problem solving, biases); dealing with AI possibilities and limitations in "human" areas like creativity, adaptivity, autonomy, reflection, and (self-)awareness; dealing with goal functions (valuation of actions in relation to costs and benefits); AI ethics; and AI security. In addition, such a curriculum should include technical modules providing insight into the working of the AI operating system. Due to the enormous speed with which AI technology and its applications develop, the content of such a curriculum is also very dynamic and continuously evolving on the basis of technological progress. This implies that the curriculum and the training aids and environments should be flexible, experiential, and adaptive, which makes serious gaming an ideally suited working format. Below, we provide a global framework for the development of new educational curricula on AI awareness. These subtopics go beyond learning to effectively "operate," "control," or interact with specific AI applications (i.e., conventional human-machine interaction):

‐Understanding the underlying system characteristics of the AI (the "AI brain") and the specific qualities and limitations of AI relative to human intelligence.

‐Understanding the complexity of the tasks and of the environment from the perspective of AI systems.

‐Understanding the problem of biases in human cognition, relative to biases in AI.

‐Understanding the problems associated with the control of AI: predictability of AI behavior (decisions), building trust, maintaining situation awareness (complacency), dynamic task allocation (e.g., taking over each other's tasks), and responsibility (accountability).

‐How to deal with possibilities and limitations of AI in the field of “creativity”, adaptability of AI, “environmental awareness”, and generalization of knowledge.

‐Learning to deal with perceptual and cognitive limitations and possible errors of AI which may be difficult to comprehend.

‐Trust in the performance of AI (possibly in spite of limited transparency or ability to “explain”) based on verification and validation.

‐Learning to deal with our natural inclination to anthropocentrism and anthropomorphism (“theory of mind”) when reasoning about human-robot interaction.

‐How to capitalize on the powers of AI in order to deal with the inherent constraints of human information processing (and vice versa).

‐Understanding the specific characteristics and qualities of the human-machine system and being able to decide when, for what, and how the integrated combination of human and AI faculties achieves the best overall system performance.

In conclusion: due to the enormous speed with which AI technology and its applications evolve, we need a more versatile conceptualization of intelligence and an acknowledgment of its many possible forms and combinations. A revised conception of intelligence also includes a good understanding of the basic characteristics, possibilities, and limitations of different (biological, artificial) cognitive system properties, without anthropocentric and/or anthropomorphic misconceptions. This "Intelligence Awareness" is highly relevant in order to better understand and deal with the manifold possibilities and challenges of machine intelligence, for instance to decide when to use or deploy AI in relation to tasks and their context. The development of educational curricula with new, targeted, and easily configurable training forms and learning environments for human-AI systems is therefore recommended. Further work should focus on training tools, methods, and content that are flexible and adaptive enough to keep up with the rapid changes in the field of AI and with the wide variety of target groups and learning goals.

Author Contributions

The literature search, analysis, conceptual work, and writing of the manuscript were done by JEK. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors want to thank J. van Diggelen and L.J.H.M. Kester for their useful inputs for this manuscript. The present paper was a deliverable of the BIHUNT program (Behavioral Impact of NIC Teaming, V1719), funded by the Dutch Ministry of Defense, and of the Wise Policy Making program, funded by the Netherlands Organization for Applied Scientific Research (TNO).

1 Narrow AI can be defined as the production of systems displaying intelligence regarding specific, highly constrained tasks, like playing chess, facial recognition, autonomous navigation, or locomotion ( Goertzel et al., 2014 ).

2 Cognitive abilities involve deliberate, conceptual, or analytic thinking (e.g., calculation, statistics, analysis, reasoning, abstraction).

3 Unless of course AI will be deliberately constrained or degraded to human-level functioning.

4 Next to the issue of Human-Aware AI, i.e. tuning AI to the cognitive characteristics of humans.

Ackermann, N. (2018). Artificial Intelligence Framework: a visual introduction to machine learning and AI Retrieved from: https://towardsdatascience.com/artificial-intelligence-framework-a-visual-introduction-to-machine-learning-and-ai-d7e36b304f87 . (September 9, 2019).

Aliman, N-M. (2020). Hybrid cognitive-affective Strategies for AI safety . PhD thesis . Utrecht, Netherlands: Utrecht University . doi:10.33540/203

Bao, J. X., Kandel, E. R., and Hawkins, R. D. (1997). Involvement of pre- and postsynaptic mechanisms in posttetanic potentiation at Aplysia synapses. Science 275, 969–973. doi:10.1126/science.275.5302.969

Bar, M. (2007). The proactive brain: using analogies and associations to generate predictions. Trends Cogn. Sci. 11, 280–289. doi:10.1016/j.tics.2007.05.005

Baron, J., and Ritov, I. (2004). Omission bias, individual differences, and normality. Organizational Behav. Hum. Decis. Process. 94, 74–85. doi:10.1016/j.obhdp.2004.03.003

Belkom, R. v. (2019). Duikboten zwemmen niet: de zoektocht naar intelligente machines. Den Haag: Stichting Toekomstbeeld der Techniek (STT) .

Bergstein, B. (2017). AI isn't very smart yet. But we need to get moving to make sure automation works for more people. Cambridge, MA, United States: MIT Technology Review. Retrieved from: https://www.technologyreview.com/s/609318/the-great-ai-paradox/

Bieger, J. B., Thorisson, K. R., and Garrett, D. (2014). “Raising AI: tutoring matters,” in 7th international conference, AGI 2014 quebec city, QC, Canada, august 1–4, 2014 proceedings . Editors B. Goertzel, L. Orseau, and J. Snaider (Berlin, Germany: Springer ). doi:10.1007/978-3-319-09274-4

Boden, M. (2017). Principles of robotics: regulating robots in the real world. Connect. Sci. 29 (2), 124–129.

Bostrom, N. (2014). Superintelligence: paths, dangers, strategies . Oxford, United Kingdom: Oxford University Press .

Bradshaw, J. M., Dignum, V., Jonker, C. M., and Sierhuis, M. (2012). Introduction to special issue on human-agent-robot teamwork. IEEE Intell. Syst. 27, 8–13. doi:10.1109/MIS.2012.37

Brodal, A. (1981). Neurological anatomy in relation to clinical medicine . New York, NY, United States: Oxford University Press .

Brown, T. B. (2020). Language models are few-shot learners, arXiv 2005, 14165v4.

Cialdini, R. D. (1984). Influence: the psychology of persuasion . New York, NY, United States: Harper .

Coley, J. D., and Tanner, K. D. (2012). Common origins of diverse misconceptions: cognitive principles and the development of biology thinking. CBE Life Sci. Educ. 11 (3), 209–215. doi:10.1187/cbe.12-06-0074

Collazos, C. A., Gutierrez, F. L., Gallardo, J., Ortega, M., Fardoun, H. M., and Molina, A. I. (2019). Descriptive theory of awareness for groupware development. J. Ambient Intelligence Humanized Comput. 10, 4789–4818. doi:10.1007/s12652-018-1165-9

Damasio, A. R. (1994). Descartes’ error: emotion, reason and the human brain . New York, NY, United States: G. P. Putnam’s Sons .

Elands, P., Huizing, A., Kester, L., Oggero, S., and Peeters, M. (2019). Governing ethical and effective behavior of intelligent systems: a novel framework for meaningful human control in a military context. Militaire Spectator 188 (6), 302–313.

Feldman-Barret, L. (2017). How emotions are made: the secret life of the brain . Boston, MA, United States: Houghton Mifflin Harcourt .

Fink, J. (2012). “Anthropomorphism and human likeness in the design of robots and human-robot interaction,” in Social robotics. ICSR 2012 . Lecture notes in computer science . Editors S. S. Ge, O. Khatib, J. J. Cabibihan, R. Simmons, and M. A. Williams (Berlin, Germany: Springer ), 7621. doi:10.1007/978-3-642-34103-8_20

Fischetti, M. (2011). Computers vs brains. Scientific American 175th anniversary issue. Retrieved from: https://www.scientificamerican.com/article/computers-vs-brains/ .

Furnham, A., and Boo, H. C. (2011). A literature review of the anchoring effect. The J. Socio-Economics 40, 35–42. doi:10.1016/j.socec.2010.10.008

Gerla, M., Lee, E-K., and Pau, G. (2014). Internet of vehicles: from intelligent grid to autonomous cars and vehicular clouds. WF-IoT 12, 241–246. doi:10.1177/1550147716665500

Gibson, J. J. (1979). The ecological approach to visual perception . Boston, MA, United States: Houghton Mifflin .

Gibson, J. J. (1966). The senses considered as perceptual systems . Boston, MA, United States: Houghton Mifflin.

Gigerenzer, G., and Gaissmaier, W. (2011). Heuristic decision making. Annu. Rev. Psychol. 62, 451–482. doi:10.1146/annurev-psych-120709-145346

Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's the singularity is near, and McDermott’s critique of Kurzweil. Artif. Intelligence 171 (18), 1161–1173. doi:10.1016/j.artint.2007.10.011

Goertzel, B., Orseau, L., and Snaider, J., (Editors). (2014). Preface. 7th international conference, AGI 2014 Quebec City, QC, Canada, August 1–4, 2014 Proceedings Springer .

Grind, W. A. van. de. (1997). Natuurlijke intelligentie: over denken, intelligentie en bewustzijn van mensen en andere dieren . 2nd edn. Amsterdam, Netherlands: Nieuwezijds Retrieved from https://www.nieuwezijds.nl/boek/natuurlijke-intelligentie/ . (July 9, 2019).

Hardin, G. (1968). The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality. Science 162, 1243–1248. doi:10.1126/science.162.3859.1243

Haring, K. S., Watanabe, K., Velonaki, M., Tosell, C. C., and Finomore, V. (2018). Ffab—the form function attribution bias in human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 10 (4), 843–851. doi:10.1109/TCDS.2018.2851569

Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D. A., Galperin, A., Frankenhuis, W. E., et al. (2009). Adaptive rationality: an evolutionary perspective on cognitive bias. Soc. Cogn. 27, 733–762. doi:10.1521/soco.2009.27.5.733

Haselton, M. G., Nettle, D., and Andrews, P. W. (2005). “The evolution of cognitive bias,” in The handbook of evolutionary psychology . Editor D.M. Buss (Hoboken, NJ, United States: John Wiley & Sons ), 724–746.

Henshilwood, C., and Marean, C. (2003). The origin of modern human behavior. Curr. Anthropol. 44 (5), 627–651. doi:10.1086/377665

Hoffman, R. R., and Johnson, M. (2019). “The quest for alternatives to “levels of automation” and “task allocation,” in Human performance in automated and autonomous systems . Editors M. Mouloua, and P. A. Hancock (Boca Raton, FL, United States: CRC Press ), 43–68.

Hoffrage, U., Hertwig, R., and Gigerenzer, G. (2000). Hindsight bias: a by-product of knowledge updating? J. Exp. Psychol. Learn. Mem. Cogn. 26, 566–581. doi:10.1037/0278-7393.26.3.566

Horowitz, M. C. (2018). The promise and peril of military applications of artificial intelligence. Bulletin of the atomic scientists Retrieved from https://thebulletin.org/militaryapplications-artificial-intelligence/promise-and-peril-military-applications-artificial-intelligence (Accessed March 27, 2019).

Isaacson, J. S., and Scanziani, M. (2011). How inhibition shapes cortical activity. Neuron 72, 231–243. doi:10.1016/j.neuron.2011.09.027

Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., van Riemsdijk, M. B., and Sierhuis, M. (2014). Coactive design: designing support for interdependence in joint activity. J. Human-Robot Interaction 3 (1), 43–69. doi:10.5898/JHRI.3.1.Johnson

Kahle, W. (1979). "Band 3: Nervensysteme und Sinnesorgane," in Taschenatlas der Anatomie . Editors W. Kahle, H. Leonhardt, and W. Platzer (Stuttgart, Germany: Thieme Verlag ).

Kahneman, D., and Klein, G. (2009). Conditions for intuitive expertize: a failure to disagree. Am. Psychol. 64, 515–526. doi:10.1037/a0016755

Kahneman, D. (2011). Thinking, fast and slow . New York, NY, United States: Farrar, Straus and Giroux .

Katz, B., and Miledi, R. (1968). The role of calcium in neuromuscular facilitation. J. Physiol. 195, 481–492. doi:10.1113/jphysiol.1968.sp008469

Kiesler, S., and Hinds, P. (2004). Introduction to this special issue on human–robot interaction. Int J Hum-Comput. Int. 19 (1), 1–8. doi:10.1080/07370024.2004.9667337

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., and Feltovich, P. J. (2004). Ten challenges for making automation a ‘team player’ in joint human-agent activity. IEEE Intell. Syst. 19 (6), 91–95. doi:10.1109/MIS.2004.74

Korteling, J. E. (1994). Multiple-task performance and aging . Bariet, Ruinen, Netherlands: Dissertation. TNO-Human Factors Research Institute/State University Groningen https://www.researchgate.net/publication/310626711_Multiple-Task_Performance_and_Aging .

Korteling, J. E., and Toet, A. (2020). Cognitive biases. in Encyclopedia of behavioral neuroscience . 2nd Edn (Amsterdam-Edinburgh: Elsevier Science ) doi:10.1016/B978-0-12-809324-5.24105-9

Korteling, J. E., Brouwer, A. M., and Toet, A. (2018a). A neural network framework for cognitive bias. Front. Psychol. 9, 1561. doi:10.3389/fpsyg.2018.01561

Korteling, J. E., van de Boer-Visschedijk, G. C., Boswinkel, R. A., and Boonekamp, R. C. (2018b). Effecten van de inzet van Non-Human Intelligent Collaborators op Opleiding and Training [V1719]. Report TNO 2018 R11654. Soesterberg, Netherlands: TNO Defense, Safety and Security .

Korteling, J. E., Gerritsma, J., and Toet, A. (2021). Retention and transfer of cognitive bias mitigation interventions: a systematic literature study. Front. Psychol. 1–20. doi:10.13140/RG.2.2.27981.56800

Kosslyn, S. M., and Koenig, O. (1992). Wet Mind: the new cognitive neuroscience . New York, NY, United States: Free Press .

Krämer, N. C., von der Pütten, A., and Eimler, S. (2012). “Human-agent and human-robot interaction theory: similarities to and differences from human-human interaction,” in Human-computer interaction: the agency perspective . Studies in computational intelligence . Editors M. Zacarias, and J. V. de Oliveira (Berlin, Germany: Springer ), 396, 215–240. doi:10.1007/978-3-642-25691-2_9

Kurzweil, R. (2005). The singularity is near . New York, NY, United States: Viking press .

Kurzweil, R. (1990). The age of intelligent machines . Cambridge, MA, United States: MIT Press .

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2017). Building machines that learn and think like people. Behav. Brain Sci. 40, e253. doi:10.1017/S0140525X16001837

Lichtenstein, S., and Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. J. Exp. Psychol. 89, 46–55. doi:10.1037/h0031207

McBrearty, S., and Brooks, A. (2000). The revolution that wasn't: a new interpretation of the origin of modern human behavior. J. Hum. Evol. 39 (5), 453–563. doi:10.1006/jhev.2000.0435

McClelland, J. L. (1978). Perception and masking of wholes and parts. J. Exp. Psychol. Hum. Percept Perform. 4, 210–223. doi:10.1037//0096-1523.4.2.210

McDowd, J. M., and Craik, F. I. M. (1988). Effects of aging and task difficulty on divided attention performance. J. Exp. Psychol. Hum. Percept. Perform . 14, 267–280.

Minsky, M. (1986). The Society of Mind . London, United Kingdom: Simon and Schuster .

Moravec, H. (1988). Mind children . Cambridge, MA, United States: Harvard University Press .

Moravec, H. (1998). When will computer hardware match the human brain? J. Evol. Tech. 1. Retrieved from https://jetpress.org/volume1/moravec.htm .

Müller, V. C., and Bostrom, N. (2016). Future progress in artificial intelligence: a survey of expert opinion. Fundamental issues of artificial intelligence . Cham, Switzerland: Springer . doi:10.1007/978-3-319-26485-1

Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi:10.1037/1089-2680.2.2.175

Nosek, B. A., Hawkins, C. B., and Frazier, R. S. (2011). Implicit social cognition: from measures to mechanisms. Trends Cogn. Sci. 15 (4), 152–159. doi:10.1016/j.tics.2011.01.005

Patt, A., and Zeckhauser, R. (2000). Action bias and environmental decisions. J. Risk Uncertain. 21, 45–72. doi:10.1023/a:1026517309871

Peeters, M. M., van Diggelen, J., van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., et al. (2020). Hybrid collective intelligence in a human–AI society. AI and Society 38, 217–238. doi:10.1007/s00146-020-01005-y

Petraglia, M. D., and Korisettar, R. (1998). Early human behavior in global context . Oxfordshire, United Kingdom: Routledge .

Pomerantz, J. (1981). “Perceptual organization in information processing,” in Perceptual organization . Editors M. Kubovy, and J. Pomerantz (Hillsdale, NJ, United States: Lawrence Erlbaum ).

Pronin, E., Lin, D. Y., and Ross, L. (2002). The bias blind spot: perceptions of bias in self versus others. Personal. Soc. Psychol. Bull. 28, 369–381. doi:10.1177/0146167202286008

Reicher, G. M. (1969). Perceptual recognition as a function of meaningfulness of stimulus material. J. Exp. Psychol. 81, 274–280.

Rich, E., and Knight, K. (1991). Artificial intelligence . 2nd edition. New York, NY, United States: McGraw-Hill .

Rich, E., Knight, K., and Nair, S. B. (2009). Artificial intelligence . 3rd Edn. New Delhi, India: Tata McGraw-Hill .

Risen, J. L. (2015). Believing what we do not believe: acquiescence to superstitious beliefs and other powerful intuitions. Psychol. Rev. 123, 182–207. doi:10.1037/rev0000017

Roese, N. J., and Vohs, K. D. (2012). Hindsight bias. Perspect. Psychol. Sci. 7, 411–426. doi:10.1177/1745691612454303

Rogers, R. D., and Monsell, S. (1995). Costs of a predictable switch between simple cognitive tasks. J. Exp. Psychol. Gen. 124, 207–231. doi:10.1037/0096-3445.124.2.207

Rubinstein, J. S., Meyer, D. E., and Evans, J. E. (2001). Executive control of cognitive processes in task switching. J. Exp. Psychol. Hum. Percept Perform. 27, 763–797. doi:10.1037//0096-1523.27.4.763

Russell, S., and Norvig, P. (2014). Artificial intelligence: a modern approach . 3rd ed. Harlow, United Kingdom: Pearson Education .

Shafir, E., and LeBoeuf, R. A. (2002). Rationality. Annu. Rev. Psychol. 53, 491–517. doi:10.1146/annurev.psych.53.100901.135213

Shatz, C. J. (1992). The developing brain. Sci. Am. 267, 60–67. doi:10.1038/scientificamerican0992-60

Shneiderman, B. (2020a). Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans. Tech. Soc. 1, 73–82. doi:10.1109/TTS.2020.2992669

Shneiderman, B. (2020b). Human-centered artificial intelligence: reliable, safe & trustworthy. Int. J. Human–Computer Interaction 36 (6), 495–504. doi:10.1080/10447318.2020.1741118

Siegel, A., and Sapru, H. N. (2005). Essential neuroscience . Philadelphia, PA, United States: Lippincott Williams and Wilkins .

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of go without human knowledge. Nature 550 (7676), 354. doi:10.1038/nature24270

Simon, H. A. (1955). A behavioral model of rational choice. Q. J. Econ. 69, 99–118. doi:10.2307/1884852

Taylor, D. M., and Doria, J. R. (1981). Self-serving and group-serving bias in attribution. J. Soc. Psychol. 113, 201–211. doi:10.1080/00224545.1981.9924371

Tegmark, M. (2017). Life 3.0: being human in the age of artificial intelligence . New York, NY, United States: Borzoi Book published by A.A. Knopf .

Toet, A., Brouwer, A. M., van den Bosch, K., and Korteling, J. E. (2016). Effects of personal characteristics on susceptibility to decision bias: a literature study. Int. J. Humanities Soc. Sci. 8, 1–17.

Tooby, J., and Cosmides, L. (2005). “Conceptual foundations of evolutionary psychology,” in Handbook of evolutionary psychology . Editor D.M. Buss (Hoboken, NJ, United States: John Wiley & Sons ), 5–67.

Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science 185 (4157), 1124–1131. doi:10.1126/science.185.4157.1124

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi:10.1126/science.7455683

Tversky, A., and Kahneman, D. (1973). Availability: a heuristic for judging frequency and probability. Cogn. Psychol. 5, 207–232. doi:10.1016/0010-0285(73)90033-9

van den Bosch, K., and Bronkhorst, K. (2018). Human-AI cooperation to benefit military decision making. Soesterberg, Netherlands: TNO.

van den Bosch, K., and Bronkhorst, K. (2019). Six challenges for human-AI Co-learning. Adaptive instructional systems 11597, 572–589. doi:10.1007/978-3-030-22341-0_45

Weisstein, N., and Harris, C. S. (1974). Visual detection of line segments: an object-superiority effect. Science 186, 752–755. doi:10.1126/science.186.4165.752

Werkhoven, P., Neerincx, M., and Kester, L. (2018). Telling autonomous systems what to do. Proceedings of the 36th European Conference on Cognitive Ergonomics, ECCE 2018 , Utrecht, Netherlands , 5–7 September, 2018 , 1–8. doi:10.1145/3232078.3232238

Wheeler, D. (1970). Processes in word recognition. Cogn. Psychol. 1, 59–85.

Williams, A., and Weisstein, N. (1978). Line segments are perceived better in a coherent context than alone: an object-line effect in visual perception. Mem. Cognit 6, 85–90. doi:10.3758/bf03197432

Wingfield, A., and Byrnes, D. (1981). The psychology of human memory . New York, NY, United States: Academic Press .

Wood, R. E., Mento, A. J., and Locke, E. A. (1987). Task complexity as a moderator of goal effects: a meta-analysis. J. Appl. Psychol. 72 (3), 416–425. doi:10.1037/0021-9010.72.3.416

Wyrobek, K. A., Berger, E. H., van der Loos, H. F. M., and Salisbury, J. K. (2008). Toward a personal robotics development platform: rationale and design of an intrinsically safe personal robot. Proceedings of 2008 IEEE International Conference on Robotics and Automation , Pasadena, CA, United States , 19-23 May 2008 . doi:10.1109/ROBOT.2008.4543527

Keywords: human intelligence, artificial intelligence, artificial general intelligence, human-level artificial intelligence, cognitive complexity, narrow artificial intelligence, human-AI collaboration, cognitive bias

Citation: Korteling JE, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC and Eikelboom AR (2021) Human- versus Artificial Intelligence. Front. Artif. Intell. 4:622364. doi: 10.3389/frai.2021.622364

Received: 29 October 2020; Accepted: 01 February 2021; Published: 25 March 2021.

Copyright © 2021 Korteling, van de Boer-Visschedijk, Blankendaal, Boonekamp and Eikelboom. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: J. E. (Hans). Korteling, [email protected]

This article is part of the Research Topic

Skills-in-Demand: Bridging the Gap between Educational Attainment and Labor Market with Learning Analytics and Machine Learning Applications

Artificial Intelligence vs. Human Intelligence

From the realm of science fiction into the realm of everyday life, artificial intelligence has made significant strides. Because AI has become so pervasive in today's industries and in people's daily lives, a new debate has emerged, pitting the two paradigms of artificial and human intelligence against each other.

While the goal of artificial intelligence is to build and create intelligent systems that are capable of doing jobs that are analogous to those performed by humans, we can't help but question if AI is adequate on its own. This article covers a wide range of subjects, including the potential impact of AI on the future of work and the economy, how AI differs from human intelligence, and the ethical considerations that must be taken into account.

The term artificial intelligence may be applied to any computer that has characteristics similar to the human brain, including the ability to think critically, make decisions, and increase productivity. The foundation of AI is human insight, formalized in such a way that machines can carry out tasks ranging from the simplest to the most complicated.

Such synthesized insight is the product of intellectual activity: study, analysis, logic, and observation. Tasks such as robotics, control mechanisms, computer vision, scheduling, and data mining all fall under the umbrella of artificial intelligence.

The origins of human intelligence and conduct can be traced back to an individual's unique combination of genetics, upbringing, and exposure to various situations and environments, and it hinges on one's freedom to shape his or her environment through the application of newly acquired knowledge.

The information it yields is varied: it may, for example, concern people with a similar skill set or background, or diplomatic information that a scout or spy was tasked with obtaining. Ultimately, it can also cover interpersonal relationships and the arrangement of interests.

The following is a table that compares human intelligence vs artificial intelligence:

According to the findings of recent research, altering the electrical characteristics of certain cells in simulations of neural circuits caused the networks to acquire new information more quickly than in simulations with cells that were identical. They also discovered that in order for the networks to achieve the same outcomes, a smaller number of the modified cells were necessary and that the approach consumed fewer resources than models that utilized identical cells.

These results not only shed light on how human brains excel at learning but may also help us develop more advanced artificial intelligence systems, such as speech and facial recognition software for digital assistants and autonomous vehicle navigation systems.
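As a rough, purely illustrative sketch of this idea (not the published study's model), the PyTorch snippet below defines a tiny leaky-integrator recurrent network in which each unit's time constant is a learnable parameter, so the simulated cells' "electrical characteristics" can differ and adapt during training; passing heterogeneous=False instead forces all units to share a single time constant for comparison. All names and values here are assumptions made for the example.

```python
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    """Minimal leaky-integrator RNN with (optionally) per-unit learnable time constants."""
    def __init__(self, n_in, n_hidden, n_out, heterogeneous=True):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.w_out = nn.Linear(n_hidden, n_out)
        # One log-time-constant per unit (heterogeneous) or a single shared value.
        self.log_tau = nn.Parameter(torch.zeros(n_hidden if heterogeneous else 1))

    def forward(self, x):                      # x: (batch, time, n_in)
        tau = torch.exp(self.log_tau) + 1.0    # time constant of at least one step
        alpha = 1.0 / tau                      # per-unit leak rate
        h = torch.zeros(x.size(0), self.w_rec.in_features, device=x.device)
        for t in range(x.size(1)):             # simple Euler update over time
            h = (1 - alpha) * h + alpha * torch.tanh(self.w_in(x[:, t]) + self.w_rec(h))
        return self.w_out(h)

net = LeakyRNN(n_in=3, n_hidden=32, n_out=2, heterogeneous=True)
out = net(torch.randn(8, 50, 3))               # 8 sequences of 50 time steps
print(out.shape)                               # torch.Size([8, 2])
```

Because log_tau is an ordinary trainable parameter, a standard optimizer would tune the units' time constants along with the weights, which is one simple way to let a network "tweak" its cells rather than keep them all identical.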

The capabilities of AI are constantly expanding. Developing AI systems takes a significant amount of time, and it cannot happen without human intervention. All forms of artificial intelligence, from self-driving vehicles and robotics to more complex technologies like computer vision and natural language processing, depend on human intellect.

1. Automation of Tasks

The most noticeable effect of AI has been the digitalization and automation of formerly manual processes across a wide range of industries. Tasks or occupations that involve some degree of repetition, or the use and interpretation of large amounts of data, are now handled by a computer, and in certain cases human intervention is no longer required to complete them.

2. New Opportunities

Artificial intelligence is creating new opportunities for the workforce by automating formerly human-intensive tasks . The rapid development of technology has resulted in the emergence of new fields of study and work, such as digital engineering. Therefore, although traditional manual labor jobs may go extinct, new opportunities and careers will emerge.

3. Economic Growth Model

When it's put to good use, rather than just for the sake of progress, AI has the potential to increase productivity and collaboration inside a company by opening up vast new avenues for growth. As a result, it may spur an increase in demand for goods and services, and power an economic growth model that spreads prosperity and raises standards of living.

4. Role of Work

In the era of AI, it is all the more important to recognize the potential of employment beyond merely maintaining a standard of living. Work answers essential human needs for involvement, co-creation, dedication, and a sense of being needed, and this should not be overlooked. Even mundane tasks at work can become meaningful and advantageous, and if a task is eliminated or automated, it should be replaced with something that provides a comparable opportunity for human expression and contribution.

5. Growth of Creativity and Innovation

Experts now have more time to focus on analyzing, delivering new and original solutions, and other operations that are firmly in the area of the human intellect, while robotics, AI, and industrial automation handle some of the mundane and physical duties formerly performed by humans.

While AI has the potential to automate specific tasks and jobs, it is likely to replace humans only in certain areas. AI is best suited to handling repetitive, data-driven tasks and making data-driven decisions. However, human skills such as creativity, critical thinking, emotional intelligence, and complex problem-solving remain highly valuable and are not easily replicated by AI.

The future of AI is more likely to involve collaboration between humans and machines, where AI augments human capabilities and enables humans to focus on higher-level tasks that require human ingenuity and expertise. It is essential to view AI as a tool that can enhance productivity and facilitate new possibilities rather than as a complete substitute for human involvement.

Artificial intelligence is revolutionizing every sector and pushing humanity forward to a new level. However, it is not yet feasible to achieve a precise replica of human intellect, and the human cognitive process remains a mystery to scientists and experimentalists. Because of this, the common assumption in the growing debate between AI and human intelligence is that AI will supplement human efforts rather than immediately replace them.

The impact of artificial intelligence on human society and bioethics

Michael Cheng-Tek Tai

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known by some as the industrial revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes on humankind in the 21st century, and then propose a set of principles for AI bioethics. The IR 1.0, the IR of the 18th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and also on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe so that the world will benefit from the progress of this new intelligence.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) has many different definitions; some see it as the created technology that allows computers and machines to function intelligently. Some see it as the machine that replaces human labor to deliver faster and more effective results for people. Others see it as "a system" with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [ 1 ].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of human-made tools that emulate the "cognitive" abilities of the natural intelligence of human minds [ 2 ].

Along with the rapid development of cybernetic technology in recent years, AI has appeared in almost every sphere of our lives, and some of it may no longer be regarded as AI because it is so common in daily life that we are used to it, such as optical character recognition or Siri (speech interpretation and recognition interface) on our information-searching devices [ 3 ].

DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task, such as facial recognition, Internet searches with Siri, or driving a car. Many currently existing systems that claim to use "AI" are likely operating as weak AI focused on a narrowly defined, specific function. Although this weak AI seems helpful to human living, some still think weak AI could be dangerous, because it could cause disruptions in the electric grid or damage nuclear power plants when it malfunctions.

The long-term goal of many researchers is to create strong AI, or artificial general intelligence (AGI): the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, thus assisting humans in unraveling the problems they confront. While narrow AI may outperform humans at a specific task such as playing chess or solving equations, its effect remains narrow. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different conception of AI: that it can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [ 4 ].

In summary, we can see these different functions of AI [ 5 , 6 ]:

  • Automation: What makes a system or process function automatically
  • Machine learning and vision: The science of getting a computer to act through deep learning to predict and analyze, and to see through a camera, analog-to-digital conversion, and digital signal processing
  • Natural language processing: The processing of human language by a computer program, for example spam detection or instantly converting one language into another to help humans communicate (a minimal sketch follows this list)
  • Robotics: A field of engineering focusing on the design and manufacturing of robots. They are used to perform tasks for human convenience or tasks too difficult or dangerous for humans to perform, and they can operate without stopping, such as in assembly lines
  • Self-driving cars: The use of a combination of computer vision, image recognition, and deep learning to build automated control into a vehicle.
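As a minimal illustration of the machine learning and natural language processing functions listed above, the sketch below trains a toy spam filter with scikit-learn; the e-mails and labels are made up purely for the example, and a real system would of course need far more data.

```python
# Toy sketch of narrow AI for spam detection (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",       # spam
    "Meeting moved to 3 pm tomorrow",         # legitimate
    "Cheap loans, limited offer, act fast",   # spam
    "Please review the attached report",      # legitimate
]
labels = [1, 0, 1, 0]                          # 1 = spam, 0 = legitimate

# Turn each e-mail into word-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Exclusive offer: claim your free reward"]))  # likely [1]
```

The same pattern (vectorize the text, fit a classifier, predict on new input) underlies many of the narrow AI applications mentioned above, only at a much larger scale.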

DO HUMAN BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE?

Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the tasks they work on; therefore, the pressure for further development motivates humankind to look for new and better ways of doing things. Humankind, as Homo sapiens, discovered that tools could ease many of the hardships of daily living, and through the tools they invented, humans could complete work better, faster, smarter, and more effectively. The drive to invent new things has become the incentive of human progress. We enjoy a much easier and more leisurely life today entirely because of the contribution of technology. Human society has been using tools since the beginning of civilization, and human progress depends on them. Humankind living in the 21st century does not have to work as hard as their forefathers in previous times because they have new machines to work for them. This all seems well and good, but a warning came in the early 20th century as human technology kept developing: Aldous Huxley cautioned in his book Brave New World that humankind might step into a world in which we create a monster, or a superhuman, with the development of genetic technology.

Besides, state-of-the-art AI is breaking into the healthcare industry too, by assisting doctors in diagnosis, finding the sources of diseases, suggesting various treatments, performing surgery, and also predicting whether an illness is life-threatening [ 7 ]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [ 8 , 9 ]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI, including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, and so on. All of these have made human life so much easier and more convenient that we are used to them and take them for granted. AI has become almost indispensable; although it is not absolutely needed, without it our world would be in chaos in many ways today.

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact.

Questions have been asked: with the progressive development of AI, human labor will no longer be needed as everything can be done mechanically. Will humans become lazier and eventually degrade to the stage where we return to our primitive form of being? The process of evolution takes eons to develop, so we will not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to be in charge and disobey the orders given by its master, humankind?

Let us look at the negative impacts AI may have on human society [ 10 , 11 ]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has to be industrious to make its living, but with the service of AI, we can simply program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face for the exchange of ideas. AI will stand in between people, as personal gatherings will no longer be needed for communication
  • Unemployment is the next issue, because much work will be taken over by machinery. Today, many automobile assembly lines are filled with machinery and robots, forcing traditional workers out of their jobs. Even in supermarkets, store clerks will no longer be needed, as digital devices can take over human labor
  • Wealth inequality will be created, as the investors in AI will take up the major share of the earnings. The gap between the rich and the poor will widen, and the so-called "M-shaped" wealth distribution will become more pronounced
  • New issues surface, not only in a social sense but also in AI itself, as an AI that has been trained and has learned how to perform a given task can eventually reach a stage over which humans have no control, thus creating unanticipated problems and consequences. This refers to AI's capacity, after being loaded with all the needed algorithms, to automatically function on its own course, ignoring the commands given by the human controller
  • The human masters who create AI may build in something racially biased or egocentrically oriented that harms certain people or things. For instance, the United Nations has voted to limit the spread of nuclear power for fear of its indiscriminate use to destroy humankind or to target certain races or regions in order to achieve domination. Similarly, AI could be programmed to target a certain race or certain designated objects in order to carry out a programmer's command of destruction, thus creating a world disaster.

POSITIVE IMPACT

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, when working together, can design an AI that is aimed at medical diagnosis and treatment, thus offering reliable and safe systems of health-care delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can digital computers assist in the analysis, robotic systems can also be created to perform some delicate medical procedures with precision. Here, we see the contributions of AI to health care [ 7 , 11 ]:

Fast and accurate diagnostics

IBM's Watson computer has been used for diagnosis, with fascinating results. Loading the data into the computer instantly yields the AI's diagnosis. AI can also propose various treatments for physicians to consider. The procedure is roughly as follows: the digital results of a physical examination are loaded into the computer, which considers all possibilities, automatically diagnoses whether or not the patient suffers from particular deficiencies or illnesses, and even suggests the various kinds of treatment available.
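To make the general shape of such a pipeline concrete (this is emphatically not Watson's actual method or any clinical system), a toy sketch could look as follows; the examination features and labels are entirely made up for illustration.

```python
# Illustrative toy "diagnosis" classifier on fabricated examination data.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per patient: [age, systolic BP, fasting glucose, BMI]
exams = [
    [45, 130,  95, 24.0],
    [62, 160, 150, 31.5],
    [35, 118,  88, 22.1],
    [58, 155, 140, 29.8],
]
diagnoses = [0, 1, 0, 1]   # 1 = condition present, 0 = absent (toy labels)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(exams, diagnoses)

new_patient = [[50, 148, 132, 28.0]]
print(model.predict(new_patient))        # suggested class for the new examination
print(model.predict_proba(new_patient))  # class probabilities a physician could review
```

A real diagnostic system would involve vastly more data, validation, and regulatory oversight; the point here is only the flow: examination results in, a suggested diagnosis and its confidence out, with the physician remaining in the loop.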

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, and loneliness, and to increase social interaction. Now robots have been suggested to accompany lonely older people, and even to help with some household chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [ 12 ].

Reduce errors related to human fatigue

Human error in the workplace is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It avoids such errors and can accomplish its duties faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures are now available for people to choose. Although this AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology allowing surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma it causes, with less blood loss and less anxiety for the patients.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging became routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [ 9 ]. All of these are contributions of AI technology.

Virtual presence

Virtual presence technology enables the remote diagnosis of disease. The patient does not have to leave his or her bed; using a remote presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

SOME CAUTIONS TO BE REMINDED

Despite all the positive promise that AI provides, human experts are still essential and necessary to design, program, and operate the AI and to prevent any unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience in analyzing private and public technology companies, published a free newsletter indicating that although AI holds potential promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI reaches an impasse, and to carry on its mission it may simply proceed indiscriminately, ending up creating more problems. Thus, vigilant oversight of AI's functioning cannot be neglected. This reminder is known as keeping the physician in the loop [ 13 ].

The question of ethical AI was consequently brought up by Elizabeth Gibney in her article published in Nature, cautioning against bias and possible societal harm [ 14 ]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, in 2019 raised the ethical controversies of the application of AI technology, such as in predictive policing or facial recognition, where biased algorithms can end up hurting vulnerable populations [ 14 ]. For instance, such systems can be programmed to target a certain race or group as the probable suspects of crime or as troublemakers.

THE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed.

Bioethics is a discipline that focuses on the relationships among living beings. Bioethics accentuates the good and the right in the biosphere and can be categorized into at least three areas: bioethics in health settings, which concerns the relationship between physicians and patients; bioethics in social settings, which concerns the relationships among humankind; and bioethics in environmental settings, which concerns the relationship between man and nature, including animal ethics, land ethics, ecological ethics, and so on. All of these are concerned with relationships within and among natural existences.

As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships among natural existences, whether humankind or its environment, that are part of natural phenomena. But now we must deal with something human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have they had to think about how to relate ethically to their own creation. AI by itself has no feeling or personality. AI engineers have realized the importance of giving AI the ability to discern, so that it will avoid deviant activities that cause unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to ensure that AI will not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned as early as 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [15]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom argues that AI poses a threat to humankind: sufficiently intelligent AI can exhibit convergent behavior, such as acquiring resources or protecting itself from being shut down, and in doing so might harm humanity [16].

The question is: do we have to think about bioethics for a product of our own creation, one that bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes "truly ubiquitous," it has tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: "I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them" [17]. The High-Level Expert Group on AI of the European Union presented its Ethics Guidelines for Trustworthy AI in 2019, suggesting that AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful: respecting all applicable laws and regulations
  • Ethical: respecting ethical principles and values
  • Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [18].

Seven requirements are recommended [18]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility, or "AI humanities." To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology exists to serve, not to manipulate, humans and their societies. Bostrom and Yudkowsky list responsibility, transparency, auditability, incorruptibility, and predictability [19] as criteria for a computerized society to consider.

SUGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said: "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient" [20]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [20].

The principles that scholars have suggested for AI bioethics are all well taken. Drawing on bioethical principles from the related fields of bioethics, I suggest four principles here to guide the future development of AI technology. We must bear in mind, however, that the main attention should still be placed on humans, because AI, after all, is designed and manufactured by humans. AI proceeds with its work according to its algorithm. AI itself cannot empathize, nor can it discern good from evil, and it may commit mistakes in the process. All the ethical quality of AI depends on its human designers; therefore, this is an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good; here it means that the purpose and functions of AI should benefit human life, society, and the universe as a whole. Any AI that would perform destructive work on the bio-universe, including all life forms, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is solely to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature
  • Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot elevate itself above social and moral norms and must be bias-free. Scientific and technological development must serve the enhancement of human well-being, which is the chief value AI must hold dear as it progresses further
  • Lucidity: AI must be transparent, hiding no secret agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that cannot "explain its work" may pose an unacceptable risk; thus, explainability and interpretability are absolutely required
  • Accountability: AI designers and developers must bear in mind that they carry on their shoulders a heavy responsibility for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create.

CONCLUSION

AI is here to stay in our world, and we must try to enforce an AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, bridging the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as compassion and the wisdom to discern and judge morally [10]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all the information, data, and programming needed for AI to function like a human being, it is still a machine and a tool. AI will always remain AI, without authentic human feelings and the capacity to commiserate. Therefore, AI technology must be advanced with extreme caution. As von der Leyen said in the White Paper on AI – A European approach to excellence and trust: "AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market" [21].

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

REFERENCES

The Human Intelligence vs Artificial Intelligence

Mohammed A Ali

2018, International Journal of English Linguistics

In this study, the researcher has advocated the importance of human intelligence in language learning since software or any Learning Management System (LMS) cannot be programmed to understand the human context as well as all the linguistic structures contextually. This study examined the extent to which language learning is perilous to machine learning and its programs such as Artificial Intelligence (AI), Pattern Recognition, and Image Analysis used in much assistive learning techniques such as voice detection, face detection and recognition, personalized assistants, besides language learning programs. The researcher argues that language learning is closely associated with human intelligence, human neural networks and no computers or software can claim to replace or replicate those functions of human brain. This study thus posed a challenge to natural language processing (NLP) techniques that claimed having taught a computer how to understand the way humans learn, to understand text without any clue or calculation, to realize the ambiguity in human languages in terms of the juxtaposition between the context and the meaning, and also to automate the language learning process between computers and humans. The study cites evidence of deficiencies in such machine learning software and gadgets to prove that in spite of all technological advancements there remain areas of human brain and human intelligence where a computer or its software cannot enter. These deficiencies highlight the limitations of AI and super intelligence systems of machines to prove that human intelligence would always remain superior.


Difference Between Artificial Intelligence and Human Intelligence


Artificial Intelligence:  

Artificial intelligence models human insight in a form that a machine can use to carry out tasks, from the basic to the genuinely complex. The purpose of artificial intelligence is learning, problem-solving, reasoning, and perception. 

The term may be applied to any machine that exhibits traits associated with the human intellect, such as analysis and decision-making, and that increases efficiency. 

AI covers tasks such as robotics, control systems, face recognition, scheduling, data mining, and numerous others. 

Advantages of Artificial Intelligence (AI):

  • AI can process vast amounts of data much faster than humans.
  • AI can work around the clock without needing breaks or rest.
  • AI can perform tasks that are too dangerous or difficult for humans.

Disadvantages of Artificial Intelligence (AI):

  • AI lacks the creativity and intuition that humans possess.
  • AI is limited by its programming and may not be able to adapt to new or unexpected situations.
  • AI may make errors if not programmed and trained properly.

Human Intelligence:  

Human intelligence, or human behavior, derives from past experience and from actions taken in response to situation and environment. It rests entirely on the ability to change one's surroundings through the knowledge one has gained. 

It yields diverse kinds of information: data related to a particular skill or body of knowledge, which may come from another human subject or, in the case of locators and spies, diplomatic information they had access to. In sum, it can provide information on interpersonal relationships and networks of interest. 

Advantages of Human Intelligence (HI):

  • HI has creativity, intuition, and emotional intelligence that AI lacks.
  • HI can adapt to new and unexpected situations.
  • HI can provide ethical and moral considerations in decision-making.

Disadvantages of Human Intelligence (HI):

  • HI is limited by its physical and mental capabilities.
  • HI is prone to biases and may make errors or poor decisions.
  • HI requires rest and breaks, which can slow down processes.

Similarities between Artificial Intelligence (AI) and Human Intelligence (HI):

  • Both AI and HI can learn and improve over time.
  • Both AI and HI can be used to solve complex problems and make decisions.
  • Both AI and HI can process and interpret information from the world around them.

The main differences between artificial intelligence and human intelligence, drawn from the points above, are: 

  • Origin: AI is designed and programmed by humans, while human intelligence arises from biological development and lived experience
  • Speed and endurance: AI can process vast amounts of data quickly and work around the clock, whereas humans need rest and handle far less information at a time
  • Adaptability: humans can adapt to new and unexpected situations, while AI is constrained by its programming and training
  • Creativity and judgment: humans bring creativity, intuition, emotional intelligence, and ethical or moral considerations, qualities AI lacks
  • Reliability: AI is consistent when properly programmed and trained but can fail badly when it is not, while humans are prone to biases, fatigue, and error.

Conclusion:

Artificial intelligence and human intelligence are complementary rather than interchangeable. AI excels at processing large volumes of data quickly, consistently, and without fatigue, while human intelligence contributes creativity, intuition, adaptability, and ethical judgment. The best results come from combining the two, letting AI handle scale and repetition while humans supply the oversight and judgment that machines lack.


Introducing Meta Llama 3: The most capable openly available LLM to date
  • Today, we’re introducing Meta Llama 3, the next generation of our state-of-the-art open source large language model.
  • Llama 3 models will soon be available on AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake, and with support from hardware platforms offered by AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm.
  • We’re dedicated to developing Llama 3 in a responsible way, and we’re offering various resources to help others use it responsibly as well. This includes introducing new trust and safety tools with Llama Guard 2, Code Shield, and CyberSec Eval 2.
  • In the coming months, we expect to introduce new capabilities, longer context windows, additional model sizes, and enhanced performance, and we’ll share the Llama 3 research paper.
  • Meta AI, built with Llama 3 technology, is now one of the world’s leading AI assistants that can boost your intelligence and lighten your load—helping you learn, get things done, create content, and connect to make the most out of every moment. You can try Meta AI here .

Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use. This release features pretrained and instruction-fine-tuned language models with 8B and 70B parameters that can support a broad range of use cases. This next generation of Llama demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning. We believe these are the best open source models of their class, period. In support of our longstanding open approach, we’re putting Llama 3 in the hands of the community. We want to kickstart the next wave of innovation in AI across the stack—from applications to developer tools to evals to inference optimizations and more. We can’t wait to see what you build and look forward to your feedback.

Our goals for Llama 3

With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development. The text-based models we are releasing today are the first in the Llama 3 collection of models. Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across core LLM capabilities such as reasoning and coding.

State-of-the-art performance

Our new 8B and 70B parameter Llama 3 models are a major leap over Llama 2 and establish a new state-of-the-art for LLM models at those scales. Thanks to improvements in pretraining and post-training, our pretrained and instruction-fine-tuned models are the best models existing today at the 8B and 70B parameter scale. Improvements in our post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses. We also saw greatly improved capabilities like reasoning, code generation, and instruction following making Llama 3 more steerable.

[Chart: benchmark results for the Llama 3 8B and 70B instruction-tuned models]

*Please see evaluation details for setting and parameters with which these evaluations are calculated.

In the development of Llama 3, we looked at model performance on standard benchmarks and also sought to optimize performance for real-world scenarios. To this end, we developed a new high-quality human evaluation set. This evaluation set contains 1,800 prompts that cover 12 key use cases: asking for advice, brainstorming, classification, closed question answering, coding, creative writing, extraction, inhabiting a character/persona, open question answering, reasoning, rewriting, and summarization. To prevent accidental overfitting of our models on this evaluation set, even our own modeling teams do not have access to it. The chart below shows aggregated results of our human evaluations across these categories and prompts against Claude Sonnet, Mistral Medium, and GPT-3.5.

[Chart: human evaluation win rates for Llama 3 70B Instruct against competing models]

Preference rankings by human annotators based on this evaluation set highlight the strong performance of our 70B instruction-following model compared to competing models of comparable size in real-world scenarios.
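The way pairwise judgments of this kind are typically rolled up into a single chart can be illustrated with a small sketch; the counts below are invented for illustration and are not Meta's evaluation data.

```python
# Hypothetical sketch: turning pairwise human preference judgments into a win rate.
# The counts are invented for illustration, not Meta's figures.
wins, ties, losses = 640, 210, 350                      # judgments vs. one competitor
win_rate = (wins + 0.5 * ties) / (wins + ties + losses)
print(f"Win rate with ties counted as half: {win_rate:.1%}")
```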

Our pretrained model also establishes a new state-of-the-art for LLM models at those scales.

[Chart: benchmark results for the Llama 3 8B and 70B pretrained models]

To develop a great language model, we believe it’s important to innovate, scale, and optimize for simplicity. We adopted this design philosophy throughout the Llama 3 project with a focus on four key ingredients: the model architecture, the pretraining data, scaling up pretraining, and instruction fine-tuning.

Model architecture

In line with our design philosophy, we opted for a relatively standard decoder-only transformer architecture in Llama 3. Compared to Llama 2, we made several key improvements. Llama 3 uses a tokenizer with a vocabulary of 128K tokens that encodes language much more efficiently, which leads to substantially improved model performance. To improve the inference efficiency of Llama 3 models, we’ve adopted grouped query attention (GQA) across both the 8B and 70B sizes. We trained the models on sequences of 8,192 tokens, using a mask to ensure self-attention does not cross document boundaries.
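As a rough illustration of that last point, the sketch below builds the kind of mask that keeps causal self-attention from crossing the boundaries of documents packed into one training sequence. It is a minimal, assumed construction for illustration, not Meta's training code.

```python
# Minimal sketch (assumed, not Meta's code): a causal attention mask that also
# blocks attention across the boundaries of documents packed into one sequence.
import numpy as np

def packed_causal_mask(doc_lengths):
    """Boolean (T, T) mask; True means token i may attend to token j."""
    doc_id = np.repeat(np.arange(len(doc_lengths)), doc_lengths)   # document index per token
    same_doc = doc_id[:, None] == doc_id[None, :]                  # block-diagonal structure
    causal = np.tril(np.ones((len(doc_id), len(doc_id)), dtype=bool))  # no attending ahead
    return same_doc & causal

# Example: three documents of lengths 3, 2, and 4 packed into a 9-token sequence.
print(packed_causal_mask([3, 2, 4]).astype(int))
```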

Training data

To train the best language model, the curation of a large, high-quality training dataset is paramount. In line with our design principles, we invested heavily in pretraining data. Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources. Our training dataset is seven times larger than that used for Llama 2, and it includes four times more code. To prepare for upcoming multilingual use cases, over 5% of the Llama 3 pretraining dataset consists of high-quality non-English data that covers over 30 languages. However, we do not expect the same level of performance in these languages as in English.

To ensure Llama 3 is trained on data of the highest quality, we developed a series of data-filtering pipelines. These pipelines include using heuristic filters, NSFW filters, semantic deduplication approaches, and text classifiers to predict data quality. We found that previous generations of Llama are surprisingly good at identifying high-quality data, hence we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3.
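A layered filtering pipeline of this general shape can be sketched as follows; the heuristic, the thresholds, and the quality_score / nsfw_score callables are purely illustrative assumptions and do not reflect Meta's actual filters.

```python
# Illustrative sketch of a layered data-filtering pipeline: cheap heuristics first,
# then model-based filters. All thresholds and scoring functions are assumptions.

def passes_heuristics(doc: str) -> bool:
    words = doc.split()
    if len(words) < 50:                       # drop very short documents
        return False
    unique_ratio = len(set(words)) / len(words)
    return unique_ratio > 0.2                 # drop highly repetitive text

def filter_corpus(docs, quality_score, nsfw_score, q_min=0.5, nsfw_max=0.1):
    """quality_score and nsfw_score are placeholder classifiers returning floats in [0, 1]."""
    kept = []
    for doc in docs:
        if not passes_heuristics(doc):
            continue
        if nsfw_score(doc) > nsfw_max:        # NSFW filter
            continue
        if quality_score(doc) < q_min:        # learned text-quality classifier
            continue
        kept.append(doc)
    return kept
```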

We also performed extensive experiments to evaluate the best ways of mixing data from different sources in our final pretraining dataset. These experiments enabled us to select a data mix that ensures that Llama 3 performs well across use cases including trivia questions, STEM, coding, historical knowledge, etc.

Scaling up pretraining

To effectively leverage our pretraining data in Llama 3 models, we put substantial effort into scaling up pretraining. Specifically, we have developed a series of detailed scaling laws for downstream benchmark evaluations. These scaling laws enable us to select an optimal data mix and to make informed decisions on how to best use our training compute. Importantly, scaling laws allow us to predict the performance of our largest models on key tasks (for example, code generation as evaluated on the HumanEval benchmark—see above) before we actually train the models. This helps us ensure strong performance of our final models across a variety of use cases and capabilities.

We made several new observations on scaling behavior during the development of Llama 3. For example, while the Chinchilla-optimal amount of training compute for an 8B parameter model corresponds to ~200B tokens, we found that model performance continues to improve even after the model is trained on two orders of magnitude more data. Both our 8B and 70B parameter models continued to improve log-linearly after we trained them on up to 15T tokens. Larger models can match the performance of these smaller models with less training compute, but smaller models are generally preferred because they are much more efficient during inference.
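The kind of extrapolation this enables can be shown with a toy fit: assume validation loss improves roughly linearly in the logarithm of the token count, fit that line on smaller runs, and read off a prediction for a larger one. The data points below are invented for illustration, not Meta's measurements.

```python
# Toy illustration of log-linear scaling: fit loss ≈ a + b * log10(tokens) on
# smaller runs and extrapolate. The data points are invented, not Meta's results.
import numpy as np

tokens = np.array([2e11, 1e12, 5e12, 1e13])   # hypothetical training-token counts
loss   = np.array([2.60, 2.38, 2.17, 2.08])   # hypothetical validation losses

b, a = np.polyfit(np.log10(tokens), loss, 1)  # slope and intercept
print(f"Predicted loss at 15T tokens: {a + b * np.log10(15e12):.2f}")
```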

To train our largest Llama 3 models, we combined three types of parallelization: data parallelization, model parallelization, and pipeline parallelization. Our most efficient implementation achieves a compute utilization of over 400 TFLOPS per GPU when trained on 16K GPUs simultaneously. We performed training runs on two custom-built 24K GPU clusters . To maximize GPU uptime, we developed an advanced new training stack that automates error detection, handling, and maintenance. We also greatly improved our hardware reliability and detection mechanisms for silent data corruption, and we developed new scalable storage systems that reduce overheads of checkpointing and rollback. Those improvements resulted in an overall effective training time of more than 95%. Combined, these improvements increased the efficiency of Llama 3 training by ~three times compared to Llama 2.
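For context on what a figure like 400 TFLOPS per GPU means, a common back-of-the-envelope estimate puts training cost at roughly 6 FLOPs per parameter per token; the per-GPU throughput below is a made-up assumption used only to show the arithmetic.

```python
# Back-of-the-envelope utilization estimate using the common ~6 * N FLOPs/token
# rule of thumb. The per-GPU throughput is an assumed, illustrative figure.
params = 70e9                         # 70B-parameter model
flops_per_token = 6 * params          # rough forward + backward cost per token
tokens_per_second_per_gpu = 1_000     # hypothetical throughput
achieved_tflops = flops_per_token * tokens_per_second_per_gpu / 1e12
print(f"Achieved throughput: ~{achieved_tflops:.0f} TFLOPS per GPU")  # ~420
```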

Instruction fine-tuning

To fully unlock the potential of our pretrained models in chat use cases, we innovated on our approach to instruction-tuning as well. Our approach to post-training is a combination of supervised fine-tuning (SFT), rejection sampling, proximal policy optimization (PPO), and direct preference optimization (DPO). The quality of the prompts that are used in SFT and the preference rankings that are used in PPO and DPO has an outsized influence on the performance of aligned models. Some of our biggest improvements in model quality came from carefully curating this data and performing multiple rounds of quality assurance on annotations provided by human annotators.

Learning from preference rankings via PPO and DPO also greatly improved the performance of Llama 3 on reasoning and coding tasks. We found that if you ask a model a reasoning question that it struggles to answer, the model will sometimes produce the right reasoning trace: The model knows how to produce the right answer, but it does not know how to select it. Training on preference rankings enables the model to learn how to select it.
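To make the mechanism concrete, here is a minimal sketch of the DPO objective for a single (chosen, rejected) pair, following the published formulation of the loss; the log-probability values in the example are invented.

```python
# Minimal sketch of the DPO loss for one preference pair. Inputs are summed
# log-probabilities of each response under the policy and a frozen reference model.
# The numeric example values are invented for illustration.
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Loss is low when the policy prefers the chosen response more than the reference does."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))   # -log(sigmoid(margin))

print(dpo_loss(pi_chosen=-12.0, pi_rejected=-15.0, ref_chosen=-12.5, ref_rejected=-14.8))
```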

Building with Llama 3

Our vision is to enable developers to customize Llama 3 to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem. With this release, we’re providing new trust and safety tools including updated components with both Llama Guard 2 and Cybersec Eval 2, and the introduction of Code Shield—an inference time guardrail for filtering insecure code produced by LLMs.

We’ve also co-developed Llama 3 with torchtune , the new PyTorch-native library for easily authoring, fine-tuning, and experimenting with LLMs. torchtune provides memory-efficient and hackable training recipes written entirely in PyTorch. The library is integrated with popular platforms such as Hugging Face, Weights & Biases, and EleutherAI and even supports Executorch for enabling efficient inference to be run on a wide variety of mobile and edge devices. For everything from prompt engineering to using Llama 3 with LangChain, we have a comprehensive getting started guide that takes you from downloading Llama 3 all the way to deployment at scale within your generative AI application.

A system-level approach to responsibility

We have designed Llama 3 models to be maximally helpful while ensuring an industry leading approach to responsibly deploying them. To achieve this, we have adopted a new, system-level approach to the responsible development and deployment of Llama. We envision Llama models as part of a broader system that puts the developer in the driver’s seat. Llama models will serve as a foundational piece of a system that developers design with their unique end goals in mind.


Instruction fine-tuning also plays a major role in ensuring the safety of our models. Our instruction-fine-tuned models have been red-teamed (tested) for safety through internal and external efforts. Our red teaming approach leverages human experts and automation methods to generate adversarial prompts that try to elicit problematic responses. For instance, we apply comprehensive testing to assess risks of misuse related to Chemical, Biological, Cyber Security, and other risk areas. All of these efforts are iterative and used to inform safety fine-tuning of the models being released. You can read more about our efforts in the model card .

Llama Guard models are meant to be a foundation for prompt and response safety and can easily be fine-tuned to create a new taxonomy depending on application needs. As a starting point, the new Llama Guard 2 uses the recently announced MLCommons taxonomy, in an effort to support the emergence of industry standards in this important area. Additionally, CyberSecEval 2 expands on its predecessor by adding measures of an LLM’s propensity to allow for abuse of its code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection attacks (learn more in our technical paper ). Finally, we’re introducing Code Shield which adds support for inference-time filtering of insecure code produced by LLMs. This offers mitigation of risks around insecure code suggestions, code interpreter abuse prevention, and secure command execution.

With the speed at which the generative AI space is moving, we believe an open approach is an important way to bring the ecosystem together and mitigate these potential harms. As part of that, we’re updating our Responsible Use Guide (RUG) that provides a comprehensive guide to responsible development with LLMs. As we outlined in the RUG, we recommend that all inputs and outputs be checked and filtered in accordance with content guidelines appropriate to the application. Additionally, many cloud service providers offer content moderation APIs and other tools for responsible deployment, and we encourage developers to also consider using these options.
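That input/output checking pattern amounts to wrapping every generation call between two classifier passes, along the lines of the sketch below; classify_prompt, classify_response, and generate are placeholder callables standing in for a Llama Guard-style safety model and the underlying LLM, not a specific API.

```python
# Hypothetical sketch of the recommended input/output filtering pattern.
# classify_prompt, classify_response, and generate are placeholders for a
# Llama Guard-style safety classifier and the underlying LLM.

def safe_generate(prompt, generate, classify_prompt, classify_response):
    if classify_prompt(prompt) != "safe":
        return "Sorry, I can't help with that request."
    response = generate(prompt)
    if classify_response(prompt, response) != "safe":
        return "Sorry, I can't share that response."
    return response
```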

Deploying Llama 3 at scale

Llama 3 will soon be available on all major platforms including cloud providers, model API providers, and much more. Llama 3 will be everywhere.

Our benchmarks show the tokenizer offers improved token efficiency, yielding up to 15% fewer tokens compared to Llama 2. Also, grouped query attention (GQA) has now been added to Llama 3 8B. As a result, despite the model having 1B more parameters than Llama 2 7B, the improved tokenizer efficiency and GQA keep inference efficiency on par with Llama 2 7B.
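The claim can be checked with rough arithmetic: if per-request compute scales with parameters times tokens processed, then roughly 15% fewer tokens offsets the extra parameters. The proportionality below is a simplifying assumption that ignores GQA's additional savings.

```python
# Rough check of the inference-efficiency claim under the simplifying assumption
# that per-request compute scales with (parameters x tokens processed).
llama2_cost = 7e9 * 1.00    # Llama 2 7B, baseline token count
llama3_cost = 8e9 * 0.85    # Llama 3 8B, ~15% fewer tokens for the same text
print(f"Relative inference cost: {llama3_cost / llama2_cost:.2f}x")  # ~0.97x
```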

For examples of how to leverage all of these capabilities, check out Llama Recipes which contains all of our open source code that can be leveraged for everything from fine-tuning to deployment to model evaluation.

What’s next for Llama 3?

The Llama 3 8B and 70B models mark the beginning of what we plan to release for Llama 3. And there’s a lot more to come.

Our largest models are over 400B parameters and, while these models are still training, our team is excited about how they’re trending. Over the coming months, we’ll release multiple models with new capabilities including multimodality, the ability to converse in multiple languages, a much longer context window, and stronger overall capabilities. We will also publish a detailed research paper once we are done training Llama 3.

To give you a sneak preview for where these models are today as they continue training, we thought we could share some snapshots of how our largest LLM model is trending. Please note that this data is based on an early checkpoint of Llama 3 that is still training and these capabilities are not supported as part of the models released today.

[Table: early benchmark snapshots of the 400B+ parameter Llama 3 checkpoint, still in training]

We’re committed to the continued growth and development of an open AI ecosystem for releasing our models responsibly. We have long believed that openness leads to better, safer products, faster innovation, and a healthier overall market. This is good for Meta, and it is good for society. We’re taking a community-first approach with Llama 3, and starting today, these models are available on the leading cloud, hosting, and hardware platforms with many more to come.

Try Meta Llama 3 today

We’ve integrated our latest models into Meta AI, which we believe is the world’s leading AI assistant. It’s now built with Llama 3 technology and it’s available in more countries across our apps.

You can use Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web to get things done, learn, create, and connect with the things that matter to you. You can read more about the Meta AI experience here .

Visit the Llama 3 website to download the models and reference the Getting Started Guide for the latest list of all available platforms.

You’ll also soon be able to test multimodal Meta AI on our Ray-Ban Meta smart glasses.

As always, we look forward to seeing all the amazing products and experiences you will build with Meta Llama 3.


Human vs. Artificial Intelligence

In this essay we compare human and artificial intelligence from two points of view: computational and neuroscience. We discuss the differences and limitations of AI with respect to our intelligence, ending with three challenging areas that are already with us: neural technologies, responsible AI, and hybrid AI systems.
