May 25, 2023

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity

By Tamlyn Hunt

Human face behind binary code

devrimb/Getty Images

“The idea that this stuff could actually get smarter than people.... I thought it was way off…. Obviously, I no longer think that,” Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as “ the godfather of AI ,” said after he quit his job in April so that he can warn about the dangers of this technology .

He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a researcher in consciousness, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing . By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.

Why are we all so concerned? In short: AI development is going way too fast.

The key issue is the profoundly rapid improvement in conversing among the new crop of advanced "chatbots," or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZero AI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.

A team of Microsoft researchers analyzing OpenAI’s GPT-4 , which I think is the best of the new advanced chatbots currently available, said it had, "sparks of advanced general intelligence" in a new preprint paper .

In testing GPT-4, it performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent in the previous GPT-3.5 version, which was trained on a smaller data set. They found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times : "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI called regulation “crucial.”

Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will—and this is what I worry about the most—be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligence for a good overview ) and has been studied and argued about by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky , for decades now.

I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.

Some argue that these LLMs are just automation machines with zero consciousness , the implication being that if they’re not conscious they have less chance of breaking free from their programming. Even if these language models, now or in the future, aren’t at all conscious, this doesn’t matter. For the record, I agree that it’s unlikely that they have any actual consciousness at this juncture—though I remain open to new facts as they come in.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad ways, including potentially use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So, the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely . But the moratorium being called for is to stop development of any new models more powerful than 4.0—and this can be enforced, with force if required. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we know already we won’t be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We should not open Pandora’s box any more than it already has been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of  Scientific American.

Caltech

Ask a Caltech Expert: Yaser Abu-Mostafa on Artificial Intelligence

This article was reviewed by a Caltech researcher.

ChatGPT has rocked the general public's, awareness, perception, and expectations of artificial intelligence (AI). In this Q&A, adapted from his Watson Lecture delivered on May 24, 2023, computer scientist Yaser Abu-Mostafa explains the history of AI and explores its risks and benefits.

Amid warnings that "AI will kill us all," or boasts that "AI will solve all our problems," a closer look at the science behind the technology can help us identify what is realistic and what is speculative, and help guide planning, legislation, and investment.

Highlights from the lecture are below.

The questions and answers below have been edited for clarity and length.

How did AI grow into the technology we know today?

The artificial intelligence (AI) we see today is the product of the field's journey from simple, brute force methodologies to complex, learning-based models that closely mimic the human brain's functionality. Early AI was effective for specific tasks like playing chess or Jeopardy! , but it was limited by the necessity of pre-programming every possible scenario. These systems, though groundbreaking, highlighted AI's limitations in flexibility and adaptability.

The transformative shift occurred in the 1980s with the move from brute force to learning approaches. This pivot was inspired by a deeper understanding of the learning process in the human brain. This era ushered in the development of neural networks: systems capable of learning from unstructured data without explicit programming for every scenario.

The historical development of AI reflects a continual effort to mirror the essence of human intelligence and learning. This evolution underscores the field's original goal: to create machines that can learn, adapt, and potentially think with a level of autonomy that was once the realm of science fiction.

What is the difference between discriminative and generative models in AI, and how is each type used?

The distinction lies in their approach to understanding and generating data. Discriminative models aim to categorize or differentiate between different types of data inputs. A common application of discriminative models is in facial recognition systems, where the model identifies who a particular face belongs to by learning from a dataset of labeled faces. This capability is applied in security systems, personalized user experiences, and verification processes.

On the other hand, generative models are designed to generate new data that resembles the training data. These models learn the underlying distribution of a dataset and can produce novel data points with similar characteristics. A notable application of generative models is in content creation, where they can generate realistic images, text, or even data for training other AI models. Generative models can contribute to fields such as pharmaceuticals, where they can help in discovering new molecular structures.

Do you worry about AI systems going rogue?

The perceived threat of rogue AI systems is a topic of considerable debate, fueled by speculative fiction and theoretical scenarios rather than grounded in the current capabilities and design of AI technologies. The concern revolves around the potential for AI systems to act autonomously in ways not intended or predicted by their creators, potentially causing harm to individuals, societies, or humanity at large. However, understanding the nature of this threat requires a nuanced consideration of what AI currently is and what it might become.

AI, as it exists today, operates within the confines of specific tasks it is designed for, lacking consciousness, desires, or intentions. AI has no intentions—no good intentions, no bad intentions. It learns what you teach it, period.

AI systems, including the most advanced neural networks, are tools created, controlled, and maintained by humans. The notion of AI going "rogue" and acting against human interests overlooks the practical and logistical constraints involved in developing and training AI systems. These activities require substantial human oversight, resources, and infrastructure, from gathering and preprocessing data to designing and adjusting algorithms. AI systems do not have the capability to access, manipulate, or control these resources independently.

In my opinion, the potential misuse of AI by humans poses a more immediate and practical concern. The development and deployment of AI in ways that are unethical, unregulated, or intended to deceive or harm, such as in autonomous weaponry, surveillance, or spreading misinformation, represent real challenges.

These issues underscore the importance of ethical AI development, robust regulatory frameworks, and international cooperation to ensure AI technologies are used for the benefit of humanity.

Why is regulating the deployment and development of AI challenging? What suggestions do you have for effective regulation to prevent misuse?

One significant hurdle is the pace at which AI technologies progress, outpacing regulatory frameworks and the understanding of policymakers.

The diverse applications of AI, from health care to autonomous vehicles, each bring their own set of ethical, safety, and privacy concerns, complicating the creation of a one-size-fits-all regulatory approach.

Additionally, the global nature of AI development, with contributions from academia, industry, and open-source communities worldwide, necessitates international cooperation in regulatory efforts, further complicating the process.

An effective regulatory framework for AI must navigate the delicate balance between preventing misuse and supporting innovation. It should address the ethical and societal implications of AI, such as bias, accountability, and the impact on employment while also fostering an environment that encourages technological advancement and economic growth.

I have one suggestion in terms of legislation that may at least put the brakes on the explosion of AI-related crimes in the coming years until we figure out what tailored legislation toward the crimes may be possible. What I suggest is to make the use of AI in a crime an  aggravating circumstance . Carrying a gun in and of itself may not be a crime. However, if you commit a robbery, it makes a lot of difference whether you are carrying a gun or not. It's an aggravating circumstance that makes the penalty go up significantly, and it stands to logic because now there is a greater existential threat. By classifying the utilization of AI in criminal activities as an aggravating factor, the legal system can impose harsher penalties on those who exploit AI for malicious purposes.

Why is it crucial for the global community to actively pursue AI research and innovation?

The future of AI should not be dictated by a handful of entities but developed through a global collaborative effort. Just as scientific endeavors like the LIGO project brought minds together to achieve what was once thought impossible [detecting gravitational waves], AI research demands a similar collective effort. We stand on the brink of discoveries that could redefine our understanding of intelligence, biology, and more. It's essential that we pursue these horizons together, ensuring the benefits of AI are shared widely and ethically.

Pausing or halting development efforts could inadvertently advantage those with malicious intent. If responsible researchers and developers were to cease their work in AI, it does not equate to a universal halt in AI advancement. If you put a moratorium on the development of AI, the good guys will abide by it and the bad guys will not. So, all we are achieving is giving the bad guys a "head start" to further their own agendas, potentially leading to the development and deployment of AI systems that are unethical, biased, or designed to harm. The development of AI technologies by those committed to ethical standards, transparency, and the public good acts as a counterbalance to potential misuse.

What potential does AI hold for the future, especially in terms of enhancing human capabilities rather than replacing them?

AI's role in automating routine and repetitive tasks frees humans to focus on more creative and strategic activities, thus elevating the nature of work and enabling new avenues for innovation. By removing mundane tasks, AI allows individuals to engage more deeply with the aspects of their work that require human insight, empathy, and creativity.

This shift not only has the potential to increase job satisfaction but also to drive forward industries and sectors with fresh ideas and approaches. The promise of AI lies not in replacing human capabilities but in significantly augmenting them, opening up a future where humans and machines collaborate to address some of the most pressing challenges facing the world today.

You can submit your own questions to the Caltech Science Exchange.

AI Is Not Actually an Existential Threat to Humanity, Scientists Say

artificial intelligence is not dangerous essay

We encounter artificial intelligence (AI) every day. AI describes computer systems that are able to perform tasks that normally require human intelligence. When you search something on the internet, the top results you see are decided by AI.

Any recommendations you get from your favorite shopping or streaming websites will also be based on an AI algorithm. These algorithms use your browser history to find things you might be interested in.

Because targeted recommendations are not particularly exciting, science fiction prefers to depict AI as super-intelligent robots that overthrow humanity. Some people believe this scenario could one day become reality. Notable figures, including the late Stephen Hawking , have expressed fear about how future AI could threaten humanity.

To address this concern we asked 11 experts in AI and Computer Science "Is AI an existential threat to humanity?" There was an 82 percent consensus that it is not an existential threat. Here is what we found out.

How close are we to making AI that is more intelligent than us?

The AI that currently exists is called 'narrow' or 'weak' AI . It is widely used for many applications like facial recognition, self-driving cars, and internet recommendations. It is defined as 'narrow' because these systems can only learn and perform very specific tasks.

They often actually perform these tasks better than humans – famously, Deep Blue was the first AI to beat a world chess champion in 1997 – however they cannot apply their learning to anything other than a very specific task (Deep Blue can only play chess).

Another type of AI is called Artificial General Intelligence (AGI). This is defined as AI that mimics human intelligence, including the ability to think and apply intelligence to multiple different problems. Some people believe that AGI is inevitable and will happen imminently in the next few years.

Matthew O'Brien, robotics engineer from the Georgia Institute of Technology disagrees , "the long-sought goal of a 'general AI' is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point".

How could a future AGI threaten humanity?

Whilst it is not clear when or if AGI will come about, can we predict what threat they might pose to us humans? AGI learns from experience and data as opposed to being explicitly told what to do. This means that, when faced with a new situation it has not seen before, we may not be able to completely predict how it reacts.

Dr Roman Yampolskiy, computer scientist from Louisville University also believes that "no version of human control over AI is achievable" as it is not possible for the AI to both be autonomous and controlled by humans. Not being able to control super-intelligent systems could be disastrous.

Yingxu Wang, professor of Software and Brain Sciences from Calgary University disagrees, saying that "professionally designed AI systems and products are well constrained by a fundamental layer of operating systems for safeguard users' interest and wellbeing, which may not be accessed or modified by the intelligent machines themselves."

Dr O'Brien adds "just like with other engineered systems, anything with potentially dangerous consequences would be thoroughly tested and have multiple redundant safety checks."

Could the AI we use today become a threat?

Many of the experts agreed that AI could be a threat in the wrong hands. Dr George Montanez, AI expert from Harvey Mudd College highlights that "robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today."

Even without malicious intent, today's AI can be threatening. For example, racial biases have been discovered in algorithms that allocate health care to patients in the US. Similar biases have been found in facial recognition software used for law enforcement. These biases have wide-ranging negative impacts despite the 'narrow' ability of the AI.

AI bias comes from the data it is trained on. In the cases of racial bias, the training data was not representative of the general population. Another example happened in 2016, when an AI-based chatbox was found sending highly offensive and racist content. This was found to be because people were sending the bot offensive messages, which it learnt from.

The takeaway:

The AI that we use today is exceptionally useful for many different tasks.

That doesn't mean it is always positive – it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems to be unlikely to become an existential threat to humanity.

Article based on 11 expert answers to this question: Is AI an existential threat to humanity?

This expert response was published in partnership with independent fact-checking platform Metafact.io . Subscribe to their weekly newsletter here .

artificial intelligence is not dangerous essay

The Case Against AI Everything, Everywhere, All at Once

Neuron system

I cringe at being called “Mother of the Cloud, " but having been part of the development and implementation of the internet and networking industry—as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx—I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is “Authoritarian Intelligence.” The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as theonly possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today’s attention, machine learning, has expanded beyond predicting our behavior to generating content—called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry—and sometimes fakery—over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability “.. . a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Read More: AI's Long-term Risks Shouldn't Makes Us Miss Present Risks

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to us to acting on impulse .

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust—with few viable solutions. We do not yet fully understand risks to our society at large such as the level and pace of job loss, environmental impacts , and whether we want opaque systems making decisions for us.

Deeper risks question the very aspects of humanity. When we prioritize “intelligence” to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity .

Human well-being and dignity should be our North Star—with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet , power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once , is not inevitable, if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction; simply not giving in to the “Authoritarian Intelligence” emanating out of Silicon Valley, and our collective group think.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.

More Must-Reads from TIME

  • How Joe Biden Leads
  • Do Less. It’s Good for You
  • There's Something Different About Will Smith
  • What Animal Studies Are Revealing About Their Minds—and Ours
  • What a Hospice Nurse Wants You to Know About Death
  • 15 LGBTQ+ Books to Read for Pride
  • TIME100 Most Influential Companies 2024
  • Want Weekly Recs on What to Watch, Read, and More? Sign Up for Worth Your Time

Contact us at [email protected]

News from Brown

New report assesses progress and risks of artificial intelligence.

A report by a panel of experts chaired by a Brown professor concludes that AI has made a major leap from the lab to people’s lives in recent years, which increases the urgency to understand its potential negative effects.

Artificial intelligence has left the lab and entered people's lives in new ways, according to a new report on the state of the field. Credit: Nick Dentamaro/Brown University

PROVIDENCE, R.I. [Brown University] — Artificial intelligence has reached a critical turning point in its evolution, according to a new report by an international panel of experts assessing the state of the field. 

Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people’s lives on a daily basis — from helping people to choose a movie to aiding in medical diagnoses. With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to assure that the pitfalls of AI are minimized.

Those conclusions are from a report titled “ Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report ,” which was compiled by a panel of experts from computer science, public policy, psychology, sociology and other disciplines. AI100 is an ongoing project hosted by the Stanford University Institute for Human-Centered Artificial Intelligence that aims to monitor the progress of AI and guide its future development. This new report, the second to be released by the AI100 project, assesses developments in AI between 2016 and 2021.

“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” said Michael Littman, a professor of computer science at Brown University who chaired the report panel. “That’s really exciting, because this technology is doing some amazing things that we could only dream about five or 10 years ago. But at the same time, the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”

The report, released on Thursday, Sept. 16, is structured to answer a set of 14  questions probing critical areas of AI development. The questions were developed by the AI100 standing committee consisting of a renowned group of AI leaders. The committee then assembled a panel of 17 researchers and experts to answer them. The questions include “What are the most important advances in AI?” and “What are the most inspiring open grand challenges?” Other questions address the major risks and dangers of AI, its effects on society, its public perception and the future of the field.

We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That’s a positive trend.

Image of Michael Littman

“While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders — experts who create AI algorithms or study their influence on society as their main professional activity — and that they are part of an ongoing, longitudinal, century-long study,” said Peter Stone, a professor of computer science at the University of Texas at Austin, executive director of Sony AI America and chair of the AI100 standing committee. “The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals.”

Eric Horvitz, chief scientific officer at Microsoft and co-founder of the One Hundred Year Study on AI, praised the work of the study panel.

"I'm impressed with the insights shared by the diverse panel of AI experts on this milestone report," Horvitz said. “The 2021 report does a great job of describing where AI is today and where things are going, including an assessment of the frontiers of our current understandings and guidance on key opportunities and challenges ahead on the influences of AI on people and society.”

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications. 

In the area of natural language processing, for example, AI-driven systems are now able to not only recognize words, but understand how they’re used grammatically and how meanings can change in different contexts. That has enabled better web search, predictive text apps, chatbots and more. Some of these systems are now capable of producing original text that is difficult to distinguish from human-produced text.

Elsewhere, AI systems are diagnosing cancers and other conditions with accuracy that rivals trained pathologists. Research techniques using AI have produced new insights into the human genome and have sped the discovery of new pharmaceuticals. And while the long-promised self-driving cars are not yet in widespread use, AI-based driver-assist systems like lane-departure warnings and adaptive cruise control are standard equipment on most new cars. 

Some recent AI progress may be overlooked by observers outside the field, but actually reflect dramatic strides in the underlying AI technologies, Littman says. One relatable example is the use of background images in video conferences, which became a ubiquitous part of many people's work-from-home lives during the COVID-19 pandemic. 

“To put you in front of a background image, the system has to distinguish you from the stuff behind you — which is not easy to do just from an assemblage of pixels,” Littman said. “Being able to understand an image well enough to distinguish foreground from background is something that maybe could happen in the lab five years ago, but certainly wasn’t something that could happen on everybody’s computer, in real time and at high frame rates. It’s a pretty striking advance.”

As for the risks and dangers of AI, the panel does not envision a dystopian scenario in which super-intelligent machines take over the world. The real dangers of AI are a bit more subtle, but are no less concerning. 

Some of the dangers cited in the report stem from deliberate misuse of AI — deepfake images and video used to spread misinformation or harm people’s reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from “an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination,” the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people’s access to appropriate care. 

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman says, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. The makeup of the panel that produced the report reflects the widening perspective coming to the field, Littman says.

“The panel consists of almost half social scientists and half computer science people, and I was very pleasantly surprised at how deep the knowledge about AI is among the social scientists,” Littman said. “We now have people who do work in a wide variety of different areas who are rightly considered AI experts. That’s a positive trend.”

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.

Related news:

In a significant first, researchers detect water frost on solar system’s tallest volcanoes, breaking ground: could geometry offer a new explanation for why earthquakes happen, brown ranks among top 100 universities nationally in new patents issued.

One Hundred Year Study on Artificial Intelligence (AI100)

SQ10. What are the most pressing dangers of AI?

Main navigation, related documents.

2019 Workshops

2020 Study Panel Charge

Download Full Report  

AAAI 2022 Invited Talk

Stanford HAI Seminar 2023

As AI systems prove to be increasingly beneficial in real-world applications, they have broadened their reach, causing risks of misuse, overuse, and explicit abuse to proliferate. As AI systems increase in capability and as they are integrated more fully into societal infrastructure, the implications of losing meaningful control over them become more concerning. 1 New research efforts are aimed at re-conceptualizing the foundations of the field to make AI systems less reliant on explicit, and easily misspecified, objectives. 2 A particularly visible danger is that AI can make it easier to build machines that can spy and even kill at scale . But there are many other important and subtler dangers at present.

In this section

Techno-solutionism, dangers of adopting a statistical perspective on justice, disinformation and threat to democracy, discrimination and risk in the medical setting.

One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool. 3 As we see more AI advances, the temptation to apply AI decision-making to all societal problems increases. But technology often creates larger problems in the process of solving smaller ones. For example, systems that streamline and automate the application of social services can quickly become rigid and deny access to migrants or others who fall between the cracks. 4

When given the choice between algorithms and humans, some believe algorithms will always be the less-biased choice. Yet, in 2018, Amazon found it necessary to discard a proprietary recruiting tool because the historical data it was trained on resulted in a system that was systematically biased against women. 5 Automated decision-making can often serve to replicate, exacerbate, and even magnify the same bias we wish it would remedy.

Indeed, far from being a cure-all, technology can actually create feedback loops that worsen discrimination. Recommendation algorithms, like Google’s page rank, are trained to identify and prioritize the most “relevant” items based on how other users engage with them. As biased users feed the algorithm biased information, it responds with more bias, which informs users’ understandings and deepens their bias, and so on. 6 Because all technology is the product of a biased system, 7 techno-solutionism’s flaws run deep: 8 a creation is limited by the limitations of its creator.

Automated decision-making may produce skewed results that replicate and amplify existing biases. A potential danger, then, is when the public accepts AI-derived conclusions as certainties. This determinist approach to AI decision-making can have dire implications in both criminal and healthcare settings. AI-driven approaches like PredPol, software originally developed by the Los Angeles Police Department and UCLA that purports to help protect one in 33 US citizens, 9 predict when, where, and how crime will occur. A 2016 case study of a US city noted that the approach disproportionately projected crimes in areas with higher populations of non-white and low-income residents. 10 When datasets disproportionately represents the lower power members of society, flagrant discrimination is a likely result.

Sentencing decisions are increasingly decided by proprietary algorithms that attempt to assess whether a defendant will commit future crimes, leading to concerns that justice is being outsourced to software. 11 As AI becomes increasingly capable of analyzing more and more factors that may correlate with a defendant's perceived risk, courts and society at large may mistake an algorithmic probability for fact. This dangerous reality means that an algorithmic estimate of an individual’s risk to society may be interpreted by others as a near certainty—a misleading outcome even the original tool designers warned against. Even though a statistically driven AI system could be built to report a degree of credence along with every prediction, 12 there’s no guarantee that the people using these predictions will make intelligent use of them. Taking probability for certainty means that the past will always dictate the future.

An original image of low resolution and the resulting image of high resolution

There is an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination. All data insights rely on some measure of interpretation. As a concrete example, an audit of a resume-screening tool found that the two main factors it associated most strongly with positive future job performance were whether the applicant was named Jared, and whether he played high school lacrosse. 13 Undesirable biases can be hidden behind both the opaque nature of the technology used and the use of proxies, nominally innocent attributes that enable a decision that is fundamentally biased. An algorithm fueled by data in which gender, racial, class, and ableist biases are pervasive can effectively reinforce these biases without ever explicitly identifying them in the code. 

Without transparency concerning either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially impact their lives are being made. Lacking adequate information to bring a legal claim, people can lose access to both due process and redress when they feel they have been improperly or erroneously judged by AI systems. Large gaps in case law make applying Title VII—the primary existing legal framework in the US for employment discrimination—to cases of algorithmic discrimination incredibly difficult. These concerns are exacerbated by algorithms that go beyond traditional considerations such as a person’s credit score to instead consider any and all variables correlated to the likelihood that they are a safe investment. A statistically significant correlation has been shown among Europeans between loan risk and whether a person uses a Mac or PC and whether they include their name in their email address—which turn out to be proxies for affluence. 14 Companies that use such attributes, even if they do indeed provide improvements in model accuracy, may be breaking the law when these attributes also clearly correlate with a protected class like race. Loss of autonomy can also result from AI-created “information bubbles” that narrowly constrict each individual’s online experience to the point that they are unaware that valid alternative perspectives even exist.

AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, 15 there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage. Disinformation poses serious threats to society, as it effectively changes and manipulates evidence to create social feedback loops that undermine any sense of objective truth. The debates about what is real quickly evolve into debates about who gets to decide what is real, resulting in renegotiations of power structures that often serve entrenched interests. 16

While personalized medicine is a good potential application of AI, there are dangers. Current business models for AI-based health applications tend to focus on building a single system—for example, a deterioration predictor—that can be sold to many buyers. However, these systems often do not generalize beyond their training data. Even differences in how clinical tests are ordered can throw off predictors, and, over time, a system’s accuracy will often degrade as practices change. Clinicians and administrators are not well-equipped to monitor and manage these issues, and insufficient thought given to the human factors of AI integration has led to oscillation between mistrust of the system (ignoring it) and over-reliance on the system (trusting it even when it is wrong), a central concern of the 2016 AI100 report.

These concerns are troubling in general in the high-risk setting that is healthcare, and even more so because marginalized populations—those that already face discrimination from the health system from both structural factors (like lack of access) and scientific factors (like guidelines that were developed from trials on other populations)—may lose even more. Today and in the near future, AI systems built on machine learning are used to determine post-operative personalized pain management plans for some patients and in others to predict the likelihood that an individual will develop breast cancer. AI algorithms are playing a role in decisions concerning distributing organs, vaccines, and other elements of healthcare. Biases in these approaches can have literal life-and-death stakes.

In 2019, the story broke that Optum, a health-services algorithm used to determine which patients may benefit from extra medical care, exhibited fundamental racial biases. The system designers ensured that race was precluded from consideration, but they also asked the algorithm to consider the future cost of a patient to the healthcare system. 17 While intended to capture a sense of medical severity, this feature in fact served as a proxy for race: controlling for medical needs, care for Black patients averages $1,800 less per year.

New technologies are being developed every day to treat serious medical issues. A new algorithm trained to identify melanomas was shown to be more accurate than doctors in a recent study, but the potential for the algorithm to be biased against Black patients is significant as the algorithm was trained using majority light-skinned groups. 18 The stakes are especially high for melanoma diagnoses, where the five-year survival rate is 17 percentage points less for Black Americans than white. While technology has the potential to generate quicker diagnoses and thus close this survival gap, a machine-learning algorithm is only as good as its data set. An improperly trained algorithm could do more harm than good for patients at risk, missing cancers altogether or generating false positives. As new algorithms saturate the market with promises of medical miracles, losing sight of the biases ingrained in their outcomes could contribute to a loss of human biodiversity, as individuals who are left out of initial data sets are denied adequate care. While the exact long-term effects of algorithms in healthcare are unknown, their potential for bias replication means any advancement they produce for the population in aggregate—from diagnosis to resource distribution—may come at the expense of the most vulnerable.

[1]  Brian Christian, The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, 2020

[2]   https://humancompatible.ai/app/uploads/2020/11/CHAI-2020-Progress-Report-public-9-30.pdf  

[3]   https://knightfoundation.org/philanthropys-techno-solutionism-problem/  

[4]   https://www.theguardian.com/world/2021/jan/12/french-woman-spends-three-years-trying-to-prove-she-is-not-dead ; https://virginia-eubanks.com/ (“Automating inequality”)

[5]   https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[6]  Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism , NYU Press, 2018 

[7]  Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code , Polity, 2019

[8]   https://www.publicbooks.org/the-folly-of-technological-solutionism-an-interview-with-evgeny-morozov/

[9]   https://predpol.com/about  

[10]  Kristian Lum and William Isaac, “To predict and serve?” Significance , October 2016, https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2016.00960.x

[11]  Jessica M. Eaglin, “Technologically Distorted Conceptions of Punishment,” https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=3862&context=facpub  

[12]  Riccardo Fogliato, Maria De-Arteaga, and Alexandra Chouldechova, “Lessons from the Deployment of an Algorithmic Tool in Child Welfare,” https://fair-ai.owlstown.net/publications/1422  

[13]   https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/  

[14]   https://www.fdic.gov/analysis/cfr/2018/wp2018/cfr-wp2018-04.pdf  

[15]  Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova, “Truth, Lies, and Automation,” https://cset.georgetown.edu/publication/truth-lies-and-automation/  

[16]  Britt Paris and Joan Donovan, “Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence,” https://datasociety.net/library/deepfakes-and-cheap-fakes/  

[17]   https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/ .

[18]   https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc:  http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel  

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International):  https://creativecommons.org/licenses/by-nd/4.0/ .

Elsevier QRcode Wechat

  • Research Process

To Err is Not Human: The Dangers of AI-assisted Academic Writing

  • 4 minute read
  • 16.1K views

Table of Contents

Artificial intelligence (AI)-powered writing tools are becoming increasingly popular among researchers. AI tools can improve several important aspects of writing, such as readability, grammar, spelling, and tone, providing authors with a competitive edge when drafting grant proposals and academic articles. In recent years, there has also been an increase in the use of “Generative AI,” which can produce write-ups that appear to have been drafted by humans. However, despite AI’s enormous potential in academic writing, there are several significant pitfalls in its use. 

Inauthentic Sources

AI tools are built on rapidly evolving deep learning algorithms that fetch answers to your queries or “prompts”. Owing to advances in computation, and the rapid growth in the amount of data that algorithms can access, these tools are often accurate in their answers. However, at times AI can make mistakes and give you inaccurate data. What is worrying is, this data may look authentic at a first glance and increase the risk of getting incorporated in research articles. Failing to scrutinise information and data sources provided by AI can therefore impair scientific credibility and trigger a chain of falsification in the research community. 

Why Human Supervision Is Advisable

AI-generated output is frequently generic, matched with synonyms, and may not be able to critically analyse the scientific context when writing manuscripts. 

Consider the following example, where the AI ‘ChatGPT’ was used to generate a one-line summary of the following sentences:

The malaria parasite Plasmodium falciparum has an organelle,the  apicoplast, which contains its own genome.

This organelle is significant in the Plasmodium’s lifecycle, but we are yet to thoroughly understand the regulation of apicoplast gene expression.

The following is a human-generated one-line summary:

The malaria parasite Plasmodium falciparum has an organelle that is significant in its lifecycle called an apicoplast, which contains its own genome —but the regulation of apicoplast gene expression is poorly understood.

On the other hand, the AI-generated summary is as follows:

The malaria parasite Plasmodium falciparum has an apicoplast, an organelle with its own genome , significant in its life cycle , yet its gene expression regulation remains poorly understood.

In the AI-generated text, it is not clear what ‘its’ refers to in each instance of because it could either refer to Plasmodium falciparum or it could refer to the apicoplast. Moreover, while the expression ‘gene expression regulation’ is technically correct, the sentence structure and writing style is superior if you write ‘regulation of gene expression’.

This is why we need humans to supervise AI bots and verify the accuracy of all information submitted for publication. We request that authors who have used AI or AI-assisted tools include a declaration statement at the end of their manuscript where they specify the tool and the reason for using it.

ChatGPT Response Example

An example of AI-generated text using the software ChatGPT

Data Leakage

AI is now an integral part of scientific research. From data collection to manuscript preparation, AI provides ways to improve and expedite every step of the research process. However, to function, AI needs access to data and adequate computing power to process them efficiently. One way in which many AI applications meet these requirements is by having large, distributed databases and dividing the labour among several individual computers. These AI applications need to stay connected to the internet to work. Therefore, researchers who upload academic content from unpublished papers to platforms like ChatGPT are at a higher risk of data leakage and privacy violations.

To address this issue, governments in various countries have decided to implement policies. Italy, for example, banned ChatGPT in April 2023 due to privacy concerns, but later reinstated the AI app with a new privacy policy that verifies users’ ages. The European Union is also developing a new policy that will regulate AI platforms such as ChatGPT and Google Bard. The US Congress and India’s IT department have also hinted at developing new frameworks for AI compliance with safety standards.

Elsevier also strives to minimize the risk of data leakage. Our policy on the use of AI and AI-assisted technologies in scientific writing aims to provide authors, readers, reviewers, editors, and contributors with more transparency and guidance. 

Legal and Ethical Restrictions on Use

Most publishers allow the use of AI writing tools during manuscript preparation as long as it is used to improve, and not wholly generate, sentences. Elsevier’s policy also allows authors to use AI tools to improve the readability and language of their submissions but emphasises that the generated output is ultimately reviewed by the author(s) to avoid mistakes. Moreover, we require authors to keep us informed and acknowledge the use of AI-assisted writing during the submission process. Information regarding this is included in the published article in the interest of transparency. Visit this resource for more details.

We must know that AI programs are not considered authors of a manuscript, and since they do not receive the credit, they also do not bear responsibility. Authors are solely responsible for any mistakes in AI-assisted writing that find their way into manuscripts.

AI-assisted writing is here to stay. While it is advisable to familiarise oneself with AI writing technology, it is equally advisable to be aware of its risks and limitations. 

Need safe and reliable writing assistance? Experts at Elsevier Author Services can assist you in every step of the manuscript preparation process. Contact us for a full list of services and any additional information.

Publishing Biomedical Research

  • Publication Process

Publishing Biomedical Research: What Rules Should You Follow?

Manuscript Writing Patterns and Structure

  • Manuscript Preparation

Path to An Impactful Paper: Common Manuscript Writing Patterns and Structure

You may also like.

what is a descriptive research design

Descriptive Research Design and Its Myriad Uses

Doctor doing a Biomedical Research Paper

Five Common Mistakes to Avoid When Writing a Biomedical Research Paper

Writing in Environmental Engineering

Making Technical Writing in Environmental Engineering Accessible

Importance-of-Data-Collection

When Data Speak, Listen: Importance of Data Collection and Analysis Methods

choosing the Right Research Methodology

Choosing the Right Research Methodology: A Guide for Researchers

Why is data validation important in research

Why is data validation important in research?

Writing a good review article

Writing a good review article

Scholarly Sources What are They and Where can You Find Them

Scholarly Sources: What are They and Where can You Find Them?

Input your search keywords and press Enter.

  • Share full article

Advertisement

Supported by

Guest Essay

Will A.I. Be a Creator or a Destroyer of Worlds?

A hand projects into a swirl made up of the colors of the rainbow.

By Thomas B. Edsall

Mr. Edsall contributes a weekly column from Washington, D.C., on politics, demographics and inequality.

The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?

In “ Can We Have Pro-Worker A.I. ? Choosing a Path of Machines in Service of Minds,” three economists at M.I.T., Daron Acemoglu , David Autor and Simon Johnson , looked at this epochal innovation last year:

The private sector in the United States is currently pursuing a path for generative A.I. that emphasizes automation and the displacement of labor, along with intrusive workplace surveillance. As a result, disruptions could lead to a potential downward cascade in wage levels, as well as inefficient productivity gains. Before the advent of artificial intelligence, automation was largely limited to blue-collar and office jobs using digital technologies while more complex and better-paying jobs were left untouched because they require flexibility, judgment and common sense.

Now, Acemoglu, Autor and Johnson wrote, A.I. presents a direct threat to those high-skill jobs: “A major focus of A.I. research is to attain human parity in a vast range of cognitive tasks and, more generally, to achieve ‘artificial general intelligence’ that fully mimics and then surpasses capabilities of the human mind.”

The three economists make the case that

There is no guarantee that the transformative capabilities of generative A.I. will be used for the betterment of work or workers. The bias of the tax code, of the private sector generally, and of the technology sector specifically, leans toward automation over augmentation. But there are also potentially powerful A.I.-based tools that can be used to create new tasks, boosting expertise and productivity across a range of skills. To redirect A.I. development onto the human-complementary path requires changes in the direction of technological innovation, as well as in corporate norms and behavior. This needs to be backed up by the right priorities at the federal level and a broader public understanding of the stakes and the available choices. We know this is a tall order.

“Tall” is an understatement.

In an email elaborating on the A.I. paper, Acemoglu contended that artificial intelligence has the potential to improve employment prospects rather than undermine them:

It is quite possible to leverage generative A.I. as an informational tool that enables various different types of workers to get better at their jobs and perform more complex tasks. If we are able to do this, this would help create good, meaningful jobs, with wage growth potential, and may even reduce inequality. Think of a generative A.I. tool that helps electricians get much better at diagnosing complex problems and troubleshoot them effectively.

This, however, “is not where we are heading,” Acemoglu continued:

The preoccupation of the tech industry is still automation and more automation, and the monetization of data via digital ads. To turn generative A.I. pro-worker, we need a major course correction, and this is not something that’s going to happen by itself.

Acemoglu pointed out that unlike the regional trade shock that decimated manufacturing employment after China entered the World Trade Organization in 2001, “The kinds of tasks impacted by A.I. are much more broadly distributed in the population and also across regions.” In other words, A.I. threatens employment at virtually all levels of the economy, including well-paid jobs requiring complex cognitive capabilities.

Four technology specialists — Tyna Eloundou and Pamela Mishkin , both on the staff of OpenAI , with Sam Manning , a research fellow at the Centre for the Governance of A.I., and Daniel Rock at the University of Pennsylvania — provided a detailed case study on the employment effects of artificial intelligence in their 2023 paper, “ GPTs Are GPTs : An Early Look at the Labor Market Impact Potential of Large Language Models.”

“Around 80 percent of the U.S. work force could have at least 10 percent of their work tasks affected by the introduction of large language models,” Eloundou and her co-authors wrote, and “approximately 19 percent of workers may see at least 50 percent of their tasks impacted.”

Large language models have multiple and diverse uses, according to Eloundou and her colleagues, and “can process and produce various forms of sequential data, including assembly language, protein sequences and chess games, extending beyond natural.” In addition, these models “excel in diverse applications like translation, classification, creative writing, and code generation — capabilities that previously demanded specialized, task-specific models developed by expert engineers using domain-specific data.”

We are having trouble retrieving the article content.

Please enable JavaScript in your browser settings.

Thank you for your patience while we verify access. If you are in Reader mode please exit and  log into  your Times account, or  subscribe  for all of The Times.

Thank you for your patience while we verify access.

Already a subscriber?  Log in .

Want all of The Times?  Subscribe .

Home — Essay Samples — Information Science and Technology — Artificial Intelligence — Is Artificial Intelligence Dangerous?

test_template

Is Artificial Intelligence Dangerous?

  • Categories: Artificial Intelligence

About this sample

close

Words: 623 |

Published: Sep 16, 2023

Words: 623 | Page: 1 | 4 min read

Table of contents

The promise of ai, the perceived dangers of ai, responsible ai development.

  • Medical Advancements: AI can assist in diagnosing diseases, analyzing medical data, and developing personalized treatment plans, potentially saving lives and improving healthcare outcomes.
  • Autonomous Vehicles: Self-driving cars, powered by AI, have the potential to reduce accidents and make transportation more accessible and efficient.
  • Environmental Conservation: AI can be used to monitor and address environmental issues, such as climate change, deforestation, and wildlife preservation.
  • Efficiency and Automation: AI-driven automation can streamline processes in various industries, increasing productivity and reducing costs.
  • Job Displacement
  • Bias and Discrimination
  • Lack of Accountability
  • Security Risks
  • Transparency and Accountability
  • Fairness and Bias Mitigation
  • Ethical Frameworks
  • Cybersecurity Measures

This essay delves into the complexities surrounding artificial intelligence (AI), exploring both its transformative benefits and potential dangers. From enhancing healthcare and transportation to posing risks in job displacement and security, it critically assesses AI’s dual aspects. Emphasizing responsible development, it advocates for transparency, fairness, and robust cybersecurity measures. For a deeper understanding, students can check more AI websites for students which offer further resources and expert guidance.

Image of Alex Wood

Cite this Essay

Let us write you an essay from scratch

  • 450+ experts on 30 subjects ready to help
  • Custom essay delivered in as few as 3 hours

Get high-quality help

author

Dr. Heisenberg

Verified writer

  • Expert in: Information Science and Technology

writer

+ 120 experts online

By clicking “Check Writers’ Offers”, you agree to our terms of service and privacy policy . We’ll occasionally send you promo and account related email

No need to pay just yet!

Related Essays

1 pages / 639 words

2 pages / 702 words

5 pages / 2497 words

2 pages / 1025 words

Remember! This is just a sample.

You can get your custom paper by one of our expert writers.

121 writers online

Still can’t find what you need?

Browse our vast selection of original essay samples, each expertly formatted and styled

Related Essays on Artificial Intelligence

The integration of Artificial Intelligence (AI) into security and warfare has ushered in a new era of technological advancement and strategic capabilities. In this essay, we explore the pivotal role of AI in national security [...]

Artificial Intelligence (AI) is a technology that aims to replicate human intelligence and problem-solving abilities. It has come a long way since its introduction in 1956, overcoming initial skepticism and hardware limitations. [...]

Artificial intelligence growth has exceeded people expectations in the last decade. The purpose of creating an AI system is to make a machine capable of emulating human like functions, and performance. The fast growth of AI has [...]

At the intersection of technological innovation and environmental sustainability, artificial intelligence (AI) has emerged as a powerful catalyst for change in the transportation and renewable energy sectors. AI's ability to [...]

ARTIFICIAL INTELLIGENCE IN MEDICAL TECHNOLOGY What is ARTIFICIAL INTELLIGENCE? The term AI was devised by John McCarthy, an American computer scientist, in 1956. AI or artificial intelligence is the stimulation of human [...]

“Do you like human beings?” Edward asked. “I love them” Sophia replied. “Why?” “I am not sure I understand why yet” The conversation above is from an interview for Business Insider between a journalist [...]

Related Topics

By clicking “Send”, you agree to our Terms of service and Privacy statement . We will occasionally send you account related emails.

Where do you want us to send this sample?

By clicking “Continue”, you agree to our terms of service and privacy policy.

Be careful. This essay is not unique

This essay was donated by a student and is likely to have been used and submitted before

Download this Sample

Free samples may contain mistakes and not unique parts

Sorry, we could not paraphrase this essay. Our professional writers can rewrite it and get you a unique paper.

Please check your inbox.

We can write you a custom essay that will follow your exact instructions and meet the deadlines. Let's fix your grades together!

Get Your Personalized Essay in 3 Hours or Less!

We use cookies to personalyze your web-site experience. By continuing we’ll assume you board with our cookie policy .

  • Instructions Followed To The Letter
  • Deadlines Met At Every Stage
  • Unique And Plagiarism Free

artificial intelligence is not dangerous essay

MIT Technology Review

  • Newsletters

The true dangers of AI are closer than we think

Forget superintelligent AI: algorithms are already creating real harm. The good news: the fight back has begun.

  • Karen Hao archive page

william isaac

As long as humans have built machines, we’ve feared the day they could destroy us. Stephen Hawking famously warned that AI could spell an end to civilization. But to many AI researchers, these conversations feel unmoored. It’s not that they don’t fear AI running amok—it’s that they see it already happening, just not in the ways most people would expect. 

AI is now screening job candidates, diagnosing disease, and identifying criminal suspects. But instead of making these decisions more efficient or fair, it’s often perpetuating the biases of the humans on whose decisions it was trained. 

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also co-chairs the Fairness, Accountability, and Transparency conference—the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development—as well as the solutions.

Q: Should we be worried about superintelligent AI?

A: I want to shift the question. The threats overlap, whether it’s predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.

There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we’ve seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale—in areas like predictive policing, risk assessments, hiring, etc. It’s clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile their own history with aspiration? We’re still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.

The second one would be achieving demonstrable social benefit. Up to this point there are still few pieces of empirical evidence that validate that AI technologies will achieve the broad-based social benefit that we aspire to. 

Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability. 

Q: How do we overcome these risks and challenges?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you’re thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.

The second one is accelerating the development of the sociotechnical tools to actually do this work. We don’t have a whole lot of tools. 

The last one is providing more funding and training for researchers and practitioners—particularly researchers and practitioners of color—to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to not just have a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

Q: How far have AI researchers come in thinking about these challenges, and how far do they still have to go?

A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: “Hey, there are some potential harms that could be done through these systems.” But they largely had not interacted at all. They existed in unique silos.

Since then, we’ve just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: “Okay, this is not just a hypothetical risk. It is a real threat.” So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.

Q: So are you optimistic about achieving broad-based beneficial AI?

A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracies across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than white male ones]. There’s the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That’s a model of how we could imagine engaging with other advances in AI.

But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.

Q: What do you dream about when you dream about the future of AI?

A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that’d be very empowering. And that’s a nontrivial thing to want from this technology. How do you know it’s empowering? How do you know it’s socially beneficial? 

I went to graduate school in Michigan during the Flint water crisis. When the initial incidences of lead pipes emerged, the records they had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don’t get basic services and resources.

Artificial intelligence

An ai startup made a hyperrealistic deepfake of me that’s so good it’s scary.

Synthesia's new technology is impressive but raises big questions about a world where we increasingly can’t tell what’s real.

  • Melissa Heikkilä archive page

This AI-powered “black box” could make surgery safer

A new smart monitoring system could help doctors avoid mistakes—but it’s also alarming some surgeons and leading to sabotage.

  • Simar Bajaj archive page

What I learned from the UN’s “AI for Good” summit

OpenAI’s CEO Sam Altman was the star speaker of the summit.

Propagandists are using AI too—and companies need to be open about it

OpenAI has reported on influence operations that use its AI tools. Such reporting, alongside data sharing, should become the industry norm.

  • Josh A. Goldstein archive page
  • Renée DiResta archive page

Stay connected

Get the latest updates from mit technology review.

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Tzu Chi Med J
  • v.32(4); Oct-Dec 2020

Logo of tcmj

The impact of artificial intelligence on human society and bioethics

Michael cheng-tek tai.

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known by some as the industrial revolution (IR) 4.0, is going to change not only the way we do things, how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes on humankind in the 21 st century, and then propose a set of principles for AI bioethics. The IR1.0, the IR of the 18 th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and also the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for the AI technology to observe so that the world will be benefited by the progress of this new intelligence.

W HAT IS ARTIFICIAL INTELLIGENCE ?

Artificial intelligence (AI) has many different definitions; some see it as the created technology that allows computers and machines to function intelligently. Some see it as the machine that replaces human labor to work for men a more effective and speedier result. Others see it as “a system” with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [ 1 ].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers to help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of human-made tool that emulates the “cognitive” abilities of the natural intelligence of human minds [ 2 ].

Along with the rapid development of cybernetic technology in recent years, AI has been seen almost in all our life circles, and some of that may no longer be regarded as AI because it is so common in daily life that we are much used to it such as optical character recognition or the Siri (speech interpretation and recognition interface) of information searching equipment on computer [ 3 ].

D IFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI that is designed to perform a narrow task, such as facial recognition or Internet Siri search or self-driving car. Many currently existing systems that claim to use “AI” are likely operating as a weak AI focusing on a narrowly defined specific function. Although this weak AI seems to be helpful to human living, there are still some think weak AI could be dangerous because weak AI could cause disruptions in the electric grid or may damage nuclear power plants when malfunctioned.

The new development of the long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI) which is the speculative intelligence of a machine that has the capacity to understand or learn any intelligent task human being can, thus assisting human to unravel the confronted problem. While narrow AI may outperform humans such as playing chess or solving equations, but its effect is still weak. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different perception of AI that it can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, even to have perception, beliefs and other cognitive capacities that are normally only ascribed to humans [ 4 ].

In summary, we can see these different functions of AI [ 5 , 6 ]:

  • Automation: What makes a system or process to function automatically
  • Machine learning and vision: The science of getting a computer to act through deep learning to predict and analyze, and to see through a camera, analog-to-digital conversion and digital signal processing
  • Natural language processing: The processing of human language by a computer program, such as spam detection and converting instantly a language to another to help humans communicate
  • Robotics: A field of engineering focusing on the design and manufacturing of cyborgs, the so-called machine man. They are used to perform tasks for human's convenience or something too difficult or dangerous for human to perform and can operate without stopping such as in assembly lines
  • Self-driving car: Use a combination of computer vision, image recognition amid deep learning to build automated control in a vehicle.

D O HUMAN-BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE ?

Is AI really needed in human society? It depends. If human opts for a faster and effective way to complete their work and to work constantly without taking a break, yes, it is. However if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that human is always looking for something faster, easier, more effective, and convenient to finish the task they work on; therefore, the pressure for further development motivates humankind to look for a new and better way of doing things. Humankind as the homo-sapiens discovered that tools could facilitate many hardships for daily livings and through tools they invented, human could complete the work better, faster, smarter and more effectively. The invention to create new things becomes the incentive of human progress. We enjoy a much easier and more leisurely life today all because of the contribution of technology. The human society has been using the tools since the beginning of civilization, and human progress depends on it. The human kind living in the 21 st century did not have to work as hard as their forefathers in previous times because they have new machines to work for them. It is all good and should be all right for these AI but a warning came in early 20 th century as the human-technology kept developing that Aldous Huxley warned in his book Brave New World that human might step into a world in which we are creating a monster or a super human with the development of genetic technology.

Besides, up-to-dated AI is breaking into healthcare industry too by assisting doctors to diagnose, finding the sources of diseases, suggesting various ways of treatment performing surgery and also predicting if the illness is life-threatening [ 7 ]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot to perform soft-tissue surgery, stitch together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [ 8 , 9 ]. It demonstrates robotically-assisted surgery can overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capacities of surgeons performing open surgery.

Above all, we see the high-profile examples of AI including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays…etc. All these have made human life much easier and convenient that we are so used to them and take them for granted. AI has become indispensable, although it is not absolutely needed without it our world will be in chaos in many ways today.

T HE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact.

Questions have been asked: With the progressive development of AI, human labor will no longer be needed as everything can be done mechanically. Will humans become lazier and eventually degrade to the stage that we return to our primitive form of being? The process of evolution takes eons to develop, so we will not notice the backsliding of humankind. However how about if the AI becomes so powerful that it can program itself to be in charge and disobey the order given by its master, the humankind?

Let us see the negative impact the AI will have on human society [ 10 , 11 ]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has to be industrious to make their living, but with the service of AI, we can just program the machine to do a thing for us without even lifting a tool. Human closeness will be gradually diminishing as AI will replace the need for people to meet face to face for idea exchange. AI will stand in between people as the personal gathering will no longer be needed for communication
  • Unemployment is the next because many works will be replaced by machinery. Today, many automobile assembly lines have been filled with machineries and robots, forcing traditional workers to lose their jobs. Even in supermarket, the store clerks will not be needed anymore as the digital device can take over human labor
  • Wealth inequality will be created as the investors of AI will take up the major share of the earnings. The gap between the rich and the poor will be widened. The so-called “M” shape wealth distribution will be more obvious
  • New issues surface not only in a social sense but also in AI itself as the AI being trained and learned how to operate the given task can eventually take off to the stage that human has no control, thus creating un-anticipated problems and consequences. It refers to AI's capacity after being loaded with all needed algorithm may automatically function on its own course ignoring the command given by the human controller
  • The human masters who create AI may invent something that is racial bias or egocentrically oriented to harm certain people or things. For instance, the United Nations has voted to limit the spread of nucleus power in fear of its indiscriminative use to destroying humankind or targeting on certain races or region to achieve the goal of domination. AI is possible to target certain race or some programmed objects to accomplish the command of destruction by the programmers, thus creating world disaster.

P OSITIVE IMPACT

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, when working together, can design an AI that is aimed at medical diagnosis and treatments, thus offering reliable and safe systems of health-care delivery. As health professors and medical researchers endeavor to find new and efficient ways of treating diseases, not only the digital computer can assist in analyzing, robotic systems can also be created to do some delicate medical procedures with precision. Here, we see the contribution of AI to health care [ 7 , 11 ]:

Fast and accurate diagnostics

IBM's Watson computer has been used to diagnose with the fascinating result. Loading the data to the computer will instantly get AI's diagnosis. AI can also provide various ways of treatment for physicians to consider. The procedure is something like this: To load the digital results of physical examination to the computer that will consider all possibilities and automatically diagnose whether or not the patient suffers from some deficiencies and illness and even suggest various kinds of available treatment.

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, loneliness, and increase social interaction. Now cyborgs have been suggested to accompany those lonely old folks, even to help do some house chores. Therapeutic robots and the socially assistive robot technology help improve the quality of life for seniors and physically challenged [ 12 ].

Reduce errors related to human fatigue

Human error at workforce is inevitable and often costly, the greater the level of fatigue, the higher the risk of errors occurring. Al technology, however, does not suffer from fatigue or emotional distraction. It saves errors and can accomplish the duty faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures have been available for people to choose. Although this AI still needs to be operated by the health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology allowing surgeons to perform minimally invasive procedures, is available in most of the hospitals now. These systems enable a degree of precision and accuracy far greater than the procedures done manually. The less invasive the surgery, the less trauma it will occur and less blood loss, less anxiety of the patients.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging, became routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [ 9 ]. All those are the contribution of the technology of AI.

Virtual presence

The virtual presence technology can enable a distant diagnosis of the diseases. The patient does not have to leave his/her bed but using a remote presence robot, doctors can check the patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

S OME CAUTIONS TO BE REMINDED

Despite all the positive promises that AI provides, human experts, however, are still essential and necessary to design, program, and operate the AI from any unpredictable error from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience in analyzing private and public technology companies, published a free newsletter indicating that although AI has a potential promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases because AI is not omnipotent to solve all problems for human kinds. There are times when AI meets an impasse, and to carry on its mission, it may just proceed indiscriminately, ending in creating more problems. Thus vigilant watch of AI's function cannot be neglected. This reminder is known as physician-in-the-loop [ 13 ].

The question of an ethical AI consequently was brought up by Elizabeth Gibney in her article published in Nature to caution any bias and possible societal harm [ 14 ]. The Neural Information processing Systems (NeurIPS) conference in Vancouver Canada in 2020 brought up the ethical controversies of the application of AI technology, such as in predictive policing or facial recognition, that due to bias algorithms can result in hurting the vulnerable population [ 14 ]. For instance, the NeurIPS can be programmed to target certain race or decree as the probable suspect of crime or trouble makers.

T HE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed.

Bioethics is a discipline that focuses on the relationship among living beings. Bioethics accentuates the good and the right in biospheres and can be categorized into at least three areas, the bioethics in health settings that is the relationship between physicians and patients, the bioethics in social settings that is the relationship among humankind and the bioethics in environmental settings that is the relationship between man and nature including animal ethics, land ethics, ecological ethics…etc. All these are concerned about relationships within and among natural existences.

As AI arises, human has a new challenge in terms of establishing a relationship toward something that is not natural in its own right. Bioethics normally discusses the relationship within natural existences, either humankind or his environment, that are parts of natural phenomena. But now men have to deal with something that is human-made, artificial and unnatural, namely AI. Human has created many things yet never has human had to think of how to ethically relate to his own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving the AI ability to discern so that it will avoid any deviated activities causing unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI will not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned early in 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [ 15 ]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom gives an argument that AI will pose a threat to humankind. He argues that sufficiently intelligent AI can exhibit convergent behavior such as acquiring resources or protecting itself from being shut down, and it might harm humanity [ 16 ].

The question is–do we have to think of bioethics for the human's own created product that bears no bio-vitality? Can a machine have a mind, consciousness, and mental state in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” it has a tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: “I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today,”…. “What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them” [ 17 ]. The High-Level Expert Group on AI of the European Union presented Ethics Guidelines for Trustworthy AI in 2019 that suggested AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful-respecting all applicable laws and regulations
  • Ethical-respecting ethical principles and values
  • Robust-being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [ 18 ].

Seven requirements are recommended [ 18 ]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility or “AI humanities.” To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology is to serve not to manipulate humans and his society. Bostrom and Judkowsky listed responsibility, transparency, auditability, incorruptibility, and predictability [ 19 ] as criteria for the computerized society to think about.

S UGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter at Space and Intelligence System at Easter University, USA, reported just recently that the intelligence community is developing its own AI ethics. The Pentagon made announced in February 2020 that it is in the process of adopting principles for using AI as the guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said that “We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient” [ 20 ]. Two themes have been suggested for the AI community to think more about: Explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [ 20 ].

All the principles suggested by scholars for AI bioethics are well-brought-up. I gather from different bioethical principles in all the related fields of bioethics to suggest four principles here for consideration to guide the future development of the AI technology. We however must bear in mind that the main attention should still be placed on human because AI after all has been designed and manufactured by human. AI proceeds to its work according to its algorithm. AI itself cannot empathize nor have the ability to discern good from evil and may commit mistakes in processes. All the ethical quality of AI depends on the human designers; therefore, it is an AI bioethics and at the same time, a trans-bioethics that abridge human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good, and here it refers to the purpose and functions of AI should benefit the whole human life, society and universe. Any AI that will perform any destructive work on bio-universe, including all life forms, must be avoided and forbidden. The AI scientists must understand that reason of developing this technology has no other purpose but to benefit human society as a whole not for any individual personal gain. It should be altruistic, not egocentric in nature
  • Value-upholding: This refers to AI's congruence to social values, in other words, universal values that govern the order of the natural world must be observed. AI cannot elevate to the height above social and moral norms and must be bias-free. The scientific and technological developments must be for the enhancement of human well-being that is the chief value AI must hold dearly as it progresses further
  • Lucidity: AI must be transparent without hiding any secret agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing and review, and subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't “explain its work” may pose an unacceptable risk. Thus, explainability and interpretability are absolutely required
  • Accountability: AI designers and developers must bear in mind they carry a heavy responsibility on their shoulders of the outcome and impact of AI on whole human society and the universe. They must be accountable for whatever they manufacture and create.

C ONCLUSION

AI is here to stay in our world and we must try to enforce the AI bioethics of beneficence, value upholding, lucidity and accountability. Since AI is without a soul as it is, its bioethics must be transcendental to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said that we must not let computers make important decisions for us because AI as a machine will never possess human qualities such as compassion and wisdom to morally discern and judge [ 10 ]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can up-load all information, data, and programmed to AI to function as a human being, it is still a machine and a tool. AI will always remain as AI without having authentic human feelings and the capacity to commiserate. Therefore, AI technology must be progressed with extreme caution. As Von der Leyen said in White Paper on AI – A European approach to excellence and trust : “AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI. That potentially interferes with people's rights has to be tested and certified before it reaches our single market” [ 21 ].

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

R EFERENCES

12 Risks and Dangers of Artificial Intelligence (AI)

AI has been hailed as revolutionary and world-changing, but it’s not without drawbacks.

Mike Thomas

As AI grows more sophisticated and widespread, the voices warning against the potential dangers of artificial intelligence grow louder.

“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening,” said Geoffrey Hinton , known as the “Godfather of AI” for his foundational work on machine learning and neural network algorithms. In 2023, Hinton left his position at Google so that he could “ talk about the dangers of AI ,” noting a part of him even regrets his life’s work .

The renowned computer scientist isn’t alone in his concerns.

Tesla and SpaceX founder Elon Musk, along with over 1,000 other tech leaders, urged in a 2023 open letter to put a pause on large AI experiments, citing that the technology can “pose profound risks to society and humanity.”

Dangers of Artificial Intelligence

  • Automation-spurred job loss
  • Privacy violations
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Market volatility
  • Weapons automatization
  • Uncontrollable self-aware AI

Whether it’s the increasing automation of certain jobs , gender and racially biased algorithms or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of.

12 Dangers of AI

Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.

Is AI Dangerous?

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

1. Lack of AI Transparency and Explainability 

AI and deep learning models can be difficult to understand, even for those that work directly with the technology . This leads to a lack of transparency for how and why AI comes to its conclusions, creating a lack of explanation for what data AI algorithms use, or why they may make biased or unsafe decisions. These concerns have given rise to the use of explainable AI , but there’s still a long way before transparent AI systems become common practice.

2. Job Losses Due to AI Automation

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing , manufacturing and healthcare . By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey . Goldman Sachs even states 300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.”

As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create 97 million new jobs by 2025 , many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t upskill their workforces .

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.”

Even professions that require graduate degrees and additional post-college training aren’t immune to AI displacement.

As technology strategist Chris Messina has pointed out, fields like law and accounting are primed for an AI takeover. In fact, Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for “a massive shakeup.”

“Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure,” he said in regards to the legal field. “It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”

More on Artificial Intelligence AI Copywriting: Why Writing Jobs Are Safe

3. Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election. 

TikTok, which is just one example of a social media platform that relies on AI algorithms , fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from misleading information. 

Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers as well as deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos, audio clips or replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for sharing misinformation and war propaganda , creating a nightmare scenario where it can be nearly impossible to distinguish between creditable and faulty news. 

“No one knows what’s real and what’s not,” Ford said. “So it really leads to a situation where you literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence... That’s going to be a huge issue.”

More on Artificial Intelligence How to Spot Deepfake Technology

4. Social Surveillance With AI Technology

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is China’s use of facial recognition technology in offices, schools and other venues. Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views. 

Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which disproportionately impact Black communities . Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, How much does it invade Western countries, democracies, and what constraints do we put on it?”

Related Are Police Robots the Future of Law Enforcement?

5. Lack of Data Privacy Using AI Tools

If you’ve played around with an AI chatbot or tried out an AI face filter online, your data is being collected — but where is it going and how is it being used? AI systems often collect personal data to customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data may not even be considered secure from other users when given to an AI system, as one bug incident that occurred with ChatGPT in 2023 “ allowed some users to see titles from another active user’s chat history .” While there are laws present to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data privacy harm experienced by AI.

6. Biases Due to AI

Various forms of AI bias are detrimental too. Speaking to the New York Times , Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race . In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased .

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

The limited experiences of AI creators may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a chatbot impersonating notorious figures in human history. Developers and businesses should exercise greater care to avoid recreating powerful biases and prejudices that put minority populations at risk.

7. Socioeconomic Inequality as a Result of AI 

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting . The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.  

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Workers who perform more manual, repetitive tasks have experienced wage declines as high as 70 percent because of automation, with office and desk workers remaining largely untouched in AI’s early stages. However, the increase in generative AI use is already affecting office jobs , making for a wide range of roles that may be more vulnerable to wage or job loss than others.

Sweeping claims that AI has somehow overcome social boundaries or created more jobs fail to paint a complete picture of its effects. It’s crucial to account for differences based on race, class and other categories. Otherwise, discerning how AI and automation benefit certain individuals and groups at the expense of others becomes more difficult.

8. Weakening Ethics and Goodwill Because of AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential pitfalls. In a 2023 Vatican meeting and in his message for the 2024 World Day of Peace , Pope Francis called for nations to create and adopt a binding international treaty that regulates the development and use of AI.

Pope Francis warned against AI’s ability to be misused, and “create statements that at first glance appear plausible but are unfounded or betray biases.” He stressed how this could bolster campaigns of disinformation, distrust in communications media, interference in elections and more — ultimately increasing the risk of “fueling conflicts and hindering peace.” 

The rapid rise of generative AI tools gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. Plus, biased AI could be used to determine whether an individual is suitable for a job, mortgage, social assistance or political asylum, producing possible injustices and discrimination, noted Pope Francis. 

“The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms,” he said. “And that capacity cannot be reduced to programming a machine.”

More on Artificial Intelligence What Are AI Ethics?

9. Autonomous Weapons Powered By AI

As is too often the case, technological advancements have been harnessed for the purpose of warfare. When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter , over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in AI-fueled autonomous weapons. 

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems , which locate and destroy targets on their own while abiding by few regulations. Because of the proliferation of potent and complex weapons, some of the world’s most powerful nations have given in to anxieties and contributed to a tech cold war .  

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types of cyber attacks , so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute armageddon.  

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions. Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made.   

“The mentality is, ‘If we can do it, we should try it; let’s see what happens,” Messina said. “‘And if we can make money off it, we’ll do a whole bunch of it.’ But that’s not unique to technology. That’s been happening forever.’”

10. Financial Crises Brought About By AI Algorithms

The financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis in the markets.

While AI algorithms aren’t clouded by human judgment or emotions, they also don’t take into account contexts , the interconnectedness of markets and factors like human trust and fear. These algorithms then make thousands of trades at a blistering pace with the goal of selling a few seconds later for small profits. Selling off thousands of trades could scare investors into doing the same thing, leading to sudden crashes and extreme market volatility.

Instances like the 2010 Flash Crash and the Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.  

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they understand their AI algorithms and how those algorithms make decisions. Companies should consider whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.

11. Loss of Human Influence

An overreliance on AI technology could result in the loss of human influence — and a lack in human functioning — in some parts of society. Using AI in healthcare could result in reduced human empathy and reasoning , for instance. And applying generative AI for creative endeavors could diminish human creativity and emotional expression . Interacting with AI systems too much could even cause reduced peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question if it might hold back overall human intelligence, abilities and need for community.

12. Uncontrollable Self-Aware AI

There also comes a worry that AI will progress in intelligence so rapidly that it will become sentient , and act beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already been occurring, with one popular account being from a former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with artificial general intelligence , and eventually artificial superintelligence , cries to completely stop these developments continue to rise .

More on Artificial Intelligence What Is the Eliza Effect?

How to Mitigate the Risks of AI

AI still has numerous benefits , like organizing health data and powering self-driving cars. To get the most out of this promising technology, though, some argue that plenty of regulation is necessary.

“There’s a serious danger that we’ll get [AI systems] smarter than us fairly soon and that these things might get bad motives and take control,” Hinton told NPR . “This isn’t just a science fiction problem. This is a serious problem that’s probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now.”

Develop Legal Regulations

AI regulation has been a main focus for dozens of countries , and now the U.S. and European Union are creating more clear-cut measures to manage the rising sophistication of artificial intelligence. In fact, the White House Office of Science and Technology Policy (OSTP) published the AI Bill of Rights in 2022, a document outlining to help responsibly guide AI use and development. Additionally, President Joe Biden issued an executive order in 2023 requiring federal agencies to develop new rules and guidelines for AI safety and security.

Although legal regulations mean certain AI technologies could eventually be banned, it doesn’t prevent societies from exploring the field.  

Ford argues that AI is essential for countries looking to innovate and keep up with the rest of the world.

“You regulate the way AI is used, but you don’t hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous,” Ford said. “We decide where we want AI and where we don’t; where it’s acceptable and where it’s not. And different countries are going to make different choices.”

More on Artificial Intelligence Will This Election Year Be a Turning Point for AI Regulation?

Establish Organizational AI Standards and Discussions

On a company level, there are many steps businesses can take when integrating AI into their operations. Organizations can develop processes for monitoring algorithms, compiling high-quality data and explaining the findings of AI algorithms. Leaders could even make AI a part of their company culture and routine business discussions, establishing standards to determine acceptable AI technologies.

Guide Tech With Humanities Perspectives

Though when it comes to society as a whole, there should be a greater push for tech to embrace the diverse perspectives of the humanities . Stanford University AI researchers Fei-Fei Li and John Etchemendy make this argument in a 2019 blog post that calls for national and global leadership in regulating artificial intelligence:   

“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS).”

Balancing high-tech innovation with human-centered thinking is an ideal method for producing responsible AI technology and ensuring the future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes. 

“I think we can talk about all these risks, and they’re very real,” Ford said. “But AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face.”

Frequently Asked Questions

What is ai.

AI (artificial intelligence) describes a machine's ability to perform tasks and mimic intelligence at a similar level as humans.

Is AI dangerous?

AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.

Can AI cause human extinction?

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

What happens if AI becomes self-aware?

Self-aware AI has yet to be created, so it is not fully known what will happen if or when this development occurs.

Some suggest self-aware AI may become a helpful counterpart to humans in everyday living, while others suggest that it may act beyond human control and purposely harm humans.

Hal Koss and Matthew Urwin contributed reporting to this story.

Recent Data Science Articles

SQL Pivot: A Tutorial

  • Future Perfect

The case for taking AI seriously as a threat to humanity

Why some people fear AI, explained.

by Kelsey Piper

If you buy something from a Vox link, Vox Media may earn a commission. See our ethics statement.

An illustration of a human and gears in their head.

Stephen Hawking has said , “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “ biggest existential threat .”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at  Oxford  and   UC Berkeley and  many of the researchers  working in AI today believe that advanced AI systems, if deployed carelessly, could permanently cut off human civilization from a good future.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic danger, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation , at games like chess and Go , at important research biology questions like predicting how proteins fold , and at generating images . AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed . They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy games . They are being developed to improve drone targeting and detect missiles .

But narrow AI is getting less narrow . Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches .

And as computers get good enough at narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT-series of text AIs is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be in a text, based on the previous words and its corpus of human language. And yet, it can now identify questions as reasonable or unreasonable and discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first). In order to be very good at the narrow task of text prediction, an AI system will eventually develop abilities that are not narrow at all.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too . Making websites more addictive can be great for your revenue but bad for your users. Releasing a program that writes convincing fake reviews or fake news might make those widespread, making it harder for the truth to get out.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

artificial intelligence is not dangerous essay

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “ everything that’s easy is hard, and everything that’s hard is easy .” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We are just beginning to learn how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars , which are still mediocre under the best conditions despite the billions that have been poured into making them work.

It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.

Other researchers argue that the day may not be so distant after all.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play strategy games , generate fake photos of celebrities , fold proteins , and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling . Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates , we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

And deep learning, unlike previous approaches to AI, is highly suited to developing general capabilities.

“If you go back in history,” top AI researcher and OpenAI cofounder Ilya Sutskever told me , “they made a lot of cool demos with little symbolic AI. They could never scale them up — they were never able to get them to solve non-toy problems. Now with  deep learning  the situation is reversed. ... Not only is [the AI we’re developing] general, it’s also competent — if you want to get the best results on many hard problems, you must use deep learning. And it’s scalable.” 

In other words, we didn’t need to worry about general AI back when winning at chess required entirely different techniques than winning at Go. But now, the same approach produces fake news or music depending on what training data it is fed. And as far as we can discover, the programs just keep getting better at what they do when they’re allowed more computation time — we haven’t discovered a limit to how good they can get. Deep learning approaches to most problems blew past all other approaches when deep learning was first discovered.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

Learn about the smart ways people are fixing the world’s problems. Twice a week, you’ll get a roundup of ideas and solutions for tackling our biggest challenges: improving public health, decreasing human and animal suffering, easing catastrophic risks, and — to put it simply — getting better at doing good. Sign up for the Future Perfect newsletter .

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965 : “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could AI wipe us out?

It’s immediately clear how nuclear bombs will kill us . No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

artificial intelligence is not dangerous essay

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

It is easy to design an AI that averts that specific pitfall. But there are lots of ways that unleashing powerful computer systems will have unexpected and potentially devastating effects, and avoiding all of them is a much harder problem than avoiding any specific one.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming” : the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear , thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items .

Sometimes, the researchers didn’t even know how their AI system cheated : “the agent discovers an in-game bug. ... For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2009 paper “The Basic AI Drives,” Steve Omohundro , who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton . In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) ... began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program . He researches risks to humanity , both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

artificial intelligence is not dangerous essay

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe , and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “ No, experts don’t think superintelligent AI is a threat to humanity ,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allan Institute for Artificial Intelligence. “ Yes, we are worried about the existential risk of artificial intelligence ,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan DaFoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a vocal voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety . “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it . There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out . But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen. AI researchers want to make their AI systems more capable — that’s what makes them more scientifically interesting and more profitable. It’s not clear that the many incentives to make your systems powerful and use them online will suddenly change once systems become powerful enough to be dangerous.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and organizations like Elon-Musk-founded OpenAI, which recently transitioned to a hybrid for-profit/non-profit structure .

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI , and China has made big investments . Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor , whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper in 2018 reviewing the state of the field .

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance : the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI , on the context of China’s AI strategy, and on artificial intelligence and international security .

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017-2019.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “ concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems .

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here . “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe ,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias , robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets , to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicating to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries ; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

artificial intelligence is not dangerous essay

There’s intense disagreement in the field on timelines for critical advances in AI . While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is their models and the reasons for the remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction . But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default . They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. Success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind . “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket : something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.

More in this stream

OpenAI insiders are demanding a “right to warn” the public 

OpenAI insiders are demanding a “right to warn” the public 

The double sexism of ChatGPT’s flirty “Her” voice

The double sexism of ChatGPT’s flirty “Her” voice

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

Most popular, why europe is lurching to the right, justices sotomayor and kagan must retire now, business owners are buying into a bogus myth about driving, take a mental break with the newest vox crossword, elephants have names — and they use them with each other, today, explained.

Understand the world with a daily explainer plus the most compelling stories of the day.

More in Future Perfect

World leaders neglected this crisis. Now genocide looms.

World leaders neglected this crisis. Now genocide looms.

Elephants have names — and they use them with each other

This is your kid on smartphones

Where AI predictions go wrong

Where AI predictions go wrong

What if you could have a panic attack, but for joy?

What if you could have a panic attack, but for joy?

World leaders neglected this crisis. Now genocide looms.

People can’t stop being weird about Caitlin Clark

Megan Thee Stallion and the growing scourge of sexually explicit deepfakes

Megan Thee Stallion and the growing scourge of sexually explicit deepfakes

Apple’s convincing case that AI doesn’t have to be scary

Apple’s convincing case that AI doesn’t have to be scary

Vox Announces Media Partnership With Institute for the Study of Modern Authoritarianism for Their Inaugural Conference: “Liberalism for the 21st Century”

Vox Announces Media Partnership With Institute for the Study of Modern Authoritarianism for Their Inaugural Conference: “Liberalism for the 21st Century”

Hunter Biden’s gun conviction, briefly explained

Hunter Biden’s gun conviction, briefly explained

Business owners are buying into a bogus myth about driving

Artificial Intelligence (AI) — Top 3 Pros and Cons

Cite this page using APA, MLA, Chicago, and Turabian style guides

Artificial intelligence (AI) is the use of “computers and machines to mimic the problem-solving and decision-making capabilities of the human mind,” according to IBM. [ 1 ]

The idea of AI dates back at least 2,700 years. As Adrienne Mayor, research scholar, folklorist, and science historian at Stanford University, explains: “Our ability to imagine artificial intelligence goes back to ancient times. Long before technological advances made self-moving devices possible, ideas about creating artificial life and robots were explored in ancient myths.” [ 2 ]

Mayor notes that the myths about Hephaestus , the Greek god of invention and blacksmithing, included precursors to AI. For example, Hephaestus created the giant bronze man, Talos, which had a mysterious life force from the gods called ichor . Hephaestus also created Pandora and her infamous box, as well as a set of automated servants made of gold that were given the knowledge of the gods. Mayor concludes, “Not one of those myths has a good ending once the artificial beings are sent to Earth. It’s almost as if the myths say that it’s great to have these artificial things up in heaven used by the gods. But once they interact with humans, we get chaos and destruction.” [ 2 ]

The modern notion of AI largely began when Alan Turing , who contributed to breaking the Nazi’s Enigma code during World War II, created the Turing test to determine if a computer is capable of “thinking.” The value and legitimacy of the test have long been debated. [ 1 ] [ 3 ] [ 4 ]

The “Father of Artificial Intelligence,” John McCarthy , coined the term “artificial intelligence” when he, with Marvin Minsky and Claude Shannon, proposed a 1956 summer workshop on the topic at Dartmouth College. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines.” He later created the computer programming language LISP (which is still used in AI), hosted computer chess games against human Russian opponents, and developed the first computer with ”hand-eye” capability, all important building blocks for AI. [ 1 ] [ 5 ] [ 6 ] [ 7 ]

The first AI program designed to mimic how humans solve problems, Logic Theorist, was created by Allen Newell , J.C. Shaw, and Herbert Simon in 1955-1956. The program was designed to solve problems from Principia Mathematica (1910-13) written by Alfred North Whitehead and Bertrand Russell . [ 1 ] [ 8 ]

In 1958, Frank Rosenblatt invented the Perceptron , which he claimed was “the first machine which is capable of having an original idea.” Though the machine was hounded by skeptics, it was later praised as the “foundations for all of this artificial intelligence.” [ 1 ] [ 9 ]

As computers became cheaper in the 1960s and 70s, AI programs such as Joseph Weizenbaum’s ELIZA flourished, and U.S. government agencies including the Defense Advanced Research Projects Agency (DARPA) began to fund AI-related research. But computers were still too weak to manage the language tasks researchers asked of them. Another influx of funding in the 1980s and early 90s furthered the research, including the invention of expert systems by Edward Feigenbaum and Joshua Lederberg . But progress again waned with another drop in government funding. [ 10 ]

In 1997, Gary Kasparov , reigning world chess champion and grand master, was defeated by IBM’s Deep Blue AI computer program, a major event in AI history. More recently, advances in computer storage limits and speeds have opened new avenues for AI research and implementation, aiding scientific research and forging new paths in medicine for patient diagnosis, robotic surgery, and drug development. [ 1 ] [ 10 ] [ 11 ] [ 12 ]

Now, artificial intelligence is used for a variety of everyday implementations including facial recognition software, online shopping algorithms, search engines, digital assistants like Siri and Alexa, translation services, automated safety functions on cars, cybersecurity, airport body scanning security, poker playing strategy, and fighting disinformation on social media. [ 13 ] [ 58 ]

With the field growing by leaps and bounds, on Mar. 29, 2023, tech giants including Elon Musk ,   Steve Wozniak , Craig Peters (CEO of Getty Images), author Yuval Noah Harari , and politician Andrew Yang published an open letter calling for a six-month pause on AI “systems more powerful than GPT-4.” (The latter, “Generative Pre-trained Transformer 4,” is an AI model that can generate human-like text and images.) The letter states, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…. AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” Within a day of its release, the letter had garnered 1380 signatures—from engineers, professors, artists, and grandmothers alike. [ 59 ] [ 62 ]

On Oct. 30, 2023, President Joe Biden signed an executive order on artificial intelligence that “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” Vice President Kamala Harris stated, “We have a moral, ethical and societal duty to make sure that A.I. is adopted and advanced in a way that protects the public from potential harm. We intend that the actions we are taking domestically will serve as a model for international action.” [ 60 ] [ 61 ]

Despite such precautions, experts noted that many of the new standards would be difficult to enforce, especially as new concerns and controversies over AI evolve almost daily. AI developers, for example, have faced criticism for using copyrighted work to train AI models and for politically skewing AI-produced information. Generative programs such as ChatGPT and DALL-E3 claim to produce “original” output because developers have exposed the programs to huge databases of existing texts and images,material that consists of copyrighted works. OpenAI and Anthropic, as well as other AI companies, have been sued by The New York Times , Microsoft , countless authors including Jodi Picoult, George R.R. Martin , Sarah Silverman , and John Grisham , music publishers including Universal Music Publishing Group, and numerous visual artists as well as Getty Images, among others. Many companies’ terms of service, including Encyclopaedia Britannica, now require that AI companies obtain written permission to data mine for AI bot training.  [ 63 ] [ 64 ] [ 65 ] [ 66 ] [ 67 ] [ 68 ] [ 69 ] [ 70 ]

Controversy arose yet again in early 2024, when Google’s AI chatbot Gemini began skewing historical events by generating images of racially diverse 1940s German Nazi soldiers and Catholic popes (including a Black female pope). Republican lawmakers accused Google of promoting leftist ideology and spreading disinformation through its AI tool. Globally, fears have been expressed that such technology could undermine the democratic process in upcoming elections. As a result, Google agreed to correct its faulty historical imaging and to limit election-related queries in countries with forthcoming elections. Similarly, the FCC ( Federal Communications Commission ) outlawed the use of AI-generated voices in robocalls after a New Hampshire political group was found to be placing robocalls featuring an AI-generated voice that mimicked President Joe Biden in an effort to suppress Democratic party primary voting. [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] [ 76 ] [ 77 ]

Is Artificial Intelligence Good for Society?

Pro 1 AI can make everyday life more convenient and enjoyable, improving our health and standard of living. Why sit in a traffic jam when a map app can navigate you around the car accident? Why fumble with shopping bags searching for your keys in the dark when a preset location-based command can have your doorway illuminated as you approach your now unlocked door? [ 23 ] Why scroll through hundreds of possible TV shows when the streaming app already knows what genres you like? Why forget eggs at the grocery store when a digital assistant can take an inventory of your refrigerator and add them to your grocery list and have them delivered to your home? All of these marvels are assisted by AI technology. [ 23 ] AI-enabled fitness apps boomed during the COVID-19 pandemic when gyms were closed, increasing the number of AI options for at-home workouts. Now, you can not only set a daily steps goal with encouragement reminders on your smart watch, but you can ride virtually through the countryside on a Peloton bike from your garage or have a personal trainer on your living room TV. For more specialized fitness, AI wearables can monitor yoga poses or golf and baseball swings. [ 24 ] [ 25 ] AI can even enhance your doctor’s appointments and medical procedures. It can alert medical caregivers to patterns in your health data as compared to the vast library of medical data, while also doing the paperwork tied to medical appointments so doctors have more time to focus on their patients, resulting in more personalized care. AI can even help surgeons be quicker, more accurate, and more minimally invasive in their operations. [ 26 ] Smart speakers including Amazon’s Echo can use AI to soothe babies to sleep and monitor their breathing. Using AI, speakers can also detect regular and irregular heartbeats, as well as heart attacks and congestive heart failure. [ 27 ] [ 28 ] [ 29 ] Read More
Pro 2 AI makes work easier for students and professionals alike. Much like a calculator did not signal the end of students’ grasp of mathematics knowledge, typing did not eliminate handwriting, and Google did not herald the end of research skills, AI does not signal the end of reading and writing, or education in general. [ 78 ] [ 79 ] Elementary teacher Shannon Morris explains that AI tools like “ChatGPT can help students by providing real-time answers to their questions, engaging them in personalized conversations, and providing customized content based on their interests. It can also offer personalized learning resources, videos, articles, and interactive activities. This resource can even provide personalized recommendations for studying, help with research, provide context-specific answers, and offer educational games.” She also notes that teachers’ more daunting tasks like grading and making vocabulary lists can be streamlined with AI tools. [ 79 ] For adults, AI can similarly make work easier and more efficient, rather than signaling the rise of the robot employee. Pesky, time-consuming tasks like scheduling and managing meetings, finding important emails amongst the spam, prioritizing tasks for the day, and creating and posting social media content can be delegated to AI, freeing up time for more important and rewarding work. The technology can also help with brainstorming, understanding difficult concepts, finding errors in code, and learning languages via conversation, making daunting tasks more manageable. [ 80 ] AI is a tool that, if used responsibly, can enhance both learning and work for everyone. Carri Spector of the Stanford Graduate School of Education says, “I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.” [ 81 ] Read More
Pro 3 AI helps minorities by offering accessibility for people with disabilities. Artificial intelligence is commonly integrated into smartphones and other household devices. Virtual assistants, including Siri, Alexa, and Cortana, can perform innumerable tasks from making a phone call to navigating the internet. People who are deaf and hearing impaired can access transcripts of voicemails or other audio, for example. [ 20 ] Other virtual assistants can transcribe conversations as they happen, allowing for more comprehension and participation by those who are communicationally challenged. Using voice commands with virtual assistants can allow better use by people with dexterity disabilities who may have difficulty navigating small buttons or screens or turning on a lamp. [ 20 ] Apps enabled by AI on smartphones and other devices, including VoiceOver and TalkBack, can read messages, describe app icons or images, and give information such as battery levels for visually impaired people. Other apps, such as Voiceitt, can transcribe and standardize the voices of people with speech impediments. [ 20 ] Wheelmap provides users with information about wheelchair accessibility. And Evelity offers indoor navigation tools that are customized to the user’s needs, providing audio or text instructions and routes for wheelchair accessibility. [ 20 ] Other AI implementations such as smart thermostats, smart lighting, and smart plugs can be automated to work on a schedule to aid people with mobility or cognitive disabilities to lead more independent lives. [ 21 ] More advanced AI projects can combine with robotics to help physically disabled people. HOOBOX Robotics, for example, uses facial recognition software to allow a wheelchair user to move the wheelchair with facial expressions, making movement easier for seniors and those with ALS or quadriparesis. [ 22 ] Read More
Pro 4 Artificial intelligence can improve workplace safety. AI doesn’t get stressed, tired, or sick, three major causes of human accidents in the workplace. AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50% of construction companies that used drones to inspect roofs and other risky tasks saw improvements in safety. [ 14 ] [ 15 ] Artificial intelligence can also help humans be safer. For instance, AI can ensure employees are up to date on training by tracking and automatically scheduling safety or other training. AI can also check and offer corrections for ergonomics to prevent repetitive stress injuries or worse. [ 16 ] An AI program called AI-SAFE (Automated Intelligent System for Assuring Safe Working Environments) aims to automate the workplace personal protective equipment (PPE) check, eliminating human errors that could cause accidents in the workplace. As more people wear PPE to prevent the spread of COVID-19 and other viruses, this sort of AI could protect against large-scale outbreaks. [ 17 ] [ 18 ] [ 19 ] In India, AI was used in the midst of the coronavirus pandemic to reopen factories safely by providing camera, cell phone, and smart wearable device-based technology to ensure social distancing, take employee temperatures at regular intervals, and perform contact tracing if anyone tested positive for the virus. [ 18 ] [ 19 ] AI can also perform more sensitive tasks in the workplace such as scanning work emails for improper behavior and types of harassment. [ 15 ] Read More
Con 1 AI will harm the standard of living for many people by causing mass unemployment as robots replace people. AI robots and other software and hardware are becoming less expensive and need none of the benefits and services required by human workers, such as sick days, lunch hours, bathroom breaks, health insurance, pay raises, promotions, and performance reviews, which spells trouble for workers and society at large. [ 51 ] 48% of experts believed AI will replace a large number of blue- and even white-collar jobs, creating greater income inequality, increased unemployment, and a breakdown of the social order. [ 35 ] The axiom “everything that can be automated, will be automated” is no longer science fiction. Self-checkout kiosks in stores like CVS, Target, and Walmart use AI-assisted video and scanners to prevent theft, alert staff to suspicious transactions, predict shopping trends, and mitigate sticking points at checkout. These AI-enabled machines have displaced human cashiers. About 11,000 retail jobs were lost in 2019, largely due to self-checkout and other technologies. In 2020, during the COVID-19 pandemic, a self-checkout manufacturer shipped 25% more units globally, reflecting the more than 70% of American grocery shoppers who preferred self- or touchless checkouts. [ 35 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] An Oct. 2020 World Economic Forum report found 43% of businesses surveyed planned to reduce workforces in favor of automation. Many businesses, especially fast-food restaurants, retail shops, and hotels, automated jobs during the COVID-19 pandemic. [ 35 ] Income inequality was exacerbated over the last four decades as 50-70% of changes in American paychecks were caused by wage decreases for workers whose industries experienced rapid automation, including AI technologies. [ 56 ] [ 57 ] Read More
Con 2 AI can be easily politicized, spurring disinformation and cultural laziness. The idea that the Internet is making us stupid is legitimate, and AI is like the Internet on steroids. With AI bots doing everything from research to writing papers, from basic math to logic problems, from generating hypotheses to performing science experiments, from editing photos to creating “original” art, students of all ages will be tempted (and many will succumb to the temptation) to use AI for their school work, undermining education goals. [ 82 ] [ 83 ] [ 84 ] [ 85 ] [ 86 ] “The academic struggle for students is what pushes them to become better writers, thinkers and doers. Like most positive outcomes in life, the important part is the journey. Soon, getting college degrees without AI assistance will be as foreign to the next generation as payphones and Blockbuster [are to the current generation], and they will suffer for it,” says Mark Massaro, professor of English at Florida SouthWestern State College. [ 83 ] A June 2023 study found increased use of AI correlates with increased student laziness due to a loss of human decision-making. Similarly, an Oct. 2023 study found increased laziness and carelessness as well as a decline in work quality when humans worked alongside AI robots. [ 87 ] [ 88 ] [ 89 ] The implications of allowing AI to complete tasks are enormous. We will see declines in work quality and human motivation as well as the rise of dangerous situations from deadly workplace accidents to George Orwell’s dreaded “ groupthink .” And, when humans have become too lazy to program the technology, we’ll see lazy AI, too. [ 90 ] Google’s AI chatbot Gemini even generated politically motivated, historical inaccuracies by inserting people of color into historical events they never participated in, further damaging historical literacy. “An overreliance on technology will further sever the American public from determining truth from lies, information from propaganda, a critical skill that is slowly becoming a lost art, leaving the population willfully ignorant and intellectually lazy,” explains Massaro. [ 73 ] [ 83 ] Read More
Con 3 AI hurts minorities by repeating and exacerbating human racism. Facial recognition has been found to be racially biased, easily recognizing the faces of white men while wrongly identifying Black women 35% of the time. One study of Amazon’s Rekognition AI program falsely matched 28 members of the U.S. Congress with mugshots from a criminal database, with 40% of the errors being people of color. [ 22 ] [ 36 ] [ 43 ] [ 44 ] AI has also been disproportionately employed against black and brown communities, with more federal and local police surveillance cameras in neighborhoods of color, and more social media surveillance of Black Lives Matter and other Black activists. The same technologies are used for housing and employment decisions and TSA airport screenings. Some cities, including Boston and San Francisco, have banned police use of facial recognition for these reasons. [ 36 ] [ 43 ] One particular AI software tasked with predicting recidivism risk for U.S. courts–the Correctional Offender Management Profiling for Alternative Sanctions (Compas)–was found to falsely label Black defendants as high risk at twice the rate of white defenders, and to falsely label white defendants as low risk more often. AI is also incapable of distinguishing between when the N-word is being used as a slur and when it’s being used culturally by a Black person. [ 45 ] [ 46 ] In China, facial recognition AI has been used to track Uyghurs, a largely Muslim minority. The U.S. and other governments have accused the Chinese government of genocide and forced labor in Xinjiang where a large population of Uyghurs live. AI algorithms have also been found to show a “persistent anti-Muslim bias,” by associating violence with the word “Muslim” at a higher rate than with words describing people of other religions including Christians, Jews, Sikhs, and Buddhists. [ 47 ] [ 48 ] [ 50 ] Read More
Con 4 Artificial intelligence poses dangerous privacy risks. Facial recognition technology can be used for passive, warrantless surveillance without knowledge of the person being watched. In Russia, facial recognition was used to monitor and arrest protesters who supported jailed opposition politician Aleksey Navalny , who was found dead in prison in 2024. Russians fear a new facial recognition payment system for Moscow’s metro will increase these sorts of arrests. [ 36 ] [ 37 ] [ 38 ] Ring, the AI doorbell and camera company owned by Amazon, partnered with more than 400 police departments, allowing the police to request footage from users’ doorbell cameras. While users were allowed to deny access to any footage, privacy experts feared the close relationship between Ring and the police could override customer privacy, especially when the cameras frequently record others’ property. The policy ended in 2024, but experts say other companies allow similar invasions. [ 39 ] [ 91 ] AI also follows you on your weekly errands. Target used an algorithm to determine which shoppers were pregnant and sent them baby- and pregnancy-specific coupons in the mail, infringing on the medical privacy of those who may be pregnant, as well as those whose shopping patterns may just imitate pregnant people. [ 40 ] [ 41 ] Moreover, artificial intelligence can be a godsend to crooks. In 2020, a group of 17 criminals defrauded $35 million from a bank in the United Arab Emirates using AI “deep voice” technology to impersonate an employee authorized to make money transfers. In 2019, thieves attempted to steal $240,000 using the same AI technology to impersonate the CEO of an energy firm in the United Kingdom. [ 42 ] Read More

Discussion Questions

1. Is artificial intelligence good for society? Explain your answer(s).

2. What applications would you like to see AI take over? What applications (such as handling our laundry or harvesting fruit and fulfilling food orders) would you like to see AI stay away from. Explain your answer(s).

3. Think about how AI impacts your daily life. Do you use facial recognition to unlock your phone or a digital assistant to get the weather, for example? Do these applications make your life easier or could you live without them? Explain your answers.

Take Action

1. Consider Kai-Fu Lee’s TED Talk argument that AI can “save our humanity .”

2. Listen to AI-expert Toby Walsh discuss the pros and cons of AI in his recent interview at Britannica.  

3. Learn “ everything you need to know about artificial intelligence ” with Nick Heath

4. Examine the “ weird” dangers of AI with Janelle Shane’s TED Talk.

5. Consider how you felt about the issue before reading this article. After reading the pros and cons on this topic, has your thinking changed? If so, how? List two to three ways. If your thoughts have not changed, list two to three ways your better understanding of the “other side of the issue” now helps you better argue your position.

6. Push for the position and policies you support by writing US national senators and representatives .

1.IBM Cloud Education, “Artificial Intelligence (AI),” .com, June 3, 2020
2.Aaron Hertzmann, “This Is What the Ancient Greeks Had to Say about Robotics and AI,” , Mar. 18, 2019
3.Imperial War Museums, “How Alan Turing Cracked the Enigma Code,” (accessed Oct. 7, 2021)
4.Noel Sharkey, “Alan Turing: The Experiment That Shaped Artificial Intelligence,” , June 21, 2012
5.Computer History Museum, “John McCarthy,” (accessed Oct. 7, 2021)
6.Andy Peart, “Homage to John McCarthy, the Father of Artificial Intelligence (AI),” , Oct. 29, 2020
7.Andrew Myers, “Stanford's John McCarthy, Seminal Figure of Artificial Intelligence, Dies at 84,” , Oct. 25, 2011
8.History Computer, “Logic Theorist – Complete History of the Logic Theorist Program,” (accessed Oct. 7, 2021)
9.Melanie Lefkowitz, “Professor’s Perceptron Paved the Way for AI – 60 Years Too Soon,” , Sep. 25, 2019
10.Rockwell Anyoha, “The History of Artificial Intelligence,” , Aug. 28, 2017
11.Victoria Stern, “AI for Surgeons: Current Realities, Future Possibilities,” , July 8, 2021
12.Dan Falk, “How Artificial Intelligence Is Changing Science,” , Mar. 11, 2019
13.European Parliament, “What Is Artificial Intelligence and How Is It Used?,” , Mar. 29, 2021
14.Irene Zueco, “Will AI Solve Your Workplace Safety Problems?,” (accessed Oct. 13, 2021)
15.National Association of Safety Professionals, “How Artificial Intelligence/Machine Learning Can Improve Workplace Health, Safety and Environment,” , Jan. 10, 2020
16.Ryan Quiring, “Smarter Than You Think: AI’s Impact on Workplace Safety,” , June 8, 2021
17.Nick Chrissos, “Introducing AI-SAFE: A Collaborative Solution for Worker Safety,” , Jan 23, 2018
18.Tejpreet Singh Chopra, “Factory Workers Face a Major COVID-19 Risk. Here’s How AI Can Help Keep Them Safe,” , July 29, 2020
19.Mark Bula, “How Artificial Intelligence Can Enhance Workplace Safety as Lockdowns Lift,” , July 29, 2020
20.Carole Martinez, “Artificial Intelligence and Accessibility: Examples of a Technology that Serves People with Disabilities,” , Mar. 5, 2021
21.Noah Rue, “How AI Is Helping People with Disabilities,” rollingwithoutlimits.com, Feb. 25, 2019
22.Jackie Snow, “How People with Disabilities Are Using AI to Improve Their Lives,” , Jan. 30, 2019
23.Bernard Marr, “The 10 Best Examples of How AI Is Already Used in Our Everyday Life,” , Dec. 16, 2019
24.John Koetsier, “AI-Driven Fitness: Making Gyms Obsolete?,” , Aug. 4, 2020
25.Manisha Sahu, “How Is AI Revolutionizing the Fitness Industry?,” , July 9, 2021
26.Amisha, et al., “Overview of Artificial Intelligence in Medicine,” , , July 2019
27.Sarah McQuate, “First Smart Speaker System That Uses White Noise to Monitor Infants’ Breathing,” , Oct. 15, 2019
28.Science Daily, “First AI System for Contactless Monitoring of Heart Rhythm Using Smart Speakers,” sciencedaily.com, Mar. 9, 2021
29.Nicholas Fearn, “Artificial Intelligence Detects Heart Failure from One Heartbeat with 100% Accuracy,” , Sep. 12, 2019
30.Aditya Shah, “Fighting Fire with Machine Learning: Two Students Use TensorFlow to Predict Wildfires,” , June 4, 2018
31.Saad Ansari and Yasir Khokhar, “Using TensorFlow to keep farmers happy and cows healthy,” , Jan. 18, 2018
32.M Umer Mirza, “Top 10 Unusual but Brilliant Use Cases of Artificial Intelligence (AI),” , Sep. 17, 2020
33.Benard Marr, “10 Wonderful Examples Of Using Artificial Intelligence (AI) For Good,” , June 22, 2020
34.Calum McClelland, “The Impact of Artificial Intelligence - Widespread Job Losses,” , July 1, 2020
35.Aaron Smith and Janna Anderson, “AI, Robotics, and the Future of Jobs,” , Aug. 6, 2014
36.ACLU, “Facial Recognition,” (accessed Oct. 15, 2021)
37.Pjotr Sauer, “Privacy Fears as Moscow Metro Rolls out Facial Recognition Pay System,” , Oct. 15, 2021
38.Gleb Stolyarov and Gabrielle Tétrault-Farber, “‘Face Control’: Russian Police Go Digital against Protesters,” , Feb. 11, 2021
39.Drew Harwell, “Doorbell-Camera Firm Ring Has Partnered with 400 Police Forces, Extending Surveillance Concerns,” , Aug. 28, 2019
40.David A. Teich, “Artificial Intelligence and Data Privacy – Turning a Risk into a Benefit,” , Aug. 10, 2020
41.Kashmir Hill, “How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did,” , Feb. 16, 2012
42.Thomas Brewster, “Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find,” , Oct. 14, 2021
43.ACLU, “How is Face Recognition Surveillance Technology Racist?,” , June 16, 2020
44.Alex Najibi, “Racial Discrimination in Face Recognition Technology,” , Oct. 4, 2020
45.Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” , May 23, 2016
46.Stephen Buranyi, “Rise of the Racist Robots – How AI Is Learning All Our Worst Impulses,” , Aug. 8, 2017
47.Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” , Apr. 14, 2019
48.BBC, “Who Are the Uyghurs and Why Is China Being Accused of Genocide?,” , June 21, 2021
49.Jorge Barrera and Albert Leung, “AI Has a Racism Problem, but Fixing It Is Complicated, Say Experts,” , May 17, 2020
50.Jacob Snow, “Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots,” , July 26, 2018
51.Jack Kelly, “Wells Fargo Predicts That Robots Will Steal 200,000 Banking Jobs within the Next 10 Years,” , Oct. 8, 2019
52.Loss Prevention Media, “How AI Helps Retailers Manage Self-Checkout Accuracy and Loss,” , Sep. 28, 2021
53.Anne Stych, “Self-Checkouts Contribute to Retail Jobs Decline,” , Apr. 8, 2019
54.Retail Technology Innovation Hub, “Retailers Invest Heavily in Self-Checkout Tech amid Covid-19 Outbreak,” retailtechinnovationhub.com, July 6, 2021
55.Retail Consumer Experience, “COVID-19 Drives Grocery Shoppers to Self-Checkout,” , Apr. 8, 2020
56.Daron Acemoglu and Pascual Restrepo, “Tasks, Automation, and the Rise in US Wage Inequality,” , June 2021
57.Jack Kelly, “​​Artificial Intelligence Has Caused A 50% to 70% Decrease in Wages—Creating Income Inequality and Threatening Millions of Jobs,” , June 18, 2021
58.Keith Romer, "How A.I. Conquered Poker," , Jan. 18, 2022
59.Future of Life Institute, "Pause Giant AI Experiments: An Open Letter," futureoflife.org, Mar. 29, 2023
60.Cecilia Kang and David E. Sanger, "Biden Issues Executive Order to Create A.I. Safeguards," nytimes.com, Oct. 30, 2023
61.White House, "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," whitehouse.gov, Oct. 30, 2023
62.Harry Guinness, “What Is GPT? Everything You Need to Know about GPT-3 and GPT-4,”zapier.com, Oct. 9, 2023
63.Michael M. Grynbaum and Ryan Mac, “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” nytimes.com, Dec. 27, 2023
64.Darian Woods and Adrian Ma, “Artists File Class-Action Lawsuit Saying AI Artwork Violates Copyright Laws,” npr.org, Feb. 3, 2023
65.Dan Milmo, “Sarah Silverman Sues OpenAI and Meta Claiming AI Training Infringed Copyright,” theguardian.com, July 10, 2023
66.Olafimihan Oshin, “Nonfiction Authors Sue OpenAI, Microsoft for Copyright Infringement,” thehill.com, Nov. 22, 2023
67.Matthew Ismael Ruiz, “Music Publishers Sue AI Company Anthropic for Copyright Infringement,” pitchfork.com, Oct, 19, 2023
68.Alexandra Alter and Elizabeth A. Harris, “Franzen, Grisham and Other Prominent Authors Sue OpenAI,” nytimes.com, Sep. 20, 2023
69.Baker & Hostetler LLP, “Case Tracker: Artificial Intelligence, Copyrights and Class Actions,” bakerlaw.com (accessed Feb. 26, 2024)
70.Encyclopaedia Britannica, “Encyclopaedia Britannica, Inc. Terms of Use,” corporate.britannica.com (accessed Feb. 26, 2024)
71.Josh Hawley, “Hawley to Google CEO over Woke Gemini AI Program: ‘Come Testify to Congress. Under Oath. In Public.,’” hawley.senate.gov, Feb. 28, 2024
72.Adi Robertson, “Google Apologizes for ‘Missing the Mark’ after Gemini Generated Racially Diverse Nazis,” theverge.com. Feb. 21, 2024
73.Nick Robins-Early, “Google Restricts AI Chatbot Gemini from Answering Questions on 2024 Elections,” theguardian.com, Mar. 12, 2024
74.Jagmeet Singh, “Google Won’t Let You Use Its Gemini AI to Answer Questions about an Upcoming Election in Your Country,” techcrunch.com, Mar. 12, 2024
75.Federal Communications Commission, “FCC Makes AI-Generated Voices in Robocalls Illegal,” fcc.gov, Feb. 8, 2024
76.Ali Swenson and Will Weissert, “AI Robocalls Impersonate President Biden in an Apparent Attempt to Suppress Votes in New Hampshire,” pbs.org, Jan. 22, 2024
77.Shannon Bond, “The FCC Says AI Voices in Robocalls Are Illegal,” npr.org, Feb. 8, 2024
78.Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, 2020
79.Shannon Morris, “Stop Saying ChatGPT Is the End of Education—It’s Not,” weareteachers.com, Jane. 12, 2023
80.Juliet Dreamhunter, “33 Mindblowing Ways AI Makes Life Easier in 2024,” juliety.com Jan. 9, 2024
81.Carrie Spector, "What Do AI Chatbots Really Mean for Students and Cheating?," acceleratelearning.stanford.edu, Oct. 31, 2023
82.Aki Peritz, “A.I. Is Making It Easier Than Ever for Students To Cheat,” slate.com, Sep. 6, 2022
83.Mark Massaro, “AI Cheating Is Hopelessly, Irreparably Corrupting Us Higher Education,” thehill.com, Aug. 23, 2023
84.Sibel Erduran, “AI Is Transforming How Science Is Done. Science Education Must Reflect This Change.,” science.org, Dec. 21. 2023
85.Kevin Dykema, “Math and Artificial Intelligence” nctm.org, Nov. 2023
86.Lauren Coffey, “Art Schools Get Creative Tackling AI,” insidehighered.com, Nov. 8, 2023
87.Sayed Fayaz Ahmad, et al., “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education,” Humanities and Social Sciences Communications, ncbi.nlm.nih.gov, June 2023
88.Tony Ho Tran, “Robots and AI May Cause Humans To Become Dangerously Lazy,” thedailybeast.com, Oct. 18, 2023
89.Dietlind Helene Cymek, Anna Truckenbrodt, and Linda Onnasch, “Lean Back or Lean In? Exploring Social Loafing in Human–Robot Teams,” frontiersin.org, Oct. 18, 2023
90.Brian Massey, “Is AI The New Groupthink?,” linkedin.com, May 11, 2023
91.Associated Press, “Ring Will No Longer Allow Police to Request Users’ Doorbell Camera Footage,” npr.org, Jan. 25, 2024

ProCon/Encyclopaedia Britannica, Inc. 325 N. LaSalle Street, Suite 200 Chicago, Illinois 60654 USA

Natalie Leppard Managing Editor [email protected]

© 2023 Encyclopaedia Britannica, Inc. All rights reserved

New Topic

  • Social Media
  • Death Penalty
  • School Uniforms
  • Video Games
  • Animal Testing
  • Gun Control
  • Banned Books
  • Teachers’ Corner

Cite This Page

ProCon.org is the institutional or organization author for all ProCon.org pages. Proper citation depends on your preferred or required style manual. Below are the proper citations for this page according to four style manuals (in alphabetical order): the Modern Language Association Style Manual (MLA), the Chicago Manual of Style (Chicago), the Publication Manual of the American Psychological Association (APA), and Kate Turabian's A Manual for Writers of Term Papers, Theses, and Dissertations (Turabian). Here are the proper bibliographic citations for this page according to four style manuals (in alphabetical order):

[Editor's Note: The APA citation style requires double spacing within entries.]

[Editor’s Note: The MLA citation style requires double spacing within entries.]

Essay Service Examples Technology Artificial Intelligence

Is Artificial Intelligence Dangerous? Essay

Table of contents

Introduction to artificial intelligence, the evolution of technology and artificial intelligence, the positive impact of artificial intelligence, potential dangers and ethical considerations of artificial intelligence, the debate: benefits and risks of artificial intelligence, conclusion: the future of humanity with artificial intelligence.

  • Proper editing and formatting
  • Free revision, title page, and bibliography
  • Flexible prices and money-back guarantee

document

Our writers will provide you with an essay sample written from scratch: any topic, any deadline, any instructions.

reviews

Cite this paper

Related essay topics.

Get your paper done in as fast as 3 hours, 24/7.

Related articles

Is Artificial Intelligence Dangerous? Essay

Most popular essays

  • Artificial Intelligence
  • Intelligence

Artificial Intelligence, also known as AI, is amongst the latest trend in today’s world. AI is...

In a world where technology plays a significant role in individual lives, technology focuses on...

Starting from Turing test in 1950, Artificial Intelligence has been brought on public notice for...

The complexity and height of data in healthcare means that artificial intelligence (AI) is...

Artificial Intelligence (AI) can be defined as the capability of a computer based system to think...

  • Perspective

Artificial Intelligence is a quickly growing industry, and one that is changing all the time. The...

Here, by systems thinking gender bias and sustainability challenges, the issues with artificial...

For quite some time, specialists have worried about the unforeseen impacts of artificial...

While intelligence collection plays a major role in the intelligence cycle, it is significant in...

Join our 150k of happy users

  • Get original paper written according to your instructions
  • Save time for what matters most

Fair Use Policy

EduBirdie considers academic integrity to be the essential part of the learning process and does not support any violation of the academic standards. Should you have any questions regarding our Fair Use Policy or become aware of any violations, please do not hesitate to contact us via [email protected].

We are here 24/7 to write your paper in as fast as 3 hours.

Provide your email, and we'll send you this sample!

By providing your email, you agree to our Terms & Conditions and Privacy Policy .

Say goodbye to copy-pasting!

Get custom-crafted papers for you.

Enter your email, and we'll promptly send you the full essay. No need to copy piece by piece. It's in your inbox!

Artificial Intelligence Essay

500+ words essay on artificial intelligence.

Artificial intelligence (AI) has come into our daily lives through mobile devices and the Internet. Governments and businesses are increasingly making use of AI tools and techniques to solve business problems and improve many business processes, especially online ones. Such developments bring about new realities to social life that may not have been experienced before. This essay on Artificial Intelligence will help students to know the various advantages of using AI and how it has made our lives easier and simpler. Also, in the end, we have described the future scope of AI and the harmful effects of using it. To get a good command of essay writing, students must practise CBSE Essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are basically software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard coding all possibilities (i.e. algorithmic steps) in software. Due to this, AI started showing promising solutions for industry and businesses as well as our daily lives.

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social life. This has influenced our daily routines, such as using mobile devices and active involvement on social media. AI systems are the most influential digital technologies. With AI systems, businesses are able to handle large data sets and provide speedy essential input to operations. Moreover, businesses are able to adapt to constant changes and are becoming more flexible.

By introducing Artificial Intelligence systems into devices, new business processes are opting for the automated process. A new paradigm emerges as a result of such intelligent automation, which now dictates not only how businesses operate but also who does the job. Many manufacturing sites can now operate fully automated with robots and without any human workers. Artificial Intelligence now brings unheard and unexpected innovations to the business world that many organizations will need to integrate to remain competitive and move further to lead the competitors.

Artificial Intelligence shapes our lives and social interactions through technological advancement. There are many AI applications which are specifically developed for providing better services to individuals, such as mobile phones, electronic gadgets, social media platforms etc. We are delegating our activities through intelligent applications, such as personal assistants, intelligent wearable devices and other applications. AI systems that operate household apparatus help us at home with cooking or cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is becoming a popular field in computer science as it has enhanced humans. Application areas of artificial intelligence are having a huge impact on various fields of life to solve complex problems in various areas such as education, engineering, business, medicine, weather forecasting etc. Many labourers’ work can be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, then it can ruin our life. We will not be able to do any work by ourselves and get lazy. Another disadvantage is that it cannot give a human-like feeling. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay writing skills. They can get the study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams, at BYJU’S.

CBSE Related Links

Leave a Comment Cancel reply

Your Mobile number and Email id will not be published. Required fields are marked *

Request OTP on Voice Call

Post My Comment

artificial intelligence is not dangerous essay

Register with BYJU'S & Download Free PDFs

Register with byju's & watch live videos.

artificial intelligence is not dangerous essay

Invest in news coverage you can trust.

Donate to PBS NewsHour by June 30 !

The potential dangers as artificial intelligence grows more sophisticated and popular

Geoff Bennett

Geoff Bennett Geoff Bennett

Courtney Norris

Courtney Norris Courtney Norris

Dorothy Hastings Dorothy Hastings

Leave your feedback

  • Copy URL https://www.pbs.org/newshour/show/the-potential-dangers-as-artificial-intelligence-grows-more-sophisticated-and-popular

Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors. But as AI grows more sophisticated and popular, the voices warning against the potential dangers are growing louder. Geoff Bennett discussed the concerns with Seth Dorbin of the Responsible AI Institute.

Read the Full Transcript

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Geoff Bennett:

Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors.

But as A.I. grows more sophisticated and popular, the voices warning against the potential dangers are growing louder. Italy has become the first Western nation to temporarily ban the A.I. tool ChatGPT over data privacy concerns, and more European countries are expected to follow suit.

Here at home, President Biden met yesterday with a team of science and tech advisers on the issue and said tech companies must ensure their A.I. products are safe for consumers.

We're joined now by Seth Dobrin, president of the Responsible A.I. Institute and former global chief artificial intelligence officer for IBM.

It's great to have you here.

Seth Dobrin, President, Responsible A.I. Institute:

Yes, thanks for having me, Geoff. I really appreciate it.

And most people, when they think of A.I., they're thinking of Siri on their cell phones. They're thinking of Alexa or the Google Assistant.

What kind of advanced A.I. technology are we talking about here? What can it do?

Seth Dobrin:

Yes, so what we're talking about here is primarily technology called large language models or foundational models.

These are very, very large models that are trained, essentially, on the whole of the Internet. And that's the promise, as well as the scary thing about them is that the Internet basically reflects human behavior, human norms, the good, the bad about us. And the A.I. is trained on that same information.

And so for, instance, OpenAI, which is the company that built ChatGPT, which most everyone in the world is aware of at this point…

There are a few who still aren't, but…

Yes, a few who still aren't, yes.

But it was trained on Reddit, right, which, from a content perspective, is really not where I would pick. But from how do you train a machine to understand how humans converse, it's great.

And so it's pulling the good and the bad from the Internet, and it does this in a way…

Because, we should say, Reddit is like a chat site.

Yes, yes, Reddit is a chat site. And you get all these bad conversations going on and things called subreddits. And so there's a lot of hate, there's a lot of misogyny, there's a lot of racism that's in the various subreddits, if you will.

And if you think about what it's ultimately trying — what it's ultimately doing, it's essentially — think of it as auto-complete, but on a lot of steroids, because all it's doing is, it's predicting what's going to happen next based on what you put into it.

Well, the concerns about the potential risks are so great that more than 1,000 tech leaders and academics wrote this letter recently, as you know, calling for a temporary halt of advanced A.I. development

And part of it reads this way: "Recent months have seeing A.I. labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

What is happening in the industry that is causing that kind of alarm?

So, I think — I think there is some concern, to be honest.

This technology was let out of the bag, it was put into the wild in a way that any human can use it in the form of a conversational interface ChatGPT. The same technology has been out available for A.I. engineers and data scientists, which are the professionals and that work in this field, for a number of years now.

But it's been in a — what's called a closed beta, meaning only approved people could get access to it. In that controlled environment, it was good, because OpenAI and others — OpenAI makes ChatGPT — and others were able to interact with it and learn and give them feedback, like things like, when the first one came out, you could put in what is Seth Dobrin's Social Security number, and it would give it to you, right?

Or my — what is every address Seth has ever lived at? And it would give it to you. It doesn't do that anymore. But these are the kinds of things that, in the closed environment, could be controlled.

Now, putting this out in the wild is — there's been lots of pick your own metaphor, right, your own nihilistic metaphor. It's like giving people — the world uranium and not teaching them how to build a nuclear reactor, or giving them a bioagent, and not teaching them about how to control it.

It's really that — can be that scary. But there are some things that companies can do and should do to get it under control.

So, I think if you look at what the E.U. is doing, so they have an A.I. regulation that's regulating outcomes. So anything that impacts health, wealth, or livelihood of a human should be regulated.

There's also — so, I'm president of the Responsible A.I. Institute. What we do is, we build — so the letter also calls for tools to assess these things. That's what we do that. We are a nonprofit, and we build tools that are align to global standards. So, some of your viewers have probably heard of ISO standards, or CE. You have a CE stamper or UL stamp on every lightbulb you ever look at.

We build standards for — we build a ways to align or conform to standards for A.I. And they're applicable to these types of A.I. as well. But what's important — and this gets to the heart of the letter as well — is, we don't try and understand what the model is doing. We measure the outcome, because, quite honestly, if you or I are getting a mortgage, we don't care if the model is biased.

What we care is, is the outcome biased, right? We don't necessarily need the model explained. We need to understand why a decision was made. And it's typically the interaction between the A.I. and the human that drives that, not just the A.I. and not just the human.

We have about 30 seconds left.

It strikes me that the industry is going to have to police itself, because this technology is advancing so quickly that governments can't keep pace with the legislation and the regulations required.

Yes, I mean, I think it's not much different than we saw with social media, right?

I mean, I think if you were to bring Sam Altman to Congress, probably get about as good responses as Mark Zuckerberg did, right? The congresspeople need to really educate themselves. If we, as citizens of the U.S. and of the world really think this is something that we want the governments to regulate, we need to make that a ballot box issue, and not some of these other things that we're voting on that I think are less impactful.

Seth Dobrin thanks so much for your insights and for coming in. It's good to see you.

Yes, thanks for having me, Geoff. Really appreciate it.

Listen to this Segment

Taiwan's President Tsai Ing-wen meets the U.S. Speaker of the House Kevin McCarthy, in Simi Valley, California

Watch the Full Episode

Geoff Bennett serves as co-anchor of PBS NewsHour. He also serves as an NBC News and MSNBC political contributor.

Courtney Norris is the deputy senior producer of national affairs for the NewsHour. She can be reached at [email protected] or on Twitter @courtneyknorris

Support Provided By: Learn more

Support PBS NewsHour:

NewsMatch

More Ways to Watch

  • PBS iPhone App
  • PBS iPad App

Educate your inbox

Subscribe to Here’s the Deal, our politics newsletter for analysis you won’t find anywhere else.

Thank you. Please check your inbox to confirm.

Cunard

More From Forbes

Ai is not an existential threat, but humans using ai are.

  • Share to Facebook
  • Share to Twitter
  • Share to Linkedin

Is AI on the verge of wiping out humanity? Or does the existential threat come from something — or ... [+] someone — else? Photo by LUIS ACOSTA/AFP via Getty Images

In the weeks following OpenAI’s launch of ChatGPT in November 2022, we all asked ourselves the same question: Is AI going to replace us? After a few months, business school professors , economists , and management consultants seemed to agree on an answer: AI won’t replace humans, but humans using AI will.

Frontier AI companies could hardly have come up with a more business-promoting slogan themselves. Now, thanks to the "Pearl Harbor moment" of ChatGPT, they didn’t have to. The panic at the highest levels of the big tech companies spread to the highest levels of society, research institutions, educational organizations and businesses. Soon everyone was telling each other that everything would be fine if only they started using AI now. As in right now!

Still, questions about the existential risks of AI continue to haunt the big tech companies. And this week, current and former employees of OpenAI and Google DeepMind published an open letter warning that loss of control over autonomous AI systems could potentially result in human extinction.

But is AI really on the verge of wiping out humanity? Or does the existential threat come from something — or someone — else?

To Be, Or Not To Be In A Computer Simulation

In his 2013 TEDx Talk , Founding Director of the Future of Humanity Institute at Oxford University, Nick Bostrom defined existential risk as “one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.” His definition of premature human extinction was "before reaching technological maturity."

Best High-Yield Savings Accounts Of 2024

Best 5% interest savings accounts of 2024.

Six years earlier, Bostrom identified death as one of humanity’s three biggest problems (the others were existential risk and that life isn’t usually as good as it could be). His understanding of the problems threatening humanity turned out to resonate with how tech entrepreneurs like Elon Musk think and talk about human existence: as something so intangible that it might as well be a computer simulation.

It makes sense that someone who believes that human existence comes down to simple mathematical probability would also believe that the things that threaten this existence must be understood and dealt with using mathematical modelling.

But speaking of probability, it is almost 100% certain that the greatest existential thinkers in the history of philosophy would disagree with Bostrom and Musk that human existence — and the threat against it — can be understood by calculating the risk of extinction. Or, for that matter, by weighing pros and cons of superintelligence .

Kings Of Wishful Thinking

While existential risk was not at the top of most C-suite agendas until Bostrom and others introduced the term in relation to AI, philosophers have been discussing what it means to be — and cease to be — human for centuries.

Yet neither Soeren Kierkegaard (1813-55), Friedrich Nietzsche (1844-1900), Martin Heidegger (1889-1976), Jean-Paul Sartre (1905-1980) nor others who are considered among history's greatest existential thinkers spent their time speculate about human extinction. Instead of worrying about humans becoming extinct in the future, they worried about humans forgetting what it means to be human now . As in right now.

Existential philosophers don’t think about death as a problem, but as an existential condition that reminds us to spend our time wisely. That said, they were painfully aware that the knowledge that we are going to die comes with an anxiety that can lead to wishful thinking that death can be delayed or avoided altogether. But it is indeed wishful thinking. Not because advanced technology will never be able to extend our lives potentially indefinitely — it’s not for existential philosophers to say — but because such an indefinite life will no longer be a human life.

To Exist, Or To Ex-ist, That Is The Question

Unlike Musk, existential philosophers don’t consider human existence intangible. On the contrary, they would say, it is our tangible — and transitory — bodies that enable us to ex-ist. With this unusual way of writing ex-istence, the German philosopher Martin Heidegger wanted to highlight the human ability to understand ourselves as something and someone different from something and someone else .

As humans we not only exist like everything else that lives and breathes on this earth, we also ex-ist in the sense that we ask ourselves who and what we are — and who and what we should be. To ignore this ex-istential feature of human existence is to think of ourselves and each other as simpler creatures than we are.

It might be tempting if you are in the business of creating machines that can imitate human thinking and behavior. But it might also cause you — and the people using your machines — to lose touch with what it means to be human.

The Greatest Threat To Humans Is Humans

Understanding ourselves as something and someone different from something and someone else is what makes us take responsibility for something and someone other than ourselves. As humans, it is never in our self-interest to serve only our self-interest. As the phenomenon of suicide proves, survival is never enough.

As individuals as well as humanity, we need a greater purpose than to avoid extinction. And we need each other to remind us of that. That’s why this article is titled "AI Is Not An Existential Threat, But Humans Using AI Are:" Because the pursuit of technological maturity risks leading to ex-istential immaturity. And not only in the people who develop the technology, but also in those who use it.

The greatest threat to humans is not technology, but humans who have forgotten that there is more to being human than what can be imitated and replaced by technology. Just as it is not for existential philosophers to say whether advanced technology will be able to extend our lives indefinitely, it should not be for math experts and tech founders to tell us how to understand and live our lives. Deciding what is worth living and striving for is a job for each and everyone of us. One that AI can never replace.

The best way to remember that is to stop seeing ourselves and our surroundings through the lens of technology and start seeing it through the eyes, body and ex-istence that comes with being human.

Afterall, there are greater things in life than reaching technological maturity. And forgetting that is a greater threat to humanity than extinction.

Pia Lauritzen

  • Editorial Standards
  • Reprints & Permissions

Join The Conversation

One Community. Many Voices. Create a free account to share your thoughts. 

Forbes Community Guidelines

Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.

In order to do so, please follow the posting rules in our site's  Terms of Service.   We've summarized some of those key rules below. Simply put, keep it civil.

Your post will be rejected if we notice that it seems to contain:

  • False or intentionally out-of-context or misleading information
  • Insults, profanity, incoherent, obscene or inflammatory language or threats of any kind
  • Attacks on the identity of other commenters or the article's author
  • Content that otherwise violates our site's  terms.

User accounts will be blocked if we notice or believe that users are engaged in:

  • Continuous attempts to re-post comments that have been previously moderated/rejected
  • Racist, sexist, homophobic or other discriminatory comments
  • Attempts or tactics that put the site security at risk
  • Actions that otherwise violate our site's  terms.

So, how can you be a power user?

  • Stay on topic and share your insights
  • Feel free to be clear and thoughtful to get your point across
  • ‘Like’ or ‘Dislike’ to show your point of view.
  • Protect your community.
  • Use the report tool to alert us when someone breaks the rules.

Thanks for reading our community guidelines. Please read the full list of posting rules found in our site's  Terms of Service.

IMAGES

  1. What is Artificial Intelligence Free Essay Example

    artificial intelligence is not dangerous essay

  2. Argumentative Essay On Artificial Intelligence Free Essay Example

    artificial intelligence is not dangerous essay

  3. Artificial Intelligence A Threat Or Not Computer Science Free Essay Example

    artificial intelligence is not dangerous essay

  4. Artificial Intelligence Essay

    artificial intelligence is not dangerous essay

  5. Is AI a threat to Humankind? Free Essay Example

    artificial intelligence is not dangerous essay

  6. Artificial Intelligence Essay: 500+ Words Essay for Students

    artificial intelligence is not dangerous essay

VIDEO

  1. Artificial intelligence- the death of creativity. CSS 2024 essay paper

  2. April 2, 2024

  3. Artificial intelligence... essay

  4. Artificial intelligence,essay

  5. How Artificial Intelligence is Dangerous for us

  6. ESSAY ON ARTIFICIAL INTELLIGENCE (AI) #artificialintelligence #englishessay #ai

COMMENTS

  1. Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not

    A 2023 survey of AI experts found that 36 percent fear that AI development may result in a "nuclear-level catastrophe.". Almost 28,000 people have signed on to an open letter written by the ...

  2. What Exactly Are the Dangers Posed by AI?

    Medium-Term Risk: Job Loss. Oren Etzioni, the founding chief executive of the Allen Institute for AI, a lab in Seattle, said "rote jobs" could be hurt by A.I. Kyle Johnson for The New York ...

  3. Artificial Intelligence: The Good, the Bad, and the Ugly

    The artificial intelligence (AI) we see today is the product of the field's journey from simple, brute force methodologies to complex, learning-based models that closely mimic the human brain's functionality. Early AI was effective for specific tasks like playing chess or Jeopardy!, but it was limited by the necessity of pre-programming every possible scenario.

  4. AI Is Not Actually an Existential Threat to Humanity, Scientists Say

    Dr George Montanez, AI expert from Harvey Mudd College highlights that "robots and AI systems do not need to be sentient to be dangerous; they just have to be effective tools in the hands of humans who desire to hurt others. That is a threat that exists today." Even without malicious intent, today's AI can be threatening.

  5. AI isn't dangerous, but human bias is

    Unmanaged AI is a mirror for human bias. One way that AI can cause harm is when algorithms reflect our human biases in the datasets that organizations collect. The effects of these biases can compound in the AI era, as the algorithms themselves continue to "learn" from the data. Let's imagine, for example, that a bank wants to predict ...

  6. Opinion

    June 30, 2023. In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. "Mitigating the risk of ...

  7. The Case Against AI Everything, Everywhere, All at Once

    Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today's attention, machine learning, has expanded beyond predicting our behavior to ...

  8. New report assesses progress and risks of artificial intelligence

    AI100 is an ongoing project hosted by the Stanford University Institute for Human-Centered Artificial Intelligence that aims to monitor the progress of AI and guide its future development. This new report, the second to be released by the AI100 project, assesses developments in AI between 2016 and 2021. "In the past five years, AI has made ...

  9. SQ10. What are the most pressing dangers of AI?

    Techno-Solutionism. One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool. 3 As we see more AI advances, the temptation to apply AI decision-making to all societal problems increases. But technology often creates larger problems in the process of solving smaller ones.

  10. The Risks of AI-Assisted Academic Writing

    Artificial intelligence (AI)-powered writing tools are becoming increasingly popular among researchers. AI tools can improve several important aspects of writing, such as readability, grammar, spelling, and tone, providing authors with a competitive edge when drafting grant proposals and academic articles.

  11. Opinion

    In an essay that was published last week, " How A.I. Will Change Democracy ," Schneier wrote: A.I. can engage with voters, conduct polls and fund-raise at a scale that humans cannot — for ...

  12. Is Artificial Intelligence Dangerous?: [Essay Example], 623 words

    Artificial Intelligence (AI) has been a topic of fascination and concern for decades. As technology continues to advance, AI's capabilities have grown exponentially, raising questions about its potential risks and benefits. The debate over whether artificial intelligence is dangerous is a complex and multifaceted one, with arguments on both sides.

  13. The true dangers of AI are closer than we think

    October 21, 2020. William Isaac began researching bias in predictive policing algorithms in 2016. David Vintiner. As long as humans have built machines, we've feared the day they could destroy ...

  14. The impact of artificial intelligence on human society and bioethics

    Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can up-load all information, data, and programmed to AI to function as a human being, it is still a machine and a tool. AI will always remain as AI without having authentic human feelings and the capacity to commiserate.

  15. The Pros And Cons Of Artificial Intelligence

    Reduces employment. We're on the fence about this one, but it's probably fair to include it because it's a common argument against the use of AI. Some uses of AI are unlikely to impact human ...

  16. 12 Dangers of Artificial Intelligence (AI)

    Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI. 1. Lack of AI Transparency and Explainability. AI and deep learning models can be difficult to understand, even for those that work directly with the technology.

  17. What is artificial intelligence? Your AI questions, answered.

    AI systems determine what you'll see in a Google search or in your Facebook Newsfeed. They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy ...

  18. Is Artificial Intelligence Good for Society? Top 3 Pros and Cons

    AI robots can collaborate with or replace humans for especially dangerous tasks. For example, 50% of construction companies that used drones to inspect roofs and other risky tasks saw improvements in safety. [ 14] [ 15] Artificial intelligence can also help humans be safer.

  19. Is Artificial Intelligence Dangerous? Essay

    Artificial intelligence is the theory and development of computer systems able to perform tasks that are normally require human intelligence, such as visual perception, speech recognition, decision making and translation between languages. It truly has lots of advantages in the advancement of our daily living.

  20. 500+ Words Essay on Artificial Intelligence

    Artificial Intelligence Essay. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. ... But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, then it can ruin our life. We will not be able to do any ...

  21. The potential dangers as artificial intelligence grows more ...

    Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors. But as AI grows more ...

  22. Opinion: The risks of AI could be catastrophic. We should empower ...

    Companies operating in the field of AGI — artificial general intelligence, which broadly speaking refers to the theoretical AI research attempting to create software with human-like intelligence ...

  23. Is Artificial Intelligence Dangerous?

    AI could be the best thing ever but if we don't continue to develop its technology because we are scare who knows what we are going to be missing, AI could very well be the things that saves this Earth from us, the humans who are destroying it. Work Cited. "AI's Coming of Age.". Artificial-Intelligence,

  24. AI Is Not An Existential Threat, But Humans Using AI Are

    After a few months, business school professors, economists, and management consultants seemed to agree on an answer: AI won't replace humans, but humans using AI will. Frontier AI companies ...