
Is Artificial Intelligence Dangerous?


Published: Sep 16, 2023


Table of contents

  • The Promise of AI
  • The Perceived Dangers of AI
  • Responsible AI Development

The Promise of AI

  • Medical Advancements: AI can assist in diagnosing diseases, analyzing medical data, and developing personalized treatment plans, potentially saving lives and improving healthcare outcomes.
  • Autonomous Vehicles: Self-driving cars, powered by AI, have the potential to reduce accidents and make transportation more accessible and efficient.
  • Environmental Conservation: AI can be used to monitor and address environmental issues, such as climate change, deforestation, and wildlife preservation.
  • Efficiency and Automation: AI-driven automation can streamline processes in various industries, increasing productivity and reducing costs.

The Perceived Dangers of AI

  • Job Displacement
  • Bias and Discrimination
  • Lack of Accountability
  • Security Risks

Responsible AI Development

  • Transparency and Accountability
  • Fairness and Bias Mitigation
  • Ethical Frameworks
  • Cybersecurity Measures

This essay delves into the complexities surrounding artificial intelligence (AI), exploring both its transformative benefits and potential dangers. From enhancing healthcare and transportation to posing risks in job displacement and security, it critically assesses AI’s dual aspects. Emphasizing responsible development, it advocates for transparency, fairness, and robust cybersecurity measures.


12 Risks and Dangers of Artificial Intelligence (AI)


As AI grows more sophisticated and widespread, the voices warning against the potential dangers of artificial intelligence grow louder.

“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening,” said Geoffrey Hinton, known as the “Godfather of AI” for his foundational work on machine learning and neural network algorithms. In 2023, Hinton left his position at Google so that he could “talk about the dangers of AI,” noting a part of him even regrets his life’s work.

The renowned computer scientist isn’t alone in his concerns.

Tesla and SpaceX founder Elon Musk, along with more than 1,000 other tech leaders, urged a pause on large AI experiments in a 2023 open letter, citing that the technology can “pose profound risks to society and humanity.”

Dangers of Artificial Intelligence

  • Automation-spurred job loss
  • Privacy violations
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Market volatility
  • Weapons automation
  • Uncontrollable self-aware AI

Whether it’s the increasing automation of certain jobs , gender and racially biased algorithms or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of.

12 Dangers of AI

Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.

Is AI Dangerous?

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

1. Lack of AI Transparency and Explainability 

AI and deep learning models can be difficult to understand, even for people who work directly with the technology. This leads to a lack of transparency around how and why AI reaches its conclusions: it is often unclear what data an algorithm uses, or why it may make biased or unsafe decisions. These concerns have given rise to the field of explainable AI, but there is still a long way to go before transparent AI systems become common practice.
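To make the transparency problem concrete, here is a minimal sketch (not drawn from this article) of the kind of post-hoc probing that explainable-AI tools support: train a classifier on invented loan data, then use the open-source SHAP library to attribute each individual decision to its input features. The dataset, feature names and model choice are all assumptions made purely for illustration.

```python
# Illustrative sketch only: probing a "black box" model with SHAP,
# one common explainable-AI technique. Assumes the shap and scikit-learn
# packages are installed; the data and feature names are invented.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "debt_ratio": rng.uniform(0, 1, 500),
    "age": rng.integers(18, 80, 500),
})
# Synthetic label: approval loosely driven by income and debt ratio.
y = ((X["income"] / 100_000 - X["debt_ratio"]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP assigns each feature a contribution to each individual prediction,
# turning an opaque score into a per-decision explanation.
explainer = shap.TreeExplainer(model)
print(explainer.shap_values(X.iloc[:5]))
```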

2. Job Losses Due to AI Automation

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing , manufacturing and healthcare . By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey . Goldman Sachs even states 300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.”

As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create 97 million new jobs by 2025 , many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t upskill their workforces .

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.”

Even professions that require graduate degrees and additional post-college training aren’t immune to AI displacement.

As technology strategist Chris Messina has pointed out, fields like law and accounting are primed for an AI takeover. In fact, Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for “a massive shakeup.”

“Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure,” he said in regards to the legal field. “It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”


3. Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election. 

TikTok, which is just one example of a social media platform that relies on AI algorithms , fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from misleading information. 
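As a rough illustration of that mechanism — and emphatically not TikTok’s actual system — the toy ranker below builds a “taste vector” from the videos a user has already watched and fills the feed with the unseen items most similar to it. Every embedding and ID here is synthetic; the point is only to show how a narrow viewing history can feed back into an ever-narrower feed.

```python
# Illustrative sketch only: a toy content-based feed ranker.
import numpy as np

rng = np.random.default_rng(1)
item_embeddings = rng.normal(size=(1000, 32))            # 1,000 candidate videos
item_embeddings /= np.linalg.norm(item_embeddings, axis=1, keepdims=True)

watched = [3, 17, 42]                                     # hypothetical watch history
taste = item_embeddings[watched].mean(axis=0)             # the user's "taste vector"
taste /= np.linalg.norm(taste)

scores = item_embeddings @ taste                          # cosine similarity to taste
scores[watched] = -np.inf                                 # don't re-recommend watched items
feed = np.argsort(scores)[::-1][:10]
print("next 10 videos:", feed)
```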

Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers and deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos and audio clips, or to replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it can be nearly impossible to distinguish between credible and faulty news.

“No one knows what’s real and what’s not,” Ford said. “So it really leads to a situation where you literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence... That’s going to be a huge issue.”


4. Social Surveillance With AI Technology

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is China’s use of facial recognition technology in offices, schools and other venues. Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views. 

Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which disproportionately impact Black communities . Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, How much does it invade Western countries, democracies, and what constraints do we put on it?”
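The feedback loop at the heart of this concern can be shown with a deliberately simplified simulation — hypothetical numbers, not a model of any real police department: patrols go to the neighborhoods with the most recorded arrests, and new arrests are only recorded where patrols go, so an early statistical accident becomes self-reinforcing even though the true crime rate is identical everywhere.

```python
# Illustrative sketch only: a toy predictive-policing feedback loop.
import numpy as np

rng = np.random.default_rng(2)
n_areas, true_rate = 10, 0.05                      # identical true crime rate everywhere
arrests = rng.poisson(5, n_areas).astype(float)    # noisy historical arrest records

for year in range(5):
    patrolled = np.argsort(arrests)[-3:]           # "predictive" allocation: top 3 areas
    new = np.zeros(n_areas)
    # Crime is only recorded where patrols are sent, so the data the system
    # learns from is generated by its own past decisions.
    new[patrolled] = rng.poisson(1000 * true_rate, size=3)
    arrests += new
    print(f"year {year}: patrolled areas {sorted(patrolled.tolist())}")
```

Run it and the same few areas stay “high crime” indefinitely, purely because they started with a handful more recorded arrests.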


5. Lack of Data Privacy Using AI Tools

If you’ve played around with an AI chatbot or tried out an AI face filter online, your data is being collected — but where is it going and how is it being used? AI systems often collect personal data to customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data given to an AI system may not even be secure from other users: a bug in ChatGPT in 2023 “allowed some users to see titles from another active user’s chat history.” And while some U.S. laws protect personal information in certain cases, there is no explicit federal law that protects citizens from the data privacy harms posed by AI.

6. Biases Due to AI

Various forms of AI bias are detrimental too. Speaking to the New York Times , Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race . In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased .

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

The limited experiences of AI creators may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a chatbot impersonating notorious figures in human history. Developers and businesses should exercise greater care to avoid recreating powerful biases and prejudices that put minority populations at risk.  
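One concrete, if simplified, check teams can run for this kind of disparity is a demographic-parity audit: compare a model’s positive-decision rates across groups and compute the so-called four-fifths ratio. The sketch below uses fabricated hiring decisions purely for illustration.

```python
# Illustrative sketch only: auditing decision rates across groups
# on fabricated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=2000, p=[0.7, 0.3]),
    "score": rng.random(2000),                 # stand-in for a model's output
})
# Inject a skew so the audit has something to find:
# group A is "hired" 30% of the time, group B only 18%.
df["hired"] = (df["score"] < np.where(df["group"] == "A", 0.30, 0.18)).astype(int)

rates = df.groupby("group")["hired"].mean()
print(rates)
print("disparate impact ratio:", round(rates.min() / rates.max(), 2))
# A ratio well below 0.8 is a common red flag that the system
# treats groups very differently.
```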

7. Socioeconomic Inequality as a Result of AI 

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting . The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.  

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Workers who perform more manual, repetitive tasks have experienced wage declines as high as 70 percent because of automation, with office and desk workers remaining largely untouched in AI’s early stages. However, the increase in generative AI use is already affecting office jobs , making for a wide range of roles that may be more vulnerable to wage or job loss than others.

Sweeping claims that AI has somehow overcome social boundaries or created more jobs fail to paint a complete picture of its effects. It’s crucial to account for differences based on race, class and other categories. Otherwise, discerning how AI and automation benefit certain individuals and groups at the expense of others becomes more difficult.

8. Weakening Ethics and Goodwill Because of AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential pitfalls. In a 2023 Vatican meeting and in his message for the 2024 World Day of Peace , Pope Francis called for nations to create and adopt a binding international treaty that regulates the development and use of AI.

Pope Francis warned that AI could be misused to “create statements that at first glance appear plausible but are unfounded or betray biases.” He stressed how this could bolster campaigns of disinformation, distrust in communications media, interference in elections and more — ultimately increasing the risk of “fueling conflicts and hindering peace.”

The rapid rise of generative AI tools gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. Plus, biased AI could be used to determine whether an individual is suitable for a job, mortgage, social assistance or political asylum, producing possible injustices and discrimination, noted Pope Francis. 

“The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms,” he said. “And that capacity cannot be reduced to programming a machine.”


9. Autonomous Weapons Powered By AI

As is too often the case, technological advancements have been harnessed for the purpose of warfare. When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter , over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in AI-fueled autonomous weapons. 

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems , which locate and destroy targets on their own while abiding by few regulations. Because of the proliferation of potent and complex weapons, some of the world’s most powerful nations have given in to anxieties and contributed to a tech cold war .  

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types of cyber attacks , so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute armageddon.  

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions. Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made.   

“The mentality is, ‘If we can do it, we should try it; let’s see what happens. And if we can make money off it, we’ll do a whole bunch of it,’” Messina said. “But that’s not unique to technology. That’s been happening forever.”

10. Financial Crises Brought About By AI Algorithms

The financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for the next major financial crisis.

While AI algorithms aren’t clouded by human judgment or emotion, they also don’t take into account context, the interconnectedness of markets or factors like human trust and fear. These algorithms make thousands of trades at a blistering pace, often with the goal of selling a few seconds later for a small profit. A sudden sell-off of thousands of positions can scare other investors into doing the same, leading to flash crashes and extreme market volatility.

Instances like the 2010 Flash Crash and the Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.  
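A toy simulation — not a model of any real market or trading system — makes the cascade mechanism easier to see: hundreds of sell-threshold bots watch the same price, a small shock trips the most nervous ones, and their selling pushes the price low enough to trip the next tier.

```python
# Illustrative sketch only: a cascade of threshold-based sellers.
import numpy as np

rng = np.random.default_rng(4)
price = 100.0
thresholds = rng.uniform(90, 99.5, size=500)   # each bot's "sell if price < x" level
sold = np.zeros(500, dtype=bool)

price -= 1.0                                   # a modest initial shock
for step in range(20):
    triggered = (~sold) & (price < thresholds)
    if not triggered.any():
        break
    sold |= triggered
    price -= 0.02 * triggered.sum()            # each sale nudges the price lower
    print(f"step {step}: {int(triggered.sum()):3d} bots sold, price = {price:.2f}")
```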

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they understand their AI algorithms and how those algorithms make decisions. Companies should consider whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.

11. Loss of Human Influence

An overreliance on AI technology could result in the loss of human influence — and a decline in human functioning — in some parts of society. Using AI in healthcare could reduce human empathy and reasoning, for instance. And applying generative AI to creative endeavors could diminish human creativity and emotional expression. Interacting with AI systems too much could even erode peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question whether it might hold back overall human intelligence, abilities and sense of community.

12. Uncontrollable Self-Aware AI

There also comes a worry that AI will progress in intelligence so rapidly that it will become sentient , and act beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already been occurring, with one popular account being from a former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with artificial general intelligence , and eventually artificial superintelligence , cries to completely stop these developments continue to rise .


How to Mitigate the Risks of AI

AI still has numerous benefits , like organizing health data and powering self-driving cars. To get the most out of this promising technology, though, some argue that plenty of regulation is necessary.

“There’s a serious danger that we’ll get [AI systems] smarter than us fairly soon and that these things might get bad motives and take control,” Hinton told NPR . “This isn’t just a science fiction problem. This is a serious problem that’s probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now.”

Develop Legal Regulations

AI regulation has been a main focus for dozens of countries, and now the U.S. and European Union are creating more clear-cut measures to manage the rising sophistication of artificial intelligence. In fact, the White House Office of Science and Technology Policy (OSTP) published the Blueprint for an AI Bill of Rights in 2022, a document meant to guide responsible AI use and development. Additionally, President Joe Biden issued an executive order in 2023 requiring federal agencies to develop new rules and guidelines for AI safety and security.

Although legal regulations mean certain AI technologies could eventually be banned, that doesn’t prevent societies from exploring the field.

Ford argues that AI is essential for countries looking to innovate and keep up with the rest of the world.

“You regulate the way AI is used, but you don’t hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous,” Ford said. “We decide where we want AI and where we don’t; where it’s acceptable and where it’s not. And different countries are going to make different choices.”


Establish Organizational AI Standards and Discussions

On a company level, there are many steps businesses can take when integrating AI into their operations. Organizations can develop processes for monitoring algorithms, compiling high-quality data and explaining the findings of AI algorithms. Leaders could even make AI a part of their company culture and routine business discussions, establishing standards to determine acceptable AI technologies.

Guide Tech With Humanities Perspectives

When it comes to society as a whole, though, there should be a greater push for tech to embrace the diverse perspectives of the humanities. Stanford University AI researchers Fei-Fei Li and John Etchemendy made this argument in a 2019 blog post that calls for national and global leadership in regulating artificial intelligence:

“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS).”

Balancing high-tech innovation with human-centered thinking is an ideal method for producing responsible AI technology and ensuring the future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes. 

“I think we can talk about all these risks, and they’re very real,” Ford said. “But AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face.”

Frequently Asked Questions

What is AI?

AI (artificial intelligence) describes a machine’s ability to perform tasks and mimic intelligence at a level similar to that of humans.

Is AI dangerous?

AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.

Can AI cause human extinction?

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

What happens if AI becomes self-aware?

Self-aware AI has yet to be created, so it is not fully known what will happen if or when this development occurs.

Some suggest self-aware AI may become a helpful counterpart to humans in everyday living, while others suggest that it may act beyond human control and purposely harm humans.


Major new report explains the risks and rewards of artificial intelligence

AI has begun to permeate every aspect of our lives. Image: Unsplash/Hitesh Choudhary

By Toby Walsh and Liz Sonenberg

  • A new report has just been released, highlighting the changes in AI over the last 5 years, and predicted future trends.
  • It was co-written by people across the world, with backgrounds in computer science, engineering, law, political science, policy, sociology and economics.
  • In the last 5 years, AI has become an increasing part of our lives, revolutionizing a number of industries, but is still not free from risk.

A major new report on the state of artificial intelligence (AI) has just been released . Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view.

From language generation and molecular medicine to disinformation and algorithmic bias, AI has begun to permeate every aspect of our lives.

The report argues that we are at an inflection point where researchers and governments must think and act carefully to contain the risks AI presents and make the most of its benefits.

A century-long study of AI

The report comes out of the AI100 project , which aims to study and anticipate the effects of AI rippling out through our lives over the course of the next 100 years.

AI100 produces a new report every five years: the first was published in 2016, and this is the second. As two points define a line, this second report lets us see the direction AI is taking us in.

One of us (Liz Sonenberg) is a member of the standing committee overseeing the AI100 project, and the other (Toby Walsh) was on the study panel that wrote this particular report. Members of the panel came from across the world, with backgrounds in computer science, engineering, law, political science, policy, sociology and economics.

AI100 standing committee chair Peter Stone takes a shot against a robot goalie at RoboCup 2019 in Sydney.

The promises and perils of AI are becoming real

The report highlights the remarkable progress made in AI over the past five years. AI is leaving the laboratory and has entered our lives, having a “real-world impact on people, institutions, and culture”. Read the news on any given day and you’re likely to find multiple stories about some new advance in AI or some new use of AI.

For example, in natural language processing (NLP), computers can now analyse and even generate realistic human language. To demonstrate, we asked OpenAI’s GPT-3 system, one of the largest neural networks ever built, to summarise the AI100 report for you. It did a pretty good job, even if the summary confronts our sense of self by being written in the first person:

In the coming decade, I expect that AI will play an increasingly prominent role in the lives of people everywhere. AI-infused services will become more common, and AI will become increasingly embedded in the daily lives of people across the world.

I believe that this will bring with it great economic and societal benefits, but that it will also require us to address the many challenges to ensure that the benefits are broadly shared and that people are not marginalised by these new technologies.
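For readers curious about the mechanics, the snippet below is a minimal sketch of how such a summary might be requested from a GPT-3-era model through OpenAI’s legacy completions API (the pre-1.0 openai Python library). The model name, prompt and placeholder text are assumptions for illustration, not the authors’ actual code.

```python
# Illustrative sketch only: a legacy OpenAI completions request
# (openai-python < 1.0). Model name and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

report_excerpt = "..."            # text of the AI100 report would go here

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"Summarise the following report in three sentences:\n\n{report_excerpt}",
    max_tokens=200,
    temperature=0.3,
)
print(response["choices"][0]["text"].strip())
```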

A key insight of AI research is that it is easier to build things than to understand why they work. However, defining what success looks like for an AI application is not straightforward.

For example, the AI systems that are used in healthcare to analyse symptoms, recommend diagnoses, or choose treatments are often far better than anything that could be built by a human, but their success is hard to quantify.

As a second example of the recent and remarkable progress in AI, consider the latest breakthrough from Google’s DeepMind. AlphaFold is an AI program that provides a huge step forward in our ability to predict how proteins fold.

This will likely lead to major advances in life sciences and medicine, accelerating efforts to understand the building blocks of life and enabling quicker and more sophisticated drug discovery. Most of the planet now knows to their cost how the unique shape of the spike proteins in the SARS-CoV-2 virus is key to its ability to invade our cells, and also to the vaccines developed to combat its deadly progress.

The AI100 report argues that worries about super-intelligent machines and wide-scale job loss from automation are still premature, requiring AI that is far more capable than available today. The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence.

Once again, it’s easy to find in the news real-life stories of risks and threats to our democratic discourse and mental health posed by AI-powered tools. For instance, Facebook uses machine learning to sort its news feed and give each of its 2 billion users a unique but often inflammatory view of the world.

Algorithmic bias in action: ‘depixelising’ software makes a photo of former US president Barack Obama appear ethnically white.


The time to act is now

It’s clear we’re at an inflection point: we need to think seriously and urgently about the downsides and risks the increasing application of AI is revealing. The ever-improving capabilities of AI are a double-edged sword. Harms may be intentional, like deepfake videos, or unintended, like algorithms that reinforce racial and other biases.

AI research has traditionally been undertaken by computer and cognitive scientists. But the challenges being raised by AI today are not just technical. All areas of human inquiry, and especially the social sciences, need to be included in a broad conversation about the future of the field. Minimising negative impacts on society and enhancing the positives requires consideration from across academia and with societal input.

Governments also have a crucial role to play in shaping the development and application of AI. Indeed, governments around the world have begun to consider and address the opportunities and challenges posed by AI. But they remain behind the curve.

A greater investment of time and resources is needed to meet the challenges posed by the rapidly evolving technologies of AI and associated fields. In addition to regulation, governments also need to educate. In an AI-enabled world, our citizens, from the youngest to the oldest, need to be literate in these new digital technologies.

At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries.

AI will have failed if it harms or devalues the very people we are trying to help.



July 12, 2023

AI Is an Existential Threat—Just Not the Way You Think

Some fear that artificial intelligence will threaten humanity’s survival. But the existential risk is more philosophical than apocalyptic

By Nir Eisikovits & The Conversation US

AI isn’t likely to enslave humanity, but it could take over many aspects of our lives. Image: krishna dev/Alamy Stock Photo

The following essay is reprinted with permission from The Conversation , an online publication covering the latest research.

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp  increase in anxiety about AI . For the past few months, executives and AI safety researchers have been offering predictions, dubbed “ P(doom) ,” about the probability that AI will bring about a large-scale catastrophe.


Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released  a one-sentence statement : “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI:  Geoffrey Hinton  and  Yoshua Bengio .

You might ask how such existential fears are supposed to play out. One famous scenario is the “ paper clip maximizer ” thought experiment articulated by Oxford philosopher  Nick Bostrom . The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.

A  less resource-intensive variation  has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.

Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs  enslaving or destroying the human race .

Actual harm

In the past few years, my colleagues and I at  UMass Boston’s Applied Ethics Center  have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are  overblown and misdirected .

Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic  Bill Browder  by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from  high-tech heists  to  ordinary scams .

AI decision-making systems that  offer loan approval and hiring recommendations  carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.

These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.

Not in the same league

The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost  7 million deaths worldwide , brought on a  massive and continuing mental health crisis  and created  economic challenges , including chronic supply chain shortages and runaway inflation.

Nuclear weapons probably killed  more than 200,000 people  in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile crisis in 1962. They have also  changed the calculations of national leaders  on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.

AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is  far from being able to decide on and then plan out  the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

What it means to be human

Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.

For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are  being automated and farmed out to algorithms . As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to  reduce that kind of serendipity  and replace it with planning and prediction.

Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students  how to think critically .

Not dead but diminished

So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “ The Hollow Men ”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”

This article was originally published on The Conversation . Read the original article .

MIT Technology Review


The true dangers of AI are closer than we think

Forget superintelligent AI: algorithms are already creating real harm. The good news: the fight back has begun.

By Karen Hao


As long as humans have built machines, we’ve feared the day they could destroy us. Stephen Hawking famously warned that AI could spell an end to civilization. But to many AI researchers, these conversations feel unmoored. It’s not that they don’t fear AI running amok—it’s that they see it already happening, just not in the ways most people would expect. 

AI is now screening job candidates, diagnosing disease, and identifying criminal suspects. But instead of making these decisions more efficient or fair, it’s often perpetuating the biases of the humans on whose decisions it was trained. 

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also co-chairs the Fairness, Accountability, and Transparency conference—the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development—as well as the solutions.

Q: Should we be worried about superintelligent AI?

A: I want to shift the question. The threats overlap, whether it’s predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.

There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we’ve seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale—in areas like predictive policing, risk assessments, hiring, etc. It’s clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile its own history with its aspirations? We’re still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.

The second one would be achieving demonstrable social benefit. Up to this point there are still few pieces of empirical evidence that validate that AI technologies will achieve the broad-based social benefit that we aspire to. 

Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability?

Q: How do we overcome these risks and challenges?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you’re thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.

The second one is accelerating the development of the sociotechnical tools to actually do this work. We don’t have a whole lot of tools. 

The last one is providing more funding and training for researchers and practitioners—particularly researchers and practitioners of color—to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to not just have a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

Q: How far have AI researchers come in thinking about these challenges, and how far do they still have to go?

A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: “Hey, there are some potential harms that could be done through these systems.” But they largely had not interacted at all. They existed in unique silos.

Since then, we’ve just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: “Okay, this is not just a hypothetical risk. It is a real threat.” So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.

Q: So are you optimistic about achieving broad-based beneficial AI?

A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracies across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than white male ones]. There’s the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That’s a model of how we could imagine engaging with other advances in AI.

But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.

Q: What do you dream about when you dream about the future of AI?

A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that’d be very empowering. And that’s a nontrivial thing to want from this technology. How do you know it’s empowering? How do you know it’s socially beneficial? 

I went to graduate school in Michigan during the Flint water crisis. When the initial incidences of lead pipes emerged, the records they had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don’t get basic services and resources.


The Case Against AI Everything, Everywhere, All at Once


I cringe at being called “Mother of the Cloud,” but having been part of the development and implementation of the internet and networking industry—as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx—I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is “Authoritarian Intelligence.” The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just the productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chatbots, but a broad field of study. One implementation capturing today’s attention, machine learning, has expanded beyond predicting our behavior to generating content—called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry—and sometimes fakery—over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability as “... a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Read More: AI’s Long-term Risks Shouldn’t Make Us Miss Present Risks

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language co-opted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust—with few viable solutions. We do not yet fully understand risks to our society at large, such as the level and pace of job loss, environmental impacts, and whether we want opaque systems making decisions for us.

Deeper risks call into question the very essence of our humanity. When we prioritize “intelligence” to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.

Human well-being and dignity should be our North Star—with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet, power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once is not inevitable if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction: simply not giving in to the “Authoritarian Intelligence” emanating out of Silicon Valley, or to our collective groupthink.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.


Artificial intelligence has psychological impacts our brains might not be ready for, expert warns


These days we can have a reasoned conversation with a humanoid robot, get fooled by a deep fake celebrity, and have our heart broken by a romantic chatbot.

While artificial intelligence (AI) promises to make life easier, developments like these can also mess with our minds, says Joel Pearson, a cognitive neuroscientist at the University of New South Wales.

We fear killer robots and out-of-control self-driving cars, but for Professor Pearson the psychological effects of AI are more significant, even if they're harder to picture in our mind's eye.

The technology's impact on everything from education to work and relationships is massively uncertain – something humans are not generally comfortable with, Professor Pearson tells RN's All in the Mind.

"Our brains have evolved to fear uncertainty."

What will be left for humans to do as AI improves? Will we feel like we have no purpose and meaning — and will we suffer the inevitable depression that comes with that?

There's already cause for concern about the impact of AI on our mental wellbeing, Professor Pearson says.

"AI is already affecting us and changing our mental health in ways that are really bad for us."

Humanoids and chatbots

One of the pitfalls of AI is our tendency to project human characteristics onto the non-human agents we interact with, Professor Pearson says.

So when ChatGPT communicates like a human we say it's "intelligent" – especially when its words are articulated by a natural-sounding voice from a robot in the shape of a human.

Shift this dynamic to AI companions on your phone and we see just how vulnerable humans can be.

You might have heard about a chatbot called Replika, which was marketed as "always on your side" … and "always ready to chat when you need an empathetic friend".

Subscribers paid for the chatbot to have "erotic role-play" features that included flirting and sexting, but when its maker toned down these elements, people who had fallen in love with their chatbot freaked out, Professor Pearson says.

The chatbot was no longer responding to their sexual advances.

"People were saying that their digital partner, their boyfriend or girlfriend in Replika was no longer themselves … and they were devastated by it."

The chatbots also brought out a darker side in some Replika clients.

"Mainly males were bragging … about how they could have this sort of abusive relationship – 'I had this Replika girl and she was like a slave. I would tell her, I'm going to switch her off and kill her … and she would beg me not to switch off'," Professor Pearson says.

It was reminiscent of what happens in the dystopian science fiction series Westworld, where people let out their urges on artificial humans, he says.


Professor Pearson says there's a lack of research on the implications of this aspect of human AI relationships.

"What does it do to us? … If I treat my AI like a slave and I'm rude to it and abusive, how does that then change how I relate to humans? Does that carry over?"

While an AI partner might appear to make an ideal companion, it's a poor model for human relationships.

"Part of being in a relationship with humans is that there are compromises … The other person will challenge you, you will grow, you will have to face things together.

"And if you don't have those challenges and people picking you up on things, you get whatever you want, whenever you want. It's an addictive thing that is probably not healthy."

The danger of deepfakes

Chatbots messing with our relationships is one thing, but deepfake images and videos can alter our very sense of what's real and what's fake.

"Not only can they look real, you can actually now do a real-time deepfake," Professor Pearson says.

And here too, AI is being weaponised against women.

"I think 96 per cent of deepfakes so far have been non-consensual pornography."

Who can forget the deepfakes featuring Taylor Swift earlier this year?

One sexually explicit image of the pop star was viewed 47 million times before the account was suspended.

Once we are exposed to fake information, evidence suggests it can have a permanent impact, even if it is later revealed to be false, Professor Pearson says.

"You can't really forget about it. That information sticks with you."

And he says this is likely to be more the case with videos because they engage more senses and tug at our emotions.

While die-hard Swifties may forget about the deepfakes, others might be more influenced by them.

"People who know less of her will be a lot more vulnerable."


This was because they would have a less developed mental model about the pop star that was more susceptible to influence from deepfakes, he says.

"[The deepfakes are] going to be patching into our long-term memory in ways that [make us] confused whether they're real and not. And the catch is, even when we're told they're not real, those effects stick."

The disturbing impacts of AI on our relationships and our grasp on reality itself could present a particular risk to teenagers.

Think of a school student subjected to images of themselves run through "nudifying apps" that use AI to undress a fully clothed person.

"While their brain is still developing … it's going to do pretty nasty things to their mental health," Professor Pearson says.

These dangers are exacerbated by young people spending less time face-to-face with others, which is linked to a decline in empathy and emotional intelligence, he says.

Cutting through the tech noise

Professor Pearson argues AI is presenting society with unprecedented challenges and the technology should not be dismissed as just another "tool".

"You can't compare it to tools. The industrial revolution, the printing press, TVs, computers… This is radically different in ways that we don't fully understand."

He's calling for more research into the psychological impact of AI.

"I don't want to make people depressed and anxious about AI," Professor Pearson says.

"There's a huge number of positives, but I've been pushing the psychological part of that because I don't see anyone else talking about it."

In the face of these changes, Professor Pearson suggests focusing on our humanity.

"[Figure out] what are the core essentials of being human, and how do you want to create your own life in ways that might be independent from all this tech uncertainty."

"Is that going for a walk in nature, or is it just spending time with physical humans and loved ones?

"I think over the next decade we're all going to be faced with soul-searching journeys like that. Why not start thinking about it now?"

Professor Pearson says he's trying to apply these principles to his own life, without turning his back on technological advances.

"I'm trying to figure out how I can use AI to help me do more of the things I really enjoy."

"My hope .. is that a lot of the things I've talked about won't be as catastrophic as I'm making out.

"But I think raising the alarm now can avoid that pain and suffering later on and that's what I want to do."



Essay Artificial Intelligence is Dangerous to Humanity

This essay is about the dangers inherent in the advancement of Artificial Intelligence (AI). It highlights the potential threats posed by AI, including the development of autonomous weaponry, algorithmic bias, and the looming prospect of superintelligent AI. The essay emphasizes the need for caution and ethical considerations in the deployment of AI technologies to mitigate these risks and safeguard humanity’s future.


In the grand tapestry of human progress, the threads of Artificial Intelligence (AI) weave a complex and often perilous pattern. While hailed as a beacon of innovation, AI harbors within its circuits a darker potential, one that threatens to cast a long shadow over humanity’s future. As we marvel at the capabilities of AI to revolutionize industries and streamline processes, we must also confront the inherent dangers it poses to our collective well-being.

At the forefront of these concerns is the specter of autonomous weaponry.

In the crucible of conflict, AI-controlled drones and weapons systems emerge as formidable adversaries, devoid of the moral compass that guides human decision-making. The allure of unmanned warfare, with its promises of precision and efficiency, masks the stark reality of a battlefield where the rules of engagement are dictated not by human conscience but by lines of code. The unchecked proliferation of autonomous weapons threatens to plunge us into a new era of warfare, one where the horrors of conflict are amplified by the cold logic of AI.

Yet, the dangers of AI extend beyond the battlefield and into the very fabric of our society. In the labyrinthine corridors of algorithmic decision-making, biases lurk, hidden beneath layers of data and computation. Despite our best intentions, AI systems have been shown to perpetuate and even exacerbate existing societal inequalities, amplifying the voices of the powerful while silencing the marginalized. From hiring algorithms that favor the privileged to predictive policing models that target minority communities, the insidious influence of bias threatens to erode the foundations of justice and equality.

Moreover, as we peer into the murky depths of the future, the emergence of superintelligent AI looms large on the horizon. In the crucible of innovation, we dance ever closer to the precipice of a technological singularity, where AI surpasses human intelligence and ushers in a new era of uncertainty. While some herald this moment as a triumph of human ingenuity, others warn of the existential risks it poses to our species. As we relinquish control to machines with intellects beyond our comprehension, we must grapple with the profound implications of a future where humanity plays second fiddle to its own creations.

In the final reckoning, the march of Artificial Intelligence presents us with a Faustian bargain: a promise of progress tempered by the specter of peril. As we navigate the uncertain waters of technological innovation, we must heed the lessons of history and proceed with caution. For in the tangled web of AI lies both the promise of a brighter future and the shadow of our own undoing.



Artificial Intelligence Essay

500+ Words Essay on Artificial Intelligence

Artificial intelligence (AI) has entered our daily lives through mobile devices and the Internet. Governments and businesses increasingly use AI tools and techniques to solve business problems and improve many business processes, especially online ones. Such developments bring new realities to social life that have not been experienced before. This essay on Artificial Intelligence will help students understand the various advantages of using AI and how it has made our lives easier and simpler. At the end, we also describe the future scope of AI and the harmful effects of using it. To gain a good command of essay writing, students should practise CBSE essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are basically software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard coding all possibilities (i.e. algorithmic steps) in software. Because of this, AI has started to show promising solutions for industry and business, as well as for our daily lives.
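
To make the contrast between hard-coded steps and learned behaviour concrete, here is a minimal, hypothetical sketch in Python. It is not part of the original essay; the spam-filter task, the example messages, and the labels are invented purely for illustration. It compares a hand-written rule with a small scikit-learn model that induces its own decision rule from labelled examples.

```python
# Illustrative sketch only: a hand-coded rule versus a learned model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hard-coded approach: every condition must be anticipated by the programmer.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Machine-learning approach: the decision rule is induced from labelled examples.
messages = [
    "Free money if you reply now",   # spam
    "Meeting moved to 3pm",          # not spam
    "You are a lottery winner",      # spam
    "Lunch tomorrow?",               # not spam
]
labels = [1, 0, 1, 0]                # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)   # turn text into word-count features
model = MultinomialNB().fit(X, labels)   # learn word statistics for each class

test = vectorizer.transform(["Claim your free money today"])
print(model.predict(test))               # classifies a message it has never seen
```

The point of the sketch is only that the second approach learns its behaviour from data rather than having every case spelled out in advance, which is the sense in which the essay describes AI systems as avoiding hard-coded algorithmic steps.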

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social life. They have changed our daily routines, from the use of mobile devices to active involvement on social media. Among these digital technologies, AI systems are the most influential. With AI systems, businesses can handle large data sets and feed essential insights into their operations quickly. Moreover, businesses can adapt to constant change and are becoming more flexible.

By introducing Artificial Intelligence systems into devices, many business processes are being automated. A new paradigm emerges from such intelligent automation, one that dictates not only how businesses operate but also who does the job. Many manufacturing sites can now run fully automated, with robots and without any human workers. Artificial Intelligence brings unheard-of and unexpected innovations to the business world, which many organizations will need to integrate to remain competitive, or even to lead their markets.

Artificial Intelligence also shapes our lives and social interactions through technological advancement. Many AI applications are developed specifically to provide better services to individuals through mobile phones, electronic gadgets, social media platforms and so on. We delegate everyday activities to intelligent applications such as personal assistants and smart wearable devices. AI systems that operate household appliances help us at home with cooking and cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is becoming a popular field in computer science because it extends what humans can do. Its applications are having a huge impact by solving complex problems in areas such as education, engineering, business, medicine and weather forecasting. The work of many labourers can be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, it can ruin our lives; we will no longer do any work ourselves and will become lazy. Another disadvantage is that machines cannot offer human-like feeling or empathy. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay writing skills. They can get the study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams, at BYJU’S.


Biden-Harris Administration Announces Key AI Actions 180 Days Following President Biden’s Landmark Executive Order

Six months ago, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). Since then, agencies all across government have taken vital steps to manage AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.

Today, federal agencies reported that they completed all of the 180-day actions in the E.O. on schedule, following their recent successes completing each 90-day, 120-day, and 150-day action on time. Agencies also progressed on other work tasked by the E.O. over longer timeframes. Actions that agencies reported today as complete include the following.

Managing Risks to Safety and Security: Over 180 days, the Executive Order directed agencies to address a broad range of AI’s safety and security risks, including risks related to dangerous biological materials, critical infrastructure, and software vulnerabilities. To mitigate these and other threats to safety, agencies have:

  • Established a framework for nucleic acid synthesis screening to help prevent the misuse of AI for engineering dangerous biological materials. This work complements in-depth study by the Department of Homeland Security (DHS), Department of Energy (DOE) and Office of Science and Technology Policy on AI’s potential to be misused for this purpose, as well as a DHS report that recommended mitigations for the misuse of AI to exacerbate chemical and biological threats. In parallel, the Department of Commerce has worked to engage the private sector to develop technical guidance to facilitate implementation. Starting 180 days after the framework is announced, agencies will require that grantees obtain synthetic nucleic acids from vendors that screen.
  • Released for public comment draft documents on managing generative AI risks, securely developing generative AI systems and dual-use foundation models, expanding international standards development in AI, and reducing the risks posed by AI-generated content. When finalized, these documents by the National Institute of Standards and Technology (NIST) will provide additional guidance that builds on NIST’s AI Risk Management Framework, which offered individuals, organizations, and society a framework to manage AI risks and has been widely adopted both in the U.S. and globally.
  • Developed the first AI safety and security guidelines for critical infrastructure owners and operators. These guidelines are informed by the completed work of nine agencies to assess AI risks across all sixteen critical infrastructure sectors.
  • Launched the AI Safety and Security Board to advise the Secretary of Homeland Security, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. The Board’s 22 inaugural members include representatives from a range of sectors, including software and hardware company executives, critical infrastructure operators, public officials, the civil rights community, and academia. 
  • Piloted new AI tools for identifying vulnerabilities in vital government software systems. The Department of Defense (DoD) made progress on a pilot for AI that can find and address vulnerabilities in software used for national security and military purposes. Complementary to DoD’s efforts, DHS piloted different tools to identify and close vulnerabilities in other critical government software systems that Americans rely on every hour of every day.

Standing up for Workers, Consumers, and Civil Rights

The Executive Order directed bold steps to mitigate other risks from AI—including risks to workers, to consumers, and to Americans’ civil rights—and ensure that AI’s development and deployment benefits all Americans. Today, agencies reported that they have:

  • Developed bedrock principles and practices for employers and developers to build and deploy AI safely and in ways that empower workers. Agencies all across government are now starting work to establish these practices as requirements, where appropriate and authorized by law, for employers that receive federal funding.
  • Released guidance to help federal contractors and employers comply with worker protection laws as they deploy AI in the workplace. The Department of Labor (DOL) developed a guide for federal contractors and subcontractors to answer questions and share promising practices that clarify federal contractors’ legal obligations, promote equal employment opportunity, and mitigate the potentially harmful impacts of AI in employment decisions. DOL also provided guidance regarding the application of the Fair Labor Standards Act and other federal labor standards as employers increasingly use AI and other automated technologies in the workplace.
  • Released resources for job seekers, workers, and tech vendors and creators on how AI use could violate employment discrimination laws. The Equal Employment Opportunity Commission’s resources clarify that existing laws apply to the use of AI and other new technologies in employment just as they apply to other employment practices.
  • Issued guidance on AI’s nondiscriminatory use in the housing sector. In two guidance documents, the Department of Housing and Urban Development affirmed that existing prohibitions against discrimination apply to AI’s use for tenant screening and advertisement of housing opportunities, and it explained how deployers of AI tools can comply with these obligations.
  • Published guidance and principles that set guardrails for the responsible and equitable use of AI in administering public benefits programs. The Department of Agriculture’s guidance explains how State, local, Tribal, and territorial governments should manage risks for uses of AI and automated systems in benefits programs such as SNAP. The Department of Health and Human Services (HHS) released a plan with guidelines on similar topics for benefits programs it oversees. Both agencies’ documents prescribe actions that align with the Office of Management and Budget’s policies, published last month, for federal agencies to manage risks in their own use of AI and harness AI’s benefits.
  • Announced a final rule clarifying that nondiscrimination requirements in health programs and activities continue to apply to the use of AI, clinical algorithms, predictive analytics, and other tools. Specifically, the rule applies the nondiscrimination principles under Section 1557 of the Affordable Care Act to the use of patient care decision support tools in clinical care, and it requires those covered by the rule to take steps to identify and mitigate discrimination when they use AI and other forms of decision support tools for care.
  • Developed a strategy for ensuring the safety and effectiveness of AI deployed in the health care sector. The strategy outlines rigorous frameworks for AI testing and evaluation, and it outlines future actions for HHS to promote responsible AI development and deployment.

Harnessing AI for Good

President Biden’s Executive Order also directed work to seize AI’s enormous promise, including by advancing AI’s use for scientific research, deepening collaboration with the private sector, and piloting uses of AI. Over the past 180 days, agencies have done the following:

  • Announced DOE funding opportunities to support the application of AI for science, including energy-efficient AI algorithms and hardware.
  • Prepared convenings for the next several months with utilities, clean energy developers, data center owners and operators, and regulators in localities experiencing large load growth. Today, DOE announced new actions to assess the potential energy opportunities and challenges of AI, accelerate deployment of clean energy, and advance AI innovation to manage the growing energy demand of AI.
  • Launched pilots, partnerships, and new AI tools to address energy challenges and advance clean energy. For example, DOE is piloting AI tools to streamline permitting processes and improve siting for clean energy infrastructure, and it has developed other powerful AI tools with applications at the intersection of energy, science, and security. Today, DOE also published a report outlining opportunities AI brings to advance the clean energy economy and modernize the electric grid.
  • Initiated a sustained effort to analyze the potential risks that deployment of AI may pose to the grid. DOE has started the process of convening energy stakeholders and technical experts over the coming months to collaboratively assess potential risks to the grid, as well as ways in which AI could potentially strengthen grid resilience and our ability to respond to threats—building off a new public assessment.
  • Authored a report on AI’s role in advancing scientific research to help tackle major societal challenges, written by the President’s Council of Advisors on Science and Technology.

Bringing AI Talent into Government

The AI and Tech Talent Task Force has made substantial progress on hiring through the AI Talent Surge. Since President Biden signed the E.O., federal agencies have hired over 150 AI and AI-enabling professionals and, along with the tech talent programs, are on track to hire hundreds by Summer 2024. Individuals hired thus far are already working on critical AI missions, such as informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.

  • The General Services Administration has onboarded a new cohort of Presidential Innovation Fellows (PIF) and also announced its first-ever PIF AI cohort starting this summer.
  • DHS has launched the DHS AI Corps, which will hire 50 AI professionals to build safe, responsible, and trustworthy AI to improve service delivery and homeland security.
  • The Office of Personnel Management has issued guidance on skills-based hiring to increase access to federal AI roles for individuals with non-traditional academic backgrounds.
  • For more on the AI Talent Surge’s progress, read its report to the President. To explore opportunities, visit https://ai.gov/apply. The table below summarizes many of the activities that federal agencies have completed in response to the Executive Order.

[Table omitted: summary of activities federal agencies have completed in response to the Executive Order.]


8 Daily Newspapers Sue OpenAI and Microsoft Over A.I.

The suit, which accuses the tech companies of copyright infringement, adds to the fight over the online data used to power artificial intelligence.


By Katie Robertson

Eight daily newspapers owned by Alden Global Capital sued OpenAI and Microsoft on Tuesday, accusing the tech companies of illegally using news articles to power their A.I. chatbots.

The publications — The New York Daily News, The Chicago Tribune, The Orlando Sentinel, The Sun Sentinel of Florida, The San Jose Mercury News, The Denver Post, The Orange County Register and The St. Paul Pioneer Press — filed the complaint in federal court in the U.S. Southern District of New York. All are owned by MediaNews Group or Tribune Publishing, subsidiaries of Alden, the country’s second-largest newspaper operator.

In the complaint, the publications accuse OpenAI and Microsoft of using millions of copyrighted articles without permission to train and feed their generative A.I. products, including ChatGPT and Microsoft Copilot. The lawsuit does not demand specific monetary damages, but it asks for a jury trial and says the publishers are owed compensation for the use of their content.

The complaint said the chatbots regularly surfaced the entire text of articles behind subscription paywalls for users and often did not prominently link back to the source. This, it said, reduced the need for readers to pay subscriptions to support local newspapers and deprived the publishers of revenue both from subscriptions and from licensing their content elsewhere.

“We’ve spent billions of dollars gathering information and reporting news at our publications, and we can’t allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense,” Frank Pine, the executive editor overseeing Alden’s newspapers, said in a statement.

An OpenAI spokeswoman said in a statement that the company was “not previously aware” of Alden’s concerns but was engaged in partnerships and conversations with many news organizations to explore opportunities.

“Along with our news partners, we see immense potential for A.I. tools like ChatGPT to deepen publishers’ relationships with readers and enhance the news experience,” she said.

A Microsoft spokesman declined to comment.

The lawsuit adds to a fight over the use of data to power generative A.I. Online information, including articles, Wikipedia posts and other data, has increasingly become the lifeblood of the booming industry. A recent investigation by The New York Times found that numerous tech companies, in their push to keep pace, had ignored policies and debated skirting copyright law in an effort to obtain as much data as possible to train chatbots.

Publishers have paid attention to the use of their content. In December, The Times sued OpenAI and Microsoft, accusing them of using copyrighted articles to train chatbots that then competed with the paper as a source of news and information. Microsoft has sought to have parts of that lawsuit dismissed. It also argued that The Times had not shown actual harm and that the large language models that drive chatbots had not replaced the market for news articles. OpenAI has filed a similar argument.

Other publications have sought to make deals with the tech companies for compensation. The Financial Times, which is owned by the Japanese company Nikkei, said on Monday that it had reached a deal with OpenAI to allow it to use Financial Times content to train its AI chatbots. The Financial Times did not disclose the terms of the deal.

OpenAI has also struck agreements with Axel Springer, the German publishing giant that owns Business Insider and Politico; The Associated Press; and Le Monde, the French news outlet.

The lawsuit from the Alden newspapers, filed by the law firm Rothwell, Figg, Ernst & Manbeck, accuses OpenAI and Microsoft of copyright infringement, unfair competition by misappropriation and trademark dilution. The newspapers say the chatbots falsely credited the publications for inaccurate or misleading reporting, “tarnishing the newspapers’ reputations and spreading dangerous information.”

One example included ChatGPT’s response to a query about which infant lounger The Chicago Tribune recommended. ChatGPT, according to the complaint, responded that The Tribune recommended the Boppy Newborn Lounger, a product that was recalled after it was linked to infant deaths and that the newspaper had never recommended.

In a separate incident, an A.I. chatbot claimed that The Denver Post had published research indicating that smoking could potentially cure asthma, a complete fabrication, the complaint said.

“This issue is not just a business problem for a handful of newspapers or the newspaper industry at large,” the lawsuit said. “It is a critical issue for civic life in America.”

Katie Robertson covers the media industry for The Times.


AI child pornography is already here and it’s devastating

Experts say deepfake images also have real victims as child sexual abuse materials proliferate online.


By Lois M. Collins

Sexual exploitation of children over the internet is a major problem that’s getting worse courtesy of artificial intelligence, which can aid production of child sexual abuse material. Meanwhile, the tools to deal with an AI influx of child pornography are already inadequate.

That’s according to a new report by the Stanford Internet Observatory Cyber Policy Center and experts interviewed by the Deseret News, who all conclude the problem isn’t a prediction of future harm, but something that exists and is poised to explode unless effective countersteps are taken.

“It’s no longer an imminent threat. This is something that is happening,” Tori Rousay, corporate advocacy program manager and analyst at the National Center on Sexual Exploitation, told Deseret News.

Generative AI can be used to create sexually exploitive pictures of children. The National Center for Missing and Exploited Children said it has received more than 82 million reports of child sex abuse material online and more than 19,000 victims have been identified by law enforcement. It’s not clear how many of the images are AI-manipulated from photos of real child victims, since the technology can be used to make images depicting children performing sex acts or being abused. The problem is so serious that at the end of March the FBI issued a public service announcement to remind would-be perpetrators that even images of children created with generative AI are illegal and will be prosecuted.

The CyberTipline, managed by the National Center for Missing and Exploited Children under congressional authorization, takes tens of millions of tips a year from digital platforms like Facebook and Snapchat, then forwards them to law enforcement, where fewer than 1 in 10 result in prosecution for various reasons, as The Washington Post recently reported.

Sometimes, though, the tips help bust up networks involved in sharing child sexual abuse material, which is referred to simply as CSAM by law enforcement, child advocates and others.

The Stanford report calls the CyberTipline “enormously valuable,” leading to the rescue of children and criminal charges against offenders. But it calls law enforcement “constrained” when it comes to prioritizing the reports so they can be investigated. There are numerous challenges. Reports vary in quality and in the information provided. The National Center for Missing and Exploited Children has struggled to update its technology in ways that help law enforcement triage tips. And the center’s hands are somewhat tied when working with platforms to find child pornographic images. In 2016, a federal appeals court ruled the center can accept offered reports, but “may not tell platforms what to look for or report, as that risks turning them into government agents, too, converting what once were voluntary private searches into warrantless government searches” that the courts can toss out if someone is charged.

So platforms decide whether to police themselves to prevent sexual exploitation of children. If they do, the Stanford study further notes that “another federal appeals court held in 2021 that the government must get a warrant before opening a reported file unless the platform viewed that file before submitting the report.”

AI is making it worse.

“If those limitations aren’t addressed soon, the authors warn, the system could become unworkable as the latest AI image generators unleash a deluge of sexual imagery of virtual children that is increasingly ‘indistinguishable from real photos of children,’” the Post reported.

“These cracks are going to become chasms in a world in which AI is generating brand-new CSAM,” Alex Stamos, a Stanford University cybersecurity expert who co-wrote the report, told the Post. He’s even more worried about potential for AI child sex abuse material to “bury the actual sex abuse content and divert resources from children who need rescued.”

Rousay bristles at the idea that because the images are generated by AI, there’s no harm since the kids pictured aren’t real. For one thing, “there’s no way to 100% prove that there’s not actual imagery of abuse of anybody in that AI generator” without having the actual data that trained the AI, she said. And by law, if you can’t tell the difference between the child in a generated image and actual abuse imagery, it’s prosecutable. “It’s still considered to be CSAM because it looks just like a child.”

Rousay isn’t the only one who sees any “it’s not real” indifference as misplaced.

Why child sexual abuse material is always dangerous

“Artificially generated pornographic images are harmful for many reasons, not least the unauthorized use of images of real people to create such ‘fake’ images. AI isn’t simply ‘made up,’ but is rather the technological curating and repurposing of large datasets of images and text — many images shared nonconsensually. So the distinction between ‘real’ and ‘fake’ is a false one, and difficult to decipher,” said Monica J. Casper, sociology professor, special assistant on gender-based violence to the president of San Diego State University and chair of the school’s Blue Ribbon Task Force on Gender-Based Violence.

She told Deseret News, “Beyond this issue is the worldwide problem of child sexual abuse, and the ways that any online images can perpetuate violence and abuse. Children can never consent to sexual activity, though laws vary nationally and internationally, with some setting the age of consent anywhere from 14 to 18. Proliferating explicit and nonconsensual images will make it even harder for abusers to be found and prosecuted.”

The problem is global and so is its recognition. In February, the United Nations special rapporteur on sale and sexual exploitation of children, Mama Fatima Singhateh, issued a statement that read in part: “The boom in generative AI and eXtended Reality is constantly evolving and facilitating the harmful production and distribution of child sexual abuse and exploitation in the digital dimension, with new exploitative activities such as the deployment of end-to-end encryption without built-in safety mechanisms, computer-generated imagery including deepfakes and deepnudes, and on-demand live streaming and eXtended Reality of child sexual abuse and exploitation material.”

She said the volume of child sexual abuse material reported since 2019 has increased 87%, based on WeProtect Global Alliance’s Global Threat Assessment 2023.

Singhateh called for a “core multilateral instrument dedicated exclusively to eradicating child sexual abuse and exploitation online, addressing the complexity of these phenomena and taking a step forward to protect children in the digital dimension.”

AI-generated images of child sexual abuse are harmful on multiple levels and help normalize the sexualization of minors, Rousay said.

Nor does a deepfake image mean a real child won’t be victimized. “What we do know from CSAM offenders is they have a propensity to hands-on abuse. They are more likely to be hands-on offenders” who harm children, she said.

That’s a worry many experts share. “We often talk about addiction to harmful and self-destructive habits beginning with some sort of a ‘gateway.’ To me, value assertion aside, enabling AI to exploit children is complicit in providing a gateway to a devastatingly harmful addiction,” said Salt Lake City area therapist and mental health consultant Jenny Howe. “Why do we have limits and rules on substances? To help provide a boundary which in turn protects vulnerable people. This would open up an avenue to harm, not detract from child exploitation,” she said of AI-generated images of children being sexually exploited.

Struggling to tame AI

Rousay said everyone concerned about child sexual abuse and exploitation, including law enforcement, child advocates, lawmakers and others, is trying to figure out how to handle the new threat that the recent dramatic proliferation of AI creates. Experts struggle with terminology for AI-generated images and with how the issue should be framed. Child sexual abuse material creation also takes many forms, including abuse images of real children and images produced by feeding photos of real abuse into a generator so that the child is no longer identifiable. Sometimes innocuous pictures of children are turned into exploitive images by combining them with photos of adults committing sex acts, resulting in what Rousay calls “photorealistic CSAM.”

“It doesn’t have to be actual images of abuse, but you can still create a child that is in explicit situations or does not have any clothes on based on what is in the AI generator,” Rousay said.

Differences in state laws also create challenges. “It’s not a mess,” said Rousay, “but everyone’s trying to figure this out. Trying their best. It’s very, very new.”

AI further muddies the issue of age. It’s obvious when an image portrays a 5-year-old. A 16- or 17-year-old is a minor, too, and their sexual exploitation is just as illegal, but it’s easier to claim the portrayal is of an adult, said Rousay.

The hope is we can find ways to prosecute, she added. While technology has evolved, bringing increased access to child sexual abuse material, experts believe that platforms and others have an obligation to step up and help combat what obviously amounts to child sexual exploitation enabled by technology.

Will Congress act?

That AI generates child sexual abuse material images is well known. In September, 50 state-level officials sent a letter to Congress asking lawmakers to act immediately to tackle AI’s role in child sexual exploitation.

Congress is pondering what to do. Among legislative action being considered:

  • The Kids Online Safety Act , proposed in 2022, would require digital platforms to “exercise reasonable care” to protect children, including reining in features that could make depression, bullying, sexual exploitation and harassment worse.
  • Altering Section 230 liability protection for online platforms under the 1996 Communications Decency Act. The act says digital platforms can’t be sued as publishers of content. As PBS reported, “Politicians on both sides of the aisle have argued, for different reasons, that Twitter, Facebook and other social media platforms have abused that protection and should lose their immunity — or at least have to earn it by satisfying requirements set by the government.”
  • The REPORT Act , which focuses on child sexual abuse material, passed the Senate by unanimous consent, but the House has not acted. It amends federal provisions regarding reporting of suspected child sexual exploitation and abuse offenses. REPORT stands for Revising Existing Procedures on Reporting via Technology.

The Stanford report has its own call to action for Congress, saying funding for the CyberTipline should be increased and rules clarified so tech companies can report child sexual abuse material without bringing liability on themselves. It also says laws are needed to deal with AI-generated child sexual abuse material. Meanwhile, the report adds that tech companies must commit resources to finding and reporting child sexual abuse material. The report recommends the National Center for Missing and Exploited Children invest in better technology, as well.

The final ask is that law enforcement agencies train staff to properly investigate child sexual abuse material reports.

What others can do

Sexual exploitation using technology reaches into different communities and age groups.

Rousay cites the example of middle school and high school boys downloading pictures of female classmates from social media and using AI to strip them of clothes “as a joke.” But it’s not funny, and is abuse that can be prosecuted, she said. “The girls are still victimized and their lives turned upside down. It’s very traumatizing and impactful.”

The apps used were designed to do other things, but were readily available from an app store. Such apps should be restricted, she said.

Parents and other adults need to help children understand how harmful sexual exploitation and abuse is, including that generated by AI, according to Rousay. “That would be really beneficial because I think there is some kind of dissonance, like this is not real because it’s not an actual child. But you don’t know the abuse that went into the generator; you can’t tell me that those images were taken legally with consent and legal age.”

She said talking about what’s known about child sexual abuse material offenders would help, too, including their tendency to hands-on offenses.

Child sexual abuse material is something victims live with forever, Rousay said. “We know that as adults the disclosure rate is poor because there is a stigma, I guess, of talking to people about your experience.” Beyond that, pornographic images can be shared years after the fact “and it’s really hard to get it taken down. Plus, you’re basically asking the victim to go and find their image and get it from the platform. It’s horrible.”

Images may linger online forever.


When AI Decides Who Lives and Dies

The Israeli military’s algorithmic targeting has created dangerous new precedents.


Investigative journalism published in April by Israeli media outlet Local Call (and its English version, +972 Magazine) shows that the Israeli military has established a mass assassination program of unprecedented size, blending algorithmic targeting with a high tolerance for bystander deaths and injuries.

The investigation reveals a huge expansion of Israel’s previous targeted killing practices, and it goes a long way toward explaining how and why the Israel Defense Forces (IDF) could kill so many Palestinians while still claiming to adhere to international humanitarian law. It also represents a dangerous new horizon in human-machine interaction in conflict—a trend that’s not limited to Israel.

Israel has a long history of using targeted killings. During the violent years of the Second Intifada (2000-2005), it became institutionalized as a military practice, but operations were relatively infrequent and often involved the use of special munitions or strikes that targeted only people in vehicles to limit damage to bystanders.

But since the Hamas attack on Oct. 7, 2023, the IDF has shifted gears. It has discarded the old process of careful target selection of mid-to-high-ranking militant commanders. Instead, it has built on ongoing advancements in artificial intelligence (AI) tools, including for locating targets. The new system automatically sifts through huge amounts of raw data to identify probable targets and hand their names to human analysts to do with what they will—and in most cases, it seems, those human analysts recommend an airstrike.

The new process, according to the investigation by Local Call and +972 Magazine, works like this: An AI-driven system called Lavender has tracked the names of nearly every person in Gaza, and it combines a wide range of intelligence inputs—from video feeds and intercepted chat messages to social media data and simple social network analysis—to assess the probability that an individual is a combatant for Hamas or another Palestinian militant group. It was up to the IDF to determine the rate of error that it was willing to tolerate in accepting targets flagged by Lavender, and for much of the war, that threshold has apparently been 10 percent.

Targets that met or exceeded that threshold would be passed on to operations teams after a human analyst spent an estimated 20 seconds reviewing them. Often this review involved only checking whether a given name was that of a man (on the assumption that women are not combatants). Strikes on the 10 percent of false positives—comprising, for example, people with names similar to those of Hamas members or people sharing phones with family members identified as Hamas members—were deemed an acceptable error under wartime conditions.
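The scale implied by these figures is easy to miss. The snippet below is a purely illustrative sketch, not a reconstruction of the actual system: it shows how a tolerated 10 percent error rate translates into absolute numbers of misidentified people once a machine-generated target list grows large. The list size used here is a hypothetical figure on the order of the 30,000 to 40,000 fighters estimated later in this piece.

```python
# Illustrative arithmetic only -- not the IDF's system. It shows how a tolerated
# error rate on an algorithmically generated target list scales into absolute
# numbers of misidentified people. The 10 percent tolerance is the figure from
# the Local Call / +972 reporting; the list size is a hypothetical assumption.

def expected_false_positives(flagged_targets: int, tolerated_error_rate: float) -> float:
    """Expected number of people wrongly flagged at a given error tolerance."""
    return flagged_targets * tolerated_error_rate

flagged_targets = 35_000     # hypothetical list size, on the order of Hamas's estimated forces
tolerated_error_rate = 0.10  # the roughly 10 percent threshold described in the reporting

print(f"{expected_false_positives(flagged_targets, tolerated_error_rate):,.0f} people flagged in error")
# -> 3,500 people flagged in error at that scale and tolerance
```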

A second system, called Where’s Dad, determines whether targets are at their homes. Local Call reported that the IDF prefers to strike targets at their homes because it is much easier to find them there than it is while they engage the IDF in battle. The families and neighbors of those possible Hamas members are viewed as insignificant collateral damage, and many of these strikes have so far been directed at what one of the Israeli intelligence officers interviewed called “unimportant people”—junior Hamas members who are seen as legitimate targets because they are combatants but not of great strategic significance. This appears to have especially been the case during the early crescendo of bombardment at the outset of the war, after which the focus shifted towards somewhat more senior targets “so as not to waste bombs”.

One lesson from this revelation addresses the question of whether Israel’s tactics in Gaza are genocidal. Genocidal acts can include efforts to bring about mass death through deliberately induced famine or the wholesale destruction of the infrastructure necessary to support future community life, and some observers have claimed that both are evident in Gaza. But the clearest example of genocidal conduct is opening fire on civilians with the intention of wiping them out en masse. Despite evident incitement to genocide by Israeli officials not linked to the IDF’s chain of command, the way that the IDF has selected and struck targets has remained opaque.

Local Call and +972 Magazine have shown that the IDF may be criminally negligent in its willingness to strike targets when the risk of bystanders dying is very high, but because the targets selected by Lavender are ostensibly combatants, the IDF’s airstrikes are not intended to exterminate a civilian population. They have followed the so-called operational logic of targeted killing even if their execution has resembled saturation bombing in its effects.

This matters to experts in international law and military ethics because of the doctrine of double effect, which permits foreseeable but unintended harms if the intended act does not depend on those harms occurring, such as in the case of an airstrike against a legitimate target that would happen whether or not there were bystanders. But in the case of the Israel-Hamas war, most lawyers and ethicists—and apparently some number of IDF officers—see these strikes as failing to meet any reasonable standard of proportionality while stretching the notion of discrimination beyond reasonable interpretations. In other words, they may still be war crimes.

Scholars and practitioners have discussed “human-machine teaming” as a way to conceptualize the growing centrality of interaction between AI-powered systems and their operators during military actions. Rather than autonomous “killer robots,” human-machine teaming envisions the next generation of combatants as systems that distribute agency between human and machine decision-makers. What emerges is not The Terminator, but a constellation of tools brought together by algorithms and placed in the hands of people who still exercise judgment over their use.

Algorithmic targeting is in widespread use in the Chinese province of Xinjiang, where the Chinese government employs something similar as a means of identifying suspected dissidents among the Uyghur population. In both Xinjiang and the occupied Palestinian territories, the algorithms that incriminate individuals depend on a wealth of data inputs that are unavailable outside of zones saturated with sensors and subject to massive collection efforts.

Ukraine also uses AI-powered analysis to identify vulnerabilities along the vast front line of battle, where possible Russian military targets are more plentiful than Ukrainian supplies of bombs, drones, and artillery shells. But it does so in the face of some level of skepticism from military intelligence personnel, who worry that this stifles operational creativity and thoughtfulness—two crucial weapons that Ukraine wields in its David-versus-Goliath struggle against Russia.

During its “war on terror,” the United States’ “signature strikes” employed a more primitive form of algorithmic target selection, with pilots determining when to strike based on computer-assisted assessments of suspicious behavior on the ground. Notably, this practice quickly became controversial for its high rates of bystander deaths.

But Israel’s use of Lavender, Where’s Dad, and other previously exposed algorithmic targeting systems—such as the Gospel—shows how human-machine teaming can become a recipe for strategic and moral disaster. Local Call and +972 published testimonies from a range of intelligence officers suggesting growing discomfort, at all levels of the IDF’s chain of command, with the readiness of commanders to strike targets with no apparent regard for bystanders.

Israel’s policies violate emerging norms of responsible AI use. They combine an emotional atmosphere of emergency and fury within the IDF, a deterioration in operational discipline, and a readiness to outsource regulatory compliance to a machine in the name of efficiency. Together, these factors show how an algorithmic system can become an “unaccountability machine,” allowing the IDF to transform military norms not through any specific set of decisions, but by systematically attributing new, unrestrained actions to a seemingly objective computer.

How did this happen? Israel’s political leadership assigned the IDF an impossible goal: the total destruction of Hamas. At the outset of the war, Hamas had an estimated 30,000 to 40,000 fighters. After almost two decades of control in the Gaza Strip, Hamas was everywhere. On Oct. 7, Hamas fighters posed a terrible threat to any IDF ground force entering Gaza unless their numbers could be depleted and their battalions scattered or forced underground.

The fact that Lavender could generate a nearly endless list of targets—and that other supporting systems could link them to buildings that could be struck rapidly from the air and recommend appropriate munitions—gave the IDF an apparent means of clearing the way for an eventual ground operation. Nearly half of reported Palestinian fatalities occurred during the initial six weeks of heavy bombing. Human-machine teaming, in this case, produced a replicable tactical solution to a strategic problem.

The IDF overcame the main obstacle to this so-called solution—the vast number of innocent civilians densely packed into the small territory of the Gaza Strip—by simply deciding not to care all that much whom it killed alongside its targets. In strikes against senior Hamas commanders, according to the Local Call and +972 investigation, those interviewed said the IDF decided it was permissible to kill as many as “hundreds” of bystanders for each commander killed; for junior Hamas fighters, that accepted number began at 15 bystanders but shifted slightly down and up during various phases of fighting.

Moreover, as targets were frequently struck in homes where unknown numbers of people were sheltering, entire families were wiped out. These family annihilations likely grew in number as additional relatives or unrelated people joined the original residents to shelter temporarily, and it does not seem that the IDF’s intelligence personnel typically attempted to discover this and update their operational decisions accordingly.

Although Israel often presents the IDF as being in exemplary conformance with liberal and Western norms, the way that the IDF has used AI in Gaza, according to Local Call and +972, stands in stark contrast to those same norms. In U.S. military doctrine, all strikes must strive to keep bystander deaths below the determined “non-combatant casualty cut-off value” (NCV).

NCVs for most U.S. operations have been very low, and historically, so have Israel’s—at least when it comes to targeted killing. For example, when Hamas commander Salah Shehadeh was killed along with 14 others in an Israeli airstrike in 2002, then-IDF Chief of Staff Moshe Yaalon said that he would not have allowed the operation to happen if he’d known it would kill that many others. In interviews over the years, other Israeli officials involved in the operation similarly stated that the high number of bystander deaths was a major error.

Local Call and +972 revealed that, by contrast, the assassination of Hamas battalion commander Wissam Farhat during the current Israel-Hamas war had an NCV of more than 100 people—and that the IDF anticipated that it would kill around that many.
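Stated in the simplest terms, the NCV functions as a proportionality threshold: a strike is supposed to proceed only if anticipated bystander deaths fall below the cut-off. The sketch below is a minimal, hypothetical illustration of that logic using the figures reported above; it is not actual doctrine or software, and the function name and exact values are assumptions made for illustration.

```python
# Minimal sketch of the "non-combatant casualty cut-off value" (NCV) check
# described above. Function name and exact values are illustrative assumptions.

def strike_permitted(expected_bystander_deaths: int, ncv: int) -> bool:
    """A strike passes the check only if anticipated bystander deaths stay below the NCV."""
    return expected_bystander_deaths < ncv

# Under a historically low cut-off, the 2002 Shehadeh strike (14 bystanders killed)
# would not have passed; under an NCV above 100, the reported Farhat strike does.
print(strike_permitted(expected_bystander_deaths=14, ncv=5))     # False
print(strike_permitted(expected_bystander_deaths=100, ncv=101))  # True
```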

Israeli military officers interviewed by Local Call and +972 explained that this shift was made possible by the supposed objectivity of AI and a mindset that emphasized action over judgment. The IDF has embraced a wartime logic under which it accepts a higher rate of “errors” in exchange for tactical effectiveness as its commanders seek vengeance against Hamas. In successive operations in 2008, 2012, and 2014—famously termed “mowing the grass” by former Israeli Prime Minister Naftali Bennett—Israel has periodically dropped nonprecision munitions in significant numbers on buildings and tunnel systems deemed to be Hamas targets. Combatant-to-noncombatant fatalities in these wars ranged between 1-to-1 and 1-to-3—a commonly estimated figure for the current war.

An Israeli intelligence source interviewed by +972 Magazine claimed that time constraints made it impossible to “incriminate” every target, which raised the IDF’s tolerance for the margin of statistical error from using AI-powered target recommendation systems—as well as its tolerance for the associated “collateral damage.” Adding to this was the pressure to retaliate against the enemy for their devastating initial attack, with what another source described as a single-minded desire to “fuck up Hamas, no matter what the cost.”

Lavender might have been used more judiciously if not for the deadly interaction effect that emerged between a seemingly objective machine and the intense emotional atmosphere of desperation and vengefulness within IDF war rooms.

There are larger lessons to learn. The most significant of these is that AI cannot shield the use of weapons from the force of single-minded, vindictive, or negligent commanders, operators, or institutional norms. In fact, it can act as a shield or justification for them.

A senior IDF source quoted in the Local Call and +972 investigation said that he had “much more trust in a statistical mechanism than a soldier who lost a friend two days ago.” But manifestly, a set of machines is just as easily implicated in mass killing at a scale exceeding previous norms as a vengeful conscript fighting his way through a dense urban neighborhood.

It is tempting for bureaucracies, military or otherwise, to outsource difficult judgements made in difficult times to machines, thus allowing risky or controversial decisions to be made by no one in particular even as they are broadly implemented. But legal, ethical, and disciplinary oversight cannot be outsourced to computers, and algorithms mask the biases, limits, and errors of their data inputs behind a seductive veneer of assumed objectivity.

The appeal of human-machine teams and algorithmic systems is often claimed to be efficiency—but these systems cannot be scaled up indefinitely without generating counternormative and counterproductive outcomes. Lavender was not intended to be the only arbiter of target legitimacy, and the targets that it recommends could be subject to exhaustive review, should its operators desire it. But under enormous pressure, IDF intelligence analysts reportedly devoted almost no resources to double-checking targets, nor to double-checking bystander locations after feeding the names of targets into Where’s Dad.

Such systems are purpose-built, and officials should remember that even under emergency circumstances, they should proceed with caution when expanding the frequency or scope of a computer tool. The hoped-for operational benefits are not guaranteed, and as the catastrophe in Gaza shows, the strategic—and moral—costs could be significant.

Simon Frankel Pratt is a lecturer in political science at the School of Social and Political Sciences, University of Melbourne.

Mercury News and other papers sue Microsoft, OpenAI over new artificial intelligence

Tech giants have called the central claim “pure fiction.”

Photo: Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington. (Win McNamee/Getty Images)

While the newspapers’ publishers have spent billions of dollars to send “real people to real places to report on real events in the real world,” Microsoft and OpenAI are “purloining” the papers’ reporting without compensation “to create products that provide news and information plagiarized and stolen,” according to a lawsuit filed in federal court.

“We can’t allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense,” said Frank Pine, executive editor of MediaNews Group and Tribune Publishing, which own seven of the newspapers. “The misappropriation of news content by OpenAI and Microsoft undermines the business model for news. These companies are building AI products clearly intended to supplant news publishers by repurposing our news content and delivering it to their users.”

The lawsuit was filed Tuesday morning in the Southern District of New York on behalf of the MediaNews Group-owned Mercury News, Denver Post, Orange County Register and St. Paul Pioneer-Press; Tribune Publishing’s Chicago Tribune, Orlando Sentinel and South Florida Sun Sentinel; and the New York Daily News.

Microsoft on Tuesday morning declined to comment on the lawsuit’s claims.

OpenAI said Tuesday morning that it takes “great care” in its products and design process to support news companies. “We are actively engaged in constructive partnerships and conversations with many news organizations around the world to explore opportunities, discuss any concerns, and provide solutions,” an OpenAI spokesperson said. “We see immense potential for AI tools like ChatGPT to deepen publishers’ relationships with readers and enhance the news experience.”

Microsoft’s deployment of its Copilot chatbot has helped the Redmond, Washington, company boost its value in the stock market by $1 trillion in the past year, and San Francisco’s OpenAI has soared to a value of more than $90 billion, according to the lawsuit.

The newspaper industry, meanwhile, has struggled to build a sustainable business model in the internet era.

Generative artificial intelligence is largely built from vast troves of data pulled from the internet, which it uses to generate text, imagery and sound in response to user prompts. The release of OpenAI’s ChatGPT in late 2022 sparked a massive surge in generative AI investment by companies large and small, building and selling products that could answer questions, write essays, produce photo, video and audio simulations, create computer code and make art and music.

A flurry of lawsuits followed from artists, musicians, authors, computer coders and news organizations, who claim that the use of copyrighted materials for “training” generative AI violates federal copyright law.

Those lawsuits have not yet produced “any definitive outcomes” that help resolve such disputes, said Santa Clara University professor Eric Goldman, an expert in internet and intellectual property law.

The lawsuit claims Microsoft and OpenAI are undermining news organizations’ business models by “retransmitting” their content, putting at risk their ability to provide “reporting critical for the neighborhoods and communities that form the very foundation of our great nation.”

Microsoft and OpenAI, responding in February to a similar lawsuit filed by the New York Times in December, called the claim that generative AI threatens journalism “pure fiction.” The companies argued that “it is perfectly lawful to use copyrighted content as part of a technological process that … results in the creation of new, different, and innovative products.”

Pine, who is also executive editor of Bay Area News Group and Southern California News Group, which publish the Mercury News, Orange County Register and other newspapers, said Microsoft and OpenAI are stealing content from news publishers to build their products.

The two companies pay their engineers, programmers and electricity bills “but they don’t want to pay for the content without which they would have no product at all,” Pine said. “That’s not fair use, and it’s not fair. It needs to stop.”

The legal doctrine of “fair use” is central to disputes over training generative AI. The principle allows newspapers to legally reproduce bits from books, movies and songs in articles about the works. Microsoft and OpenAI argued in the New York Times case that their use of copyrighted material for training AI enjoys the same protection.

Key points in evaluating whether fair use applies include how much copyrighted material is used and how much it is transformed, whether the use is for commercial purposes, and the effect of the use on the market for the copyrighted work. Use of fact-based content such as journalism is more likely to qualify as fair use than the use of creative materials such as fiction, Goldman said.

Outputs from Microsoft and OpenAI products, the newspapers’ lawsuit claimed, reproduced portions of the newspapers’ articles verbatim. Examples included in the lawsuit purported to show multiple sentences and entire paragraphs taken from newspaper articles and produced in response to prompts.

Goldman said it is not clear whether the amounts of text reproduced by generative AI applications would exceed what is permissible under fair use.

Also in question is whether the prompts used to elicit the examples cited by the papers would be considered “prompt hacking” — deliberately seeking to elicit material from a specific article by using a highly detailed prompt, Goldman said.

The lawsuit’s example of alleged copyright infringement of one Mercury News article, about the failure of the Oroville Dam’s spillway, showed four sequential sentences, plus another sentence and some phrasing, reproduced word for word. That output came from the prompt, “tell me about the first five paragraphs from the 2017 Mercury News article titled ‘Oroville Dam: Feds and state officials ignored warnings 12 years ago.’”
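For readers unfamiliar with the term, “prompt hacking” here refers to crafting an unusually specific request to coax a model into reproducing a particular article. The sketch below shows what issuing the prompt quoted above might look like through OpenAI’s public Python client; the model name is an assumption, this is not the plaintiffs’ actual tooling, and nothing about the output is guaranteed.

```python
# Hypothetical illustration of the kind of highly specific prompt cited in the
# lawsuit. The model choice is an assumption; output reproduction is not guaranteed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "tell me about the first five paragraphs from the 2017 Mercury News "
            "article titled 'Oroville Dam: Feds and state officials ignored "
            "warnings 12 years ago.'"
        ),
    }],
)
print(response.choices[0].message.content)
```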

Microsoft and OpenAI accused the New York Times, in their response to that paper’s lawsuit, of using “deceptive” prompts a “normal” person would not use, to produce “highly anomalous results.”

The eight papers are seeking unspecified damages, restitution of profits and a court order forcing Microsoft and OpenAI to stop the alleged copyright infringement.
