Problem Solving in Artificial Intelligence

A reflex agent in AI maps states directly to actions. When the mapping from states to actions is too large for such an agent to store or compute, the task is handed over to a problem-solving agent, which breaks the large problem into smaller subproblems and solves them one by one. The final integrated sequence of actions produces the desired outcome.

Depending on the problem and its working domain, different types of problem-solving agents are defined. They operate at an atomic level, without any visible internal state, using a problem-solving algorithm. A problem-solving agent works by precisely defining the problem and its possible solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, for solving a problem.

We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.

There are basically three types of problems in artificial intelligence:

1. Ignorable: Problems in which the solution steps can be ignored.

2. Recoverable: Problems in which the solution steps can be undone.

3. Irrecoverable: Problems in which the solution steps cannot be undone.

Steps for problem solving in AI: AI problems are closely tied to human activities and their nature, so a finite number of well-defined steps is needed to solve a problem in a way that makes the work easier.

The following steps are required to solve a problem:

  • Problem definition: A detailed specification of the inputs and of what constitutes an acceptable solution.
  • Problem analysis: Analyze the problem thoroughly.
  • Knowledge representation: Collect detailed information about the problem and define all possible solution techniques.
  • Problem solving: Select the best technique.

Components to formulate the associated problem: 

  • Initial State: The state from which the agent starts, and from which it begins working toward the specified goal.
  • Actions: The set of actions available to the agent, defined as a function of the state; all possible actions applicable in a given state are produced here.
  • Transition model: Describes what each action does; it takes a state and an action and returns the resulting state, which is passed on to the next stage.
  • Goal test: Determines whether a given state achieves the specified goal. Once the goal is reached, the agent stops acting and moves on to determining the cost of achieving the goal.
  • Path cost: Assigns a numeric cost to each path toward the goal, accounting for all hardware, software, and human working costs.
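The five components above can be sketched as a small Python class. This is an illustrative toy, assuming a simple route-finding task; the graph, city names, and method names are invented for the example and are not a standard API.

```python
class RouteProblem:
    """Toy route-finding problem illustrating the five formulation components."""

    def __init__(self, graph, start, goal):
        self.graph = graph          # {city: {neighbor: step cost}}
        self.initial_state = start  # component 1: initial state
        self.goal = goal

    def actions(self, state):
        # Component 2: actions available in a state (here: which city to move to).
        return list(self.graph[state])

    def transition(self, state, action):
        # Component 3: transition model -- the state an action leads to.
        return action

    def goal_test(self, state):
        # Component 4: goal test.
        return state == self.goal

    def path_cost(self, state, action):
        # Component 5: step cost, accumulated along a path.
        return self.graph[state][action]


graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
problem = RouteProblem(graph, "A", "C")
print(problem.actions("A"))         # ['B', 'C']
print(problem.goal_test("C"))       # True
print(problem.path_cost("A", "B"))  # 1
```

A search algorithm only ever touches the problem through these five hooks, which is what lets the same algorithm solve very different problems.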


Problem Solving in Artificial Intelligence

In this tutorial, you will study the problem-solving approach in Artificial Intelligence. You will learn how an agent tackles a problem and what steps are involved in solving it. By Monika Sharma. Last updated: April 12, 2023

Problem Solving in AI

The aim of Artificial Intelligence is to develop systems that can solve various problems on their own. The challenge is that, to understand a problem, a system must first convert it into a form it can process. That is, when an agent confronts a problem, it should first sense the problem, and the information gathered through sensing must be converted into a machine-understandable form. For this, the agent follows a particular sequence in which a specific format for representing the agent's knowledge is defined; each time a problem arises, the agent can follow that approach to find a solution to it.

Types of Problems in AI

The types of problems in artificial intelligence are:

1. Ignorable Problems

In ignorable problems, the solution steps can be ignored.

2. Recoverable Problems

In recoverable problems, the solution steps which you have already implemented can be undone.

3. Irrecoverable Problems

In irrecoverable problems, the solution steps which you have already implemented cannot be undone.

Steps for Problem Solving in AI

The steps involved in solving a problem (by an agent based on Artificial Intelligence ) are:

1. Define a problem

Whenever a problem arises, the agent must first define it precisely enough that a particular state space can be represented for it. Analyzing and defining the problem is a very important step, because if the problem as understood differs from the actual problem, the whole problem-solving process carried out by the agent is of no use.

2. Form the state space

Convert the problem statement into a state space. A state space is the collection of all possible valid states that an agent can be in. Here, only the states that can exist for the current problem are included; the rest are ignored while dealing with this particular problem.
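As a concrete illustration, assuming the classic water jug puzzle with a 4-gallon and a 3-gallon jug (a problem covered elsewhere in this tutorial series), the state space can be enumerated directly:

```python
from itertools import product

# A state is (x, y): the gallons of water currently in each jug.
# The state space is every combination allowed by the jug capacities.
CAP_A, CAP_B = 4, 3
state_space = [(x, y) for x, y in product(range(CAP_A + 1), range(CAP_B + 1))]
print(len(state_space))  # 20 valid states
```

Only these 20 states are considered while solving the jug puzzle; every other conceivable configuration of the world is ignored.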

3. Gather knowledge

Collect and isolate the knowledge the agent requires to solve the current problem. This knowledge is gathered both from the knowledge pre-embedded in the system and from the experience the agent has accumulated while solving the same type of problem earlier.

4. Plan (decide the data structure and control strategy)

A problem is not always isolated. It may involve related subproblems, or areas where decisions made about the current problem have side effects. So a well-suited data structure and a relevant control strategy must be chosen before attempting to solve the problem.

5. Applying and executing

After gathering the knowledge and planning the strategies, the knowledge should be applied and the plans executed systematically so as to reach the goal state in the most efficient and fruitful manner.
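Putting the steps together, here is a minimal sketch: the states and moves are formulated for the (4, 3) water jug puzzle, and a breadth-first search executes the plan. The move rules and the goal (measure exactly 2 gallons) are assumptions made for this example.

```python
from collections import deque

def successors(state):
    """Legal moves for the (4, 3) water jug puzzle: fill, empty, or pour."""
    x, y = state
    return {
        (4, y), (x, 3),                       # fill a jug to the brim
        (0, y), (x, 0),                       # empty a jug
        (max(0, x + y - 3), min(3, x + y)),   # pour jug A into jug B
        (min(4, x + y), max(0, x + y - 4)),   # pour jug B into jug A
    }

def bfs(start, is_goal):
    """Breadth-first search returning the shortest path of states."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

plan = bfs((0, 0), lambda s: s[0] == 2)
print(plan[-1])  # a state with exactly 2 gallons in the 4-gallon jug
```

Defining the problem, forming the state space, and encoding the move knowledge are all done before the search runs; the search itself is the "applying and executing" step.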

Components to Formulate the Associated Problem

  • Initial State
  • Actions
  • Transition Model
  • Goal Test
  • Path Costing


Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.

On its own or combined with other technologies (e.g., sensors, geolocation, robotics), AI can perform tasks that would otherwise require human intelligence or intervention. Digital assistants, GPS guidance, autonomous vehicles, and generative AI tools (like OpenAI's ChatGPT) are just a few examples of AI in the daily news and our daily lives.

As a field of computer science, artificial intelligence encompasses (and is often mentioned together with) machine learning and deep learning. These disciplines involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can ‘learn’ from available data and make increasingly accurate classifications or predictions over time.

Artificial intelligence has gone through many cycles of hype, but even to skeptics, the release of ChatGPT seems to mark a turning point. The last time generative AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in natural language processing (NLP). Today, generative AI can learn and synthesize not just human language but other data types including images, video, software code, and even molecular structures.

Applications for AI are growing every day. But as the hype around the use of AI tools in business takes off, conversations around AI ethics and responsible AI become critically important. For more on where IBM stands on these issues, please read Building trust in AI.


Weak AI—also known as narrow AI or artificial narrow intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more apt descriptor for this type of AI as it is anything but weak: it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM watsonx™, and self-driving vehicles.

Strong AI is made up of artificial general intelligence (AGI) and artificial super intelligence (ASI). AGI, or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would be self-aware with a consciousness that would have the ability to solve problems, learn, and plan for the future. ASI—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman and rogue computer assistant in  2001: A Space Odyssey.

Machine learning and deep learning are sub-disciplines of AI, and deep learning is a sub-discipline of machine learning.

Both machine learning and deep learning algorithms use neural networks to ‘learn’ from huge amounts of data. These neural networks are programmatic structures modeled after the decision-making processes of the human brain. They consist of layers of interconnected nodes that extract features from the data and make predictions about what the data represents.

Machine learning and deep learning differ in the types of neural networks they use, and the amount of human intervention involved. Classic machine learning algorithms use neural networks with an input layer, one or two ‘hidden’ layers, and an output layer. Typically, these algorithms are limited to supervised learning : the data needs to be structured or labeled by human experts to enable the algorithm to extract features from the data.
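A minimal illustration of supervised learning in this classic sense: a single-node network (a perceptron) trained on four human-labeled examples of the AND function. The data, learning rate, and epoch count are arbitrary choices for the sketch, not a recipe from any particular library.

```python
# Labeled training data: inputs paired with the human-provided target
# label -- the "structured" supervision classic algorithms depend on.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

for _ in range(20):                      # a few passes over the labeled data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # supervised signal: label vs. prediction
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a separating line; without the labels, the update rule has nothing to learn from.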

Deep learning algorithms use deep neural networks: networks composed of an input layer, three or more (often hundreds of) hidden layers, and an output layer. These multiple layers enable unsupervised learning: they automate the extraction of features from large, unlabeled, unstructured data sets. Because it doesn’t require human intervention, deep learning essentially enables machine learning at scale.

Generative AI refers to deep-learning models that can take raw data—say, all of Wikipedia or the collected works of Rembrandt—and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original data.

Generative models have been used for years in statistics to analyze numerical data. The rise of deep learning, however, made it possible to extend them to images, speech, and other complex data types. Among the first class of AI models to achieve this cross-over feat were variational autoencoders, or VAEs, introduced in 2013. VAEs were the first deep-learning models to be widely used for generating realistic images and speech.

“VAEs opened the floodgates to deep generative modeling by making models easier to scale,” said Akash Srivastava , an expert on generative AI at the MIT-IBM Watson AI Lab. “Much of what we think of today as generative AI started here.”

Early examples of models, including GPT-3, BERT, or DALL-E 2, have shown what’s possible. In the future, models will be trained on a broad set of unlabeled data that can be used for different tasks, with minimal fine-tuning. Systems that execute specific tasks in a single domain are giving way to broad AI systems that learn more generally and work across domains and problems. Foundation models, trained on large, unlabeled datasets and fine-tuned for an array of applications, are driving this shift.

As to the future of AI, when it comes to generative AI, it is predicted that foundation models will dramatically accelerate AI adoption in enterprise. Reducing labeling requirements will make it much easier for businesses to dive in, and the highly accurate, efficient AI-driven automation they enable will mean that far more companies will be able to deploy AI in a wider range of mission-critical situations. For IBM, the hope is that the computing power of foundation models can eventually be brought to every enterprise in a frictionless hybrid-cloud environment.


There are numerous, real-world applications for AI systems today. Below are some of the most common use cases:

Also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, speech recognition uses NLP to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search—Siri, for example—or provide more accessibility around texting in English or many widely-used languages.  See how Don Johnston used IBM Watson Text to Speech to improve accessibility in the classroom with our case study .

Online  virtual agents  and chatbots are replacing human agents along the customer journey. They answer frequently asked questions (FAQ) around topics, like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents , messaging apps, such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and  voice assistants .  See how Autodesk Inc. used IBM watsonx Assistant to speed up customer response times by 99% with our case study .

This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.  See how ProMare used IBM Maximo to set a new course for ocean research with our case study .

Adaptive robotics act on Internet of Things (IoT) device information, and structured and unstructured data to make autonomous decisions. NLP tools can understand human speech and react to what they are being told. Predictive analytics are applied to demand responsiveness, inventory and network optimization, preventative maintenance and digital manufacturing. Search and pattern recognition algorithms—which are no longer just predictive, but hierarchical—analyze real-time data, helping supply chains to react to machine-generated, augmented intelligence, while providing instant visibility and transparency. See how Hendrickson used IBM Sterling to fuel real-time transactions with our case study .

The weather models broadcasters rely on to make accurate forecasts consist of complex algorithms run on supercomputers. Machine-learning techniques enhance these models by making them more applicable and precise. See how Emnotion used IBM Cloud to empower weather-sensitive enterprises to make more proactive, data-driven decisions with our case study .

AI models can comb through large amounts of data and discover atypical data points within a dataset. These anomalies can raise awareness around faulty equipment, human error, or breaches in security.  See how Netox used IBM QRadar to protect digital businesses from cyberthreats with our case study .
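As a toy sketch of the idea (the sensor readings and the threshold are invented for illustration), atypical data points can be flagged with a simple z-score test:

```python
import statistics

# Flag readings more than 2 standard deviations from the mean -- a basic
# statistical stand-in for the anomaly-detection models described above.
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 35.7, 20.2, 20.1]
mu = statistics.mean(readings)
sigma = statistics.stdev(readings)
anomalies = [x for x in readings if abs(x - mu) / sigma > 2]
print(anomalies)  # [35.7]
```

Production systems use far more robust models, but the core idea is the same: learn what "typical" looks like, then surface the points that deviate from it.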

The idea of "a machine that thinks" dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

  • 1950:  Alan Turing publishes Computing Machinery and Intelligence  (link resides outside ibm.com) .  In this paper, Turing—famous for breaking the German ENIGMA code during WWII and often referred to as the "father of computer science"— asks the following question: "Can machines think?"  From there, he offers a test, now famously known as the "Turing Test," where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, as well as an ongoing concept within philosophy as it utilizes ideas around linguistics.
  • 1956:  John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.
  • 1967:  Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that "learned" through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled  Perceptrons , which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
  • 1980s:  Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
  • 1995 : Stuart Russell and Peter Norvig publish  Artificial Intelligence: A Modern Approach  (link resides outside ibm.com), which becomes one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiates computer systems on the basis of rationality and thinking vs. acting.
  • 1997:  IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess match (and rematch).
  • 2004 : John McCarthy writes a paper, What Is Artificial Intelligence?  (link resides outside ibm.com), and proposes an often-cited definition of AI.
  • 2011:  IBM Watson beats champions Ken Jennings and Brad Rutter at  Jeopardy!
  • 2015:  Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
  • 2016:  DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Later, Google purchased DeepMind for a reported USD 400 million.
  • 2023:  The rise of large language models (LLMs), such as ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pre-trained on vast amounts of raw, unlabeled data.



MIT News | Massachusetts Institute of Technology


AI accelerates problem-solving in complex scenarios


While Santa Claus may have a magical sleigh and nine plucky reindeer to help him deliver presents, for companies like FedEx, the optimization problem of efficiently routing holiday packages is so complicated that they often employ specialized software to find a solution.

This software, called a mixed-integer linear programming (MILP) solver, splits a massive optimization problem into smaller pieces and uses generic algorithms to try and find the best solution. However, the solver could take hours — or even days — to arrive at a solution.

The process is so onerous that a company often must stop the software partway through, accepting a solution that is not ideal but the best that could be generated in a set amount of time.

Researchers from MIT and ETH Zurich used machine learning to speed things up.

They identified a key intermediate step in MILP solvers that has so many potential solutions it takes an enormous amount of time to unravel, which slows the entire process. The researchers employed a filtering technique to simplify this step, then used machine learning to find the optimal solution for a specific type of problem.

Their data-driven approach enables a company to use its own data to tailor a general-purpose MILP solver to the problem at hand.

This new technique sped up MILP solvers between 30 and 70 percent, without any drop in accuracy. One could use this method to obtain an optimal solution more quickly or, for especially complex problems, a better solution in a tractable amount of time.

This approach could be used wherever MILP solvers are employed, such as by ride-hailing services, electric grid operators, vaccination distributors, or any entity faced with a thorny resource-allocation problem.

“Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

Wu wrote the paper with co-lead authors Sirui Li, an IDSS graduate student, and Wenbin Ouyang, a CEE graduate student; as well as Max Paulus, a graduate student at ETH Zurich. The research will be presented at the Conference on Neural Information Processing Systems.

Tough to solve

MILP problems have an exponential number of potential solutions. For instance, say a traveling salesperson wants to find the shortest path to visit several cities and then return to their city of origin. If there are many cities which could be visited in any order, the number of potential solutions might be greater than the number of atoms in the universe.  

“These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” Wu explains.
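The blow-up is easy to see numerically. For a symmetric traveling salesperson problem with n cities there are (n - 1)!/2 distinct tours, so a quick computation (the city counts here are arbitrary) shows the growth:

```python
import math

# Distinct tours for a symmetric TSP with n cities: (n - 1)! / 2.
# The count grows factorially, i.e. faster than any polynomial.
for n in (5, 10, 20, 60):
    print(n, math.factorial(n - 1) // 2)
```

Already at 60 cities the tour count (about 7 × 10^79) is comparable to the estimated number of atoms in the observable universe, which is why exhaustive search is hopeless and solvers must settle for suboptimal-but-good answers.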

An MILP solver employs an array of techniques and practical tricks that can achieve reasonable solutions in a tractable amount of time.

A typical solver uses a divide-and-conquer approach, first splitting the space of potential solutions into smaller pieces with a technique called branching. Then, the solver employs a technique called cutting to tighten up these smaller pieces so they can be searched faster.

Cutting uses a set of rules that tighten the search space without removing any feasible solutions. These rules are generated by a few dozen algorithms, known as separators, that have been created for different kinds of MILP problems. 
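The branching idea can be sketched on a tiny 0/1 knapsack, a simple integer program: fixing one variable to 0 or 1 splits the solution space into two smaller pieces, which are searched recursively. The item values and weights below are invented for illustration, and this toy enumerates both branches exhaustively; real MILP solvers add bounds and cutting planes to prune most of the tree.

```python
# Toy 0/1 knapsack: pick items to maximize value within a weight capacity.
values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

def best(i, cap):
    """Branch on item i: one subtree excludes it, the other includes it."""
    if i == len(values) or cap == 0:
        return 0
    skip = best(i + 1, cap)                       # branch: item i excluded
    take = 0
    if weights[i] <= cap:                         # branch: item i included
        take = values[i] + best(i + 1, cap - weights[i])
    return max(skip, take)

print(best(0, capacity))  # 220
```

Each recursive call is a smaller subproblem of the original; cutting, in a real solver, would shrink these subproblems further before they are searched.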

Wu and her team found that the process of identifying the ideal combination of separator algorithms to use is, in itself, a problem with an exponential number of solutions.

“Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” she says.

Shrinking the solution space

She and her collaborators devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to around 20 options. This filtering mechanism draws on the principle of diminishing marginal returns, which says that the most benefit would come from a small set of algorithms, and adding additional algorithms won’t bring much extra improvement.

Then they use a machine-learning model to pick the best combination of algorithms from among the 20 remaining options.
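A hypothetical sketch of the diminishing-returns filtering idea: greedily keep the separator with the largest marginal gain and stop once gains become negligible. The separator names and the gain function below are made up for illustration; in the real system, scores would come from measured solver performance on training instances.

```python
# Greedy filtering under diminishing marginal returns: add the separator
# with the largest marginal gain, stop when the gain falls below a threshold.

def greedy_filter(separators, gain, threshold=0.01):
    chosen = []
    while True:
        remaining = [s for s in separators if s not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda s: gain(chosen + [s]) - gain(chosen))
        marginal = gain(chosen + [best]) - gain(chosen)
        if marginal < threshold:
            break  # diminishing returns: extra separators add little
        chosen.append(best)
    return chosen

# Toy stand-in for measured speedup per separator (names are invented).
weights = {"gomory": 0.5, "clique": 0.3, "cover": 0.15, "flow": 0.004}
score = lambda subset: sum(weights[s] for s in subset)
print(greedy_filter(list(weights), score))  # keeps the high-gain separators only
```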

This model is trained with a dataset specific to the user’s optimization problem, so it learns to choose algorithms that best suit the user’s particular task. Since a company like FedEx has solved routing problems many times before, using real data gleaned from past experience should lead to better solutions than starting from scratch each time.

The model learns through an iterative process known as contextual bandits, a form of reinforcement learning: it picks a potential solution, gets feedback on how good it was, and then tries again to find a better one.
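A stripped-down sketch of that feedback loop, using an epsilon-greedy bandit (the contextual version additionally conditions each choice on features of the problem instance; the arm rewards here are invented stand-ins for observed solver speedups):

```python
import random

# Epsilon-greedy bandit: mostly exploit the best-looking arm, sometimes
# explore a random one, and refine reward estimates from noisy feedback.
def run_bandit(rewards, steps=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(rewards)
    counts = [0] * n
    means = [0.0] * n
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)          # explore
        else:
            arm = means.index(max(means))   # exploit current estimate
        r = rewards[arm] + rng.gauss(0, 0.05)  # noisy feedback signal
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running average
    return means

means = run_bandit([0.2, 0.5, 0.35])
print(means.index(max(means)))  # index of the arm judged best
```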

This data-driven approach accelerated MILP solvers between 30 and 70 percent without any drop in accuracy. Moreover, the speedup was similar when they applied it to a simpler, open-source solver and a more powerful, commercial solver.

In the future, Wu and her collaborators want to apply this approach to even more complex MILP problems, where gathering labeled data to train the model could be especially challenging. Perhaps they can train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem, she says. The researchers are also interested in interpreting the learned model to better understand the effectiveness of different separator algorithms.

This research is supported, in part, by Mathworks, the National Science Foundation (NSF), the MIT Amazon Science Hub, and MIT’s Research Support Committee.

The Intersection of Math and AI: A New Era in Problem-Solving

By Whitney Clavin, California Institute of Technology (Caltech) December 11, 2023

Connecting Math and Machine Learning

The Mathematics and Machine Learning 2023 conference at Caltech highlights the growing integration of machine learning in mathematics, offering new solutions to complex problems and advancing algorithm development.

The conference explores burgeoning connections between the two fields.

Traditionally, mathematicians jot down their formulas using paper and pencil, seeking out what they call pure and elegant solutions. In the 1970s, they hesitantly began turning to computers to assist with some of their problems. Decades later, computers are often used to crack the hardest math puzzles. Now, in a similar vein, some mathematicians are turning to machine learning tools to aid in their numerical pursuits.

Embracing Machine Learning in Mathematics

“Mathematicians are beginning to embrace machine learning,” says Sergei Gukov, the John D. MacArthur Professor of Theoretical Physics and Mathematics at Caltech, who put together the Mathematics and Machine Learning 2023 conference, which is taking place at Caltech December 10–13.

“There are some mathematicians who may still be skeptical about using the tools,” Gukov says. “The tools are mischievous and not as pure as using paper and pencil, but they work.”

Machine Learning: A New Era in Mathematical Problem Solving

Machine learning is a subfield of AI, or artificial intelligence, in which a computer program is trained on large datasets and learns to find new patterns and make predictions. The conference, the first put on by the new Richard N. Merkin Center for Pure and Applied Mathematics, will help bridge the gap between developers of machine learning tools (the data scientists) and the mathematicians. The goal is to discuss ways in which the two fields can complement each other.

Mathematics and Machine Learning: A Two-Way Street

“It’s a two-way street,” says Gukov, who is the director of the new Merkin Center, which was established by Caltech Trustee Richard Merkin.

“Mathematicians can help come up with clever new algorithms for machine learning tools like the ones used in generative AI programs like ChatGPT, while machine learning can help us crack difficult math problems.”

Yi Ni, a professor of mathematics at Caltech, plans to attend the conference, though he says he does not use machine learning in his own research, which involves the field of topology and, specifically, the study of mathematical knots in lower dimensions. “Some mathematicians are more familiar with these advanced tools than others,” Ni says. “You need to know somebody who is an expert in machine learning and willing to help. Ultimately, I think AI for math will become a subfield of math.”

The Riemann Hypothesis and Machine Learning

One tough problem that may unravel with the help of machine learning, according to Gukov, is known as the Riemann hypothesis. Named after the 19th-century mathematician Bernhard Riemann, this problem is one of seven Millennium Problems selected by the Clay Mathematics Institute; a $1 million prize will be awarded for the solution to each problem.

The Riemann hypothesis centers around a formula known as the Riemann zeta function, which packages information about prime numbers. If proved true, the hypothesis would provide a new understanding of how prime numbers are distributed. Machine learning tools could help crack the problem by providing a new way to run through more possible iterations of the problem.
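For intuition only: for real s > 1, the zeta function is the sum of 1/n^s, which can be checked numerically. (The hypothesis itself concerns the zeros of the function's analytic continuation in the complex plane, which this sketch does not touch.)

```python
import math

# Partial sums of the series defining zeta(s) for real s > 1.
def zeta_partial(s, terms=100_000):
    return sum(1.0 / n**s for n in range(1, terms + 1))

# Known closed form: zeta(2) = pi**2 / 6 (the Basel problem).
print(zeta_partial(2), math.pi**2 / 6)
```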

Mathematicians and Machine Learning: A Synergistic Relationship

“Machine learning tools are very good at recognizing patterns and analyzing very complex problems,” Gukov says.

Ni agrees that machine learning can serve as a helpful assistant. “Machine learning solutions may not be as beautiful, but they can find new connections,” he says. “But you still need a mathematician to turn the questions into something computers can solve.”

Knot Theory and Machine Learning

Gukov has used machine learning himself to untangle problems in knot theory. Knot theory is the study of abstract knots, which are similar to the knots you might find on a shoestring, but the ends of the strings are closed into loops. These mathematical knots can be entwined in various ways, and mathematicians like Gukov want to understand their structures and how they relate to each other. The work has relationships to other fields of mathematics such as representation theory and quantum algebra, and even quantum physics.

In particular, Gukov and his colleagues are working to solve what is called the smooth Poincaré conjecture in four dimensions. The original Poincaré conjecture, which is also a Millennium Problem, was proposed by mathematician Henri Poincaré early in the 20th century. It was ultimately solved from 2002 to 2003 by Grigori Perelman (who famously turned down his prize of $1 million). The problem involves comparing spheres to certain types of manifolds that look like spheres; manifolds are shapes that are projections of higher-dimensional objects onto lower dimensions. Gukov says the problem is like asking, “Are objects that look like spheres really spheres?”

The four-dimensional smooth Poincaré conjecture holds that, in four dimensions, all manifolds that look like spheres are indeed actually spheres. In an attempt to solve this conjecture, Gukov and his team develop a machine learning approach to evaluate so-called ribbon knots.

“Our brain cannot handle four dimensions, so we package shapes into knots,” Gukov says. “A ribbon is where the string in a knot pierces through a different part of the string in three dimensions but doesn’t pierce through anything in four dimensions. Machine learning lets us analyze the ‘ribboness’ of knots, a yes-or-no property of knots that has applications to the smooth Poincaré conjecture.”

“This is where machine learning comes to the rescue,” write Gukov and his team in a preprint paper titled “Searching for Ribbons with Machine Learning.” “It has the ability to quickly search through many potential solutions and, more importantly, to improve the search based on the successful ‘games’ it plays. We use the word ‘games’ since the same types of algorithms and architectures can be employed to play complex board games, such as Go or chess, where the goals and winning strategies are similar to those in math problems.”

The Interplay of Mathematics and Machine Learning Algorithms

On the flip side, math can help in developing machine learning algorithms, Gukov explains. A mathematical mindset, he says, can bring fresh ideas to the development of the algorithms behind AI tools. He cites Peter Shor as an example of a mathematician who brought insight to computer science problems. Shor, who graduated from Caltech with a bachelor’s degree in mathematics in 1981, famously came up with what is known as Shor’s algorithm, a set of rules that could allow quantum computers of the future to factor integers faster than typical computers, thereby breaking digital encryption codes.

Today’s machine learning algorithms are trained on large sets of data. They churn through mountains of data on language, images, and more to recognize patterns and come up with new connections. However, data scientists don’t always know how the programs reach their conclusions. The inner workings are hidden in a so-called “black box.” A mathematical approach to developing the algorithms would reveal what’s happening “under the hood,” as Gukov says, leading to a deeper understanding of how the algorithms work and thus can be improved.

“Math,” says Gukov, “is fertile ground for new ideas.”

The conference will take place at the Merkin Center on the eighth floor of Caltech Hall.


What is AI (artificial intelligence)?


Humans and machines: a match made in productivity heaven. Our species wouldn’t have gotten very far without our mechanized workhorses. From the wheel that revolutionized agriculture to the screw that held together increasingly complex construction projects to the robot-enabled assembly lines of today, machines have made life as we know it possible. And yet, despite their seemingly endless utility, humans have long feared machines—more specifically, the possibility that machines might someday acquire human intelligence and strike out on their own.


But we tend to view the possibility of sentient machines with fascination as well as fear. This curiosity has helped turn science fiction into actual science. Twentieth-century theoreticians, like computer scientist and mathematician Alan Turing, envisioned a future where machines could perform functions faster than humans. The work of Turing and others soon made this a reality. Personal calculators became widely available in the 1970s, and by 2016, the US census showed that 89 percent of American households had a computer. Machines—smart machines at that—are now just an ordinary part of our lives and culture.

Those smart machines are also getting faster and more complex. Some computers have now crossed the exascale threshold, meaning they can perform as many calculations in a single second as an individual could in 31,688,765,000 years. And beyond computation, which machines have long been faster at than we have, computers and other devices are now acquiring skills and perception that were once unique to humans and a few other species.
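The quoted figure is easy to sanity-check: an exascale machine performs 10^18 calculations per second, so at one calculation per second a person would need 10^18 seconds.

```python
# Sanity-checking the exascale comparison: 1e18 calculations per second
# versus one calculation per second for a person.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year
years = 1e18 / SECONDS_PER_YEAR
print(f"{years:,.0f} years")  # roughly 31.7 billion years
```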


AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem-solving, and even exercising creativity. You’ve probably interacted with AI even if you don’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are some customer service chatbots that pop up to help you navigate websites.

Applied AI —simply, artificial intelligence applied to real-world problems—has serious implications for the business world. By using artificial intelligence, companies have the potential to make business more efficient and profitable. But ultimately, the value of AI isn’t in the systems themselves. Rather, it’s in how companies use these systems to assist humans—and their ability to explain to shareholders and the public what these systems do—in a way that builds trust and confidence.

For more about AI, its history, its future, and how to apply it in business, read on.



What is machine learning?

Machine learning is a form of artificial intelligence that can adapt to a wide range of inputs, including large sets of historical data, synthesized data, or human inputs. (Some machine learning algorithms are specialized in training themselves to detect patterns; this is called deep learning. See Exhibit 1.) These algorithms can detect patterns and learn how to make predictions and recommendations by processing data, rather than by receiving explicit programming instruction. Some algorithms can also adapt in response to new data and experiences to improve over time.
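A minimal sketch of "learning from data rather than explicit instruction": fitting a line to a handful of points by gradient descent. The data here is invented for illustration.

```python
# Fit y = a*x + b by gradient descent: the program is never told the
# slope or intercept; it infers them by repeatedly reducing its error.
data = [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8)]  # roughly y = 2x + 1

a, b = 0.0, 0.0
lr = 0.02
for _ in range(5000):
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # close to the underlying slope and intercept
```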

The volume and complexity of data that is now being generated, too vast for humans to process and apply efficiently, has increased the potential of machine learning, as well as the need for it. In the years since its widespread deployment, which began in the 1970s, machine learning has had an impact on a number of industries, including achievements in medical-imaging analysis and high-resolution weather forecasting.


What is deep learning?

Deep learning is a more advanced version of machine learning that is particularly adept at processing a wider range of data resources (text as well as unstructured data including images), requires even less human intervention, and can often produce more accurate results than traditional machine learning. Deep learning uses neural networks—based on the ways neurons interact in the human brain—to ingest data and process it through multiple neuron layers that recognize increasingly complex features of the data. For example, an early layer might recognize something as being in a specific shape; building on this knowledge, a later layer might be able to identify the shape as a stop sign. Similar to machine learning, deep learning uses iteration to self-correct and improve its prediction capabilities. For example, once it “learns” what a stop sign looks like, it can recognize a stop sign in a new image.
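The layer-on-layer idea can be sketched as a tiny forward pass: each layer is a weighted combination plus a nonlinearity, and later layers build on the features computed by earlier ones. The weights below are made up, not trained.

```python
# A two-layer forward pass in plain Python: dense layer -> ReLU -> dense layer.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(weights, biases, v):
    # each output is a weighted sum of the inputs plus a bias
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

x = [0.5, -1.0, 2.0]                                       # input features
h = relu(dense([[1, 0, 1], [0, 1, -1]], [0.0, 0.0], x))    # early layer: simple features
y = dense([[0.7, -0.2]], [0.1], h)                         # later layer: combines them
print(y)
```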

What is generative AI?

Case study: Vistra and the Martin Lake Power Plant

Vistra is a large power producer in the United States, operating plants in 12 states with a capacity to power nearly 20 million homes. Vistra has committed to achieving net-zero emissions by 2050. In support of this goal, as well as to improve overall efficiency, QuantumBlack, AI by McKinsey worked with Vistra to build and deploy an AI-powered heat rate optimizer (HRO) at one of its plants.

“Heat rate” is a measure of the thermal efficiency of the plant; in other words, it’s the amount of fuel required to produce each unit of electricity. To reach the optimal heat rate, plant operators continuously monitor and tune hundreds of variables, such as steam temperatures, pressures, oxygen levels, and fan speeds.

Vistra and a McKinsey team, including data scientists and machine learning engineers, built a multilayered neural network model. The model combed through two years’ worth of data at the plant and learned which combination of factors would attain the most efficient heat rate at any point in time. When the models were accurate to 99 percent or higher and run through a rigorous set of real-world tests, the team converted them into an AI-powered engine that generates recommendations every 30 minutes for operators to improve the plant’s heat rate efficiency. One seasoned operations manager at the company’s plant in Odessa, Texas, said, “There are things that took me 20 years to learn about these power plants. This model learned them in an afternoon.”

Overall, the AI-powered HRO helped Vistra achieve the following:

  • approximately 1.6 million metric tons of carbon abated annually
  • 67 power generators optimized
  • $60 million saved in about a year

Read more about the Vistra story here.

Generative AI (gen AI) is an AI model that generates content in response to a prompt. It’s clear that generative AI tools like ChatGPT and DALL-E (a tool for AI-generated art) have the potential to change how a range of jobs  are performed. Much is still unknown about gen AI’s potential, but there are some questions we can answer—like how gen AI models are built, what kinds of problems they are best suited to solve, and how they fit into the broader category of AI and machine learning.

For more on generative AI and how it stands to affect business and society, check out our Explainer “What is generative AI?”

What is the history of AI?

The term “artificial intelligence” was coined in 1956 by computer scientist John McCarthy for a workshop at Dartmouth. But he wasn’t the first to write about the concepts we now describe as AI. Alan Turing introduced the concept of the “imitation game” in a 1950 paper. That’s the test of a machine’s ability to exhibit intelligent behavior, now known as the “Turing test.” He believed researchers should focus on areas that don’t require too much sensing and action, things like games and language translation. Research communities dedicated to concepts like computer vision, natural language understanding, and neural networks are, in many cases, several decades old.

MIT roboticist Rodney Brooks shared details on the four previous stages of AI:

Symbolic AI (1956). Symbolic AI is also known as classical AI, or even GOFAI (good old-fashioned AI). The key concept here is the use of symbols and logical reasoning to solve problems. For example, we know a German shepherd is a dog , which is a mammal; all mammals are warm-blooded; therefore, a German shepherd should be warm-blooded.
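The German-shepherd chain can be written as a tiny forward-chaining rule system in the symbolic-AI spirit, with the world knowledge hand-encoded as facts and rules:

```python
# Forward chaining: repeatedly apply hand-written rules to known facts
# until no new facts can be derived.

facts = {("german_shepherd", "is_a", "dog")}
rules = [
    (("?", "is_a", "dog"), ("?", "is_a", "mammal")),       # every dog is a mammal
    (("?", "is_a", "mammal"), ("?", "has", "warm_blood")), # mammals are warm-blooded
]

changed = True
while changed:
    changed = False
    for (_, p, o), (_, p2, o2) in rules:
        for subj, fp, fo in list(facts):
            if fp == p and fo == o and (subj, p2, o2) not in facts:
                facts.add((subj, p2, o2))
                changed = True

print(("german_shepherd", "has", "warm_blood") in facts)  # → True
```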

The main problem with symbolic AI is that humans still need to manually encode their knowledge of the world into the symbolic AI system, rather than allowing it to observe and encode relationships on its own. As a result, symbolic AI systems struggle with situations involving real-world complexity. They also lack the ability to learn from large amounts of data.

Symbolic AI was the dominant paradigm of AI research until the late 1980s.

Neural networks (1954, 1969, 1986, 2012). Neural networks are the technology behind the recent explosive growth of gen AI. Loosely modeling the ways neurons interact in the human brain, neural networks ingest data and process it through multiple iterations that learn increasingly complex features of the data. The neural network can then make determinations about the data, learn whether a determination is correct, and use what it has learned to make determinations about new data. For example, once it “learns” what an object looks like, it can recognize the object in a new image.

Neural networks were first proposed in 1943 in an academic paper by neurophysiologist Warren McCulloch and logician Walter Pitts. Decades later, in 1969, two MIT researchers mathematically demonstrated that neural networks could perform only very basic tasks. In 1986, there was another reversal, when computer scientist and cognitive psychologist Geoffrey Hinton and colleagues solved the neural network problem presented by the MIT researchers. In the 1990s, computer scientist Yann LeCun made major advancements in neural networks’ use in computer vision, while Jürgen Schmidhuber advanced the application of recurrent neural networks as used in language processing.

In 2012, Hinton and two of his students highlighted the power of deep learning. They applied Hinton’s algorithm to neural networks with many more layers than was typical, sparking a new focus on deep neural networks. These have been the main AI approaches of recent years.

Traditional robotics (1968). During the first few decades of AI, researchers built robots to advance research. Some robots were mobile, moving around on wheels, while others were fixed, with articulated arms. Robots used the earliest attempts at computer vision to identify and navigate through their environments or to understand the geometry of objects and maneuver them. This could include moving around blocks of various shapes and colors. Most of these robots, just like the ones that have been used in factories for decades, rely on highly controlled environments with thoroughly scripted behaviors that they perform repeatedly. They have not contributed significantly to the advancement of AI itself.

But traditional robotics did have significant impact in one area, through a process called “simultaneous localization and mapping” (SLAM). SLAM algorithms helped contribute to self-driving cars and are used in consumer products like vacuum cleaning robots and quadcopter drones. Today, this work has evolved into behavior-based robotics, also referred to as haptic technology because it responds to human touch.

Behavior-based robotics (1985). In the real world, there aren’t always clear instructions for navigation, decision making, or problem-solving. Insects, researchers observed, navigate very well (and are evolutionarily very successful) with few neurons. Behavior-based robotics researchers took inspiration from this, looking for ways robots could solve problems with partial knowledge and conflicting instructions. These behavior-based robots are embedded with neural networks.


What is artificial general intelligence?

The term “artificial general intelligence” (AGI) was coined to describe AI systems that possess capabilities comparable to those of a human. In theory, AGI could someday replicate human-like cognitive abilities including reasoning, problem-solving, perception, learning, and language comprehension. But let’s not get ahead of ourselves: the key word here is “someday.” Most researchers and academics believe we are decades away from realizing AGI; some even predict we won’t see AGI this century, or ever. Rodney Brooks, an MIT roboticist and cofounder of iRobot, doesn’t believe AGI will arrive until the year 2300.

The timing of AGI’s emergence may be uncertain. But when it does emerge—and it likely will—it’s going to be a very big deal, in every aspect of our lives. Executives should begin working now to understand the path to machines achieving human-level intelligence and to prepare for the transition to a more automated world.

For more on AGI, including the four previous attempts at AGI, read our Explainer.

What is narrow AI?

Narrow AI is the application of AI techniques to a specific and well-defined problem, such as chatbots like ChatGPT, algorithms that spot fraud in credit card transactions, and natural-language-processing engines that quickly process thousands of legal documents. Most current AI applications fall into the category of narrow AI. AGI is, by contrast, AI that’s intelligent enough to perform a broad range of tasks.

How is the use of AI expanding?

AI is a big story for all kinds of businesses, but some companies are clearly moving ahead of the pack. Our state of AI in 2022 survey showed that adoption of AI models has more than doubled since 2017—and investment has increased apace. What’s more, the specific areas in which companies see value from AI have evolved, from manufacturing and risk to the following:

  • marketing and sales
  • product and service development
  • strategy and corporate finance

One group of companies is pulling ahead of its competitors. Leaders of these organizations consistently make larger investments in AI, level up their practices to scale faster, and hire and upskill the best AI talent. More specifically, they link AI strategy to business outcomes and “industrialize” AI operations by designing modular data architecture that can quickly accommodate new applications.

What are the limitations of AI models? How can these potentially be overcome?

We have yet to see the long-tail effect of gen AI models. This means there are some inherent risks involved in using them—both known and unknown.

The outputs gen AI models produce may often sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, sometimes it’s biased (because it’s built on the gender, racial, and other biases of the internet and society more generally).

It can also be manipulated to enable unethical or criminal activity. Since gen AI models burst onto the scene, organizations have become aware of users trying to “jailbreak” the models—that means trying to get them to break their own rules and deliver biased, harmful, misleading, or even illegal content. Gen AI organizations are responding to this threat in two ways: for one thing, they’re collecting feedback from users on inappropriate content. They’re also combing through their databases, identifying prompts that led to inappropriate content, and training the model against these types of generations.

But awareness and even action don’t guarantee that harmful content won’t slip through the dragnet. Organizations that rely on gen AI models should be aware of the reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.

These risks can be mitigated, however, in a few ways. “Whenever you use a model,” says McKinsey partner Marie El Hoyek, “you need to be able to counter biases and instruct it not to use inappropriate or flawed sources, or things you don’t trust.” How? For one thing, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf gen AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases.

It’s also important to keep a human in the loop (that is, to make sure a real human checks the output of a gen AI model before it is published or used) and avoid using gen AI models for critical decisions, such as those involving significant resources or human welfare.

It can’t be emphasized enough that this is a new field. The landscape of risks and opportunities is likely to continue to change rapidly in the coming years. As gen AI becomes increasingly incorporated into business, society, and our personal lives, we can also expect a new regulatory climate to take shape. As organizations experiment—and create value—with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.

What is the AI Bill of Rights?

The Blueprint for an AI Bill of Rights, prepared by the US government in 2022, provides a framework for how government, technology companies, and citizens can collectively ensure more accountable AI. As AI has become more ubiquitous, concerns have surfaced about a potential lack of transparency surrounding the functioning of gen AI systems, the data used to train them, issues of bias and fairness, potential intellectual property infringements, privacy violations, and more. The Blueprint comprises five principles that the White House says should “guide the design, use, and deployment of automated systems to protect [users] in the age of artificial intelligence.” They are as follows:

  • The right to safe and effective systems. Systems should undergo predeployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate that they are adhering to their intended use.
  • Protections against discrimination by algorithms. Algorithmic discrimination is when automated systems contribute to unjustified different treatment of people based on their race, color, ethnicity, sex, religion, age, and more.
  • Protections against abusive data practices, via built-in safeguards. Users should also have agency over how their data is used.
  • The right to know that an automated system is being used, and a clear explanation of how and why it contributes to outcomes that affect the user.
  • The right to opt out, and access to a human who can quickly consider and fix problems.

At present, more than 60 countries or blocs have national strategies governing the responsible use of AI (Exhibit 2). These include Brazil, China, the European Union, Singapore, South Korea, and the United States. The approaches taken vary from guidelines-based approaches, such as the Blueprint for an AI Bill of Rights in the United States, to comprehensive AI regulations that align with existing data protection and cybersecurity regulations, such as the EU’s AI Act, due in 2024.

There are also collaborative efforts between countries to set out standards for AI use. The US–EU Trade and Technology Council is working toward greater alignment between Europe and the United States. The Global Partnership on Artificial Intelligence, formed in 2020, has 29 members including Brazil, Canada, Japan, the United States, and several European countries.

Even though AI regulations are still being developed, organizations should act now to avoid legal, reputational, organizational, and financial risks. In an environment of public concern, a misstep could be costly. Here are six no-regrets, preemptive actions organizations can implement today:

  • Transparency. Create an inventory of models, classifying them in accordance with regulation, and record all usage across the organization in a way that is clear to those inside and outside the organization.
  • Governance. Implement a governance structure for AI and gen AI that ensures sufficient oversight, authority, and accountability both within the organization and with third parties and regulators.
  • Data management. Proper data management includes awareness of data sources, data classification, data quality and lineage, intellectual property, and privacy management.
  • Model management. Organizations should establish principles and guardrails for AI development and use them to ensure all AI models uphold fairness and bias controls.
  • Cybersecurity and technology management. Establish strong cybersecurity and technology to ensure a secure environment where unauthorized access or misuse is prevented.
  • Individual rights. Make users aware when they are interacting with an AI system, and provide clear instructions for use.

How can organizations scale up their AI efforts from ad hoc projects to full integration?

Most organizations are dipping a toe into the AI pool—not cannonballing. Slow progress toward widespread adoption is likely due to cultural and organizational barriers. But leaders who effectively break down these barriers will be best placed to capture the opportunities of the AI era. And—crucially—companies that can’t take full advantage of AI are already being sidelined by those that can, in industries like auto manufacturing and financial services.

To scale up AI, organizations can make three major shifts:

  • Move from siloed work to interdisciplinary collaboration. AI projects shouldn’t be limited to discrete pockets of organizations. Rather, AI has the biggest impact when it’s employed by cross-functional teams with a mix of skills and perspectives, enabling AI to address broad business priorities.
  • Empower frontline data-based decision making. AI has the potential to enable faster, better decisions at all levels of an organization. But for this to work, people at all levels need to trust the algorithms’ suggestions and feel empowered to make decisions. (Equally, people should be able to override the algorithm or make suggestions for improvement when necessary.)
  • Adopt and bolster an agile mindset. The agile test-and-learn mindset will help reframe mistakes as sources of discovery, allaying the fear of failure and speeding up development.



This article was updated in April 2024; it was originally published in April 2023.


How Leaders Are Using AI As A Problem-Solving Tool


Leaders face more complex decisions than ever before. For example, many must deliver new and better services for their communities while meeting sustainability and equity goals. At the same time, many need to find ways to operate and manage their budgets more efficiently. So how can these leaders make complex decisions and get them right in an increasingly tricky business landscape? The answer lies in harnessing technological tools like Artificial Intelligence (AI).


What is AI?

AI can help leaders in several different ways. It can be used to process and make decisions on large amounts of data more quickly and accurately. AI can also help identify patterns and trends that would otherwise be undetectable. This information can then be used to inform strategic decision-making, which is why AI is becoming an increasingly important tool for businesses and governments. A recent study by PwC found that 52% of companies accelerated their AI adoption plans in the last year. In addition, 86% of companies believe that AI will become a mainstream technology at their company imminently. As AI becomes more central in the business world, leaders need to understand how this technology works and how they can best integrate it into their operations.

At its simplest, AI is a computer system that can learn and work independently without human intervention. This ability makes AI a powerful tool. With AI, businesses and public agencies can automate tasks, get insights from data, and make decisions with little or no human input. Consequently, AI can be a valuable problem-solving tool for leaders across the private and public sectors, primarily through three methods.

1) Automation

One of the most beneficial ways AI can help leaders is by automating tasks, which can free up time to focus on other essential work. For example, AI can help a city save valuable human resources by automating parking enforcement. In addition, this will help improve the accuracy of detecting violations and prevent costly mistakes. Automation can also help with tasks like appointment scheduling and fraud detection.

2) Insights from data

Another way AI can help leaders solve problems is by providing insights from data. With AI, businesses can gather large amounts of data and then use that data to make better decisions. For example, suppose a company is trying to decide which products to sell. In that case, AI can be used to gather data about customer buying habits and then use that data to make recommendations about which products to market.
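As a toy illustration (not any particular vendor's system), a recommendation of this kind can be as simple as counting which products co-occur with a given item in past orders. The order data and product names below are made up:

```python
from collections import Counter

def recommend(orders, product, k=2):
    """Count how often other products co-occur with `product` in past
    orders and return the k most frequent as recommendations."""
    co = Counter()
    for order in orders:
        if product in order:
            co.update(p for p in order if p != product)
    return [p for p, _ in co.most_common(k)]

# Hypothetical purchase history: each order is a set of products bought together.
orders = [
    {"laptop", "mouse"},
    {"laptop", "mouse", "keyboard"},
    {"laptop", "keyboard"},
    {"phone", "charger"},
]
print(recommend(orders, "laptop"))  # mouse and keyboard lead
```

Real systems use far richer models (collaborative filtering, learned embeddings), but the principle is the same: let the accumulated data drive the recommendation.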


3) Simulations

Finally, AI can help leaders solve problems by allowing them to create simulations. With AI, organizations can test out different decision scenarios and see what the potential outcomes could be. This can help leaders make better decisions by examining the consequences of their choices. For example, a city might use AI to simulate different traffic patterns to see how a new road layout would impact congestion.
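A minimal sketch of this idea, using a deliberately simplified queue model (the capacities and arrival rates below are invented for illustration, not a real traffic study):

```python
import random

def simulate_congestion(capacity, arrival_rate, steps=1000, seed=0):
    """Toy queue model: each step a random number of cars arrives
    (mean = arrival_rate) and up to `capacity` cars clear the
    intersection; the rest pile up in a backlog."""
    rng = random.Random(seed)
    queue = 0
    total = 0
    for _ in range(steps):
        # Binomial arrivals with mean `arrival_rate`.
        queue += sum(1 for _ in range(arrival_rate * 2) if rng.random() < 0.5)
        queue = max(0, queue - capacity)
        total += queue
    return total / steps  # average backlog per step

# Compare two hypothetical road layouts before committing to construction.
print(simulate_congestion(capacity=5, arrival_rate=5))
print(simulate_congestion(capacity=6, arrival_rate=5))
```

Even this crude model shows the value of simulation: a small capacity increase collapses the average backlog, a conclusion a leader can test cheaply in software before spending on asphalt.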

Choosing the Right Tools

“Artificial intelligence and machine learning technologies can revolutionize how governments and businesses solve real-world problems,” said Chris Carson, CEO of Hayden AI, a global leader in intelligent enforcement technologies powered by artificial intelligence. His company addresses a problem once thought unsolvable in the transit world: managing illegal parking in bus lanes in a cost-effective, scalable way.

Illegal parking in bus lanes is a major problem for cities and their transit agencies. Cars and trucks illegally parked in bus lanes force buses to merge into general traffic lanes, significantly slowing down transit service and making riders’ trips longer. That’s where a company like Hayden AI comes in. “Hayden AI uses artificial intelligence and machine learning algorithms to detect and process illegal parking in bus lanes in real time so that cities can take proactive measures to address the problem,” Carson observes.


In this case, an AI-powered camera system is installed on each bus. The camera system uses computer vision to “watch” the street for illegal parking in the bus lane. When it detects a traffic violation, it sends the data back to the parking authority. This allows the parking authority to take action, such as sending a ticket to the offending vehicle’s owner.
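The flow described above might be sketched as follows (the `Detection` type, confidence threshold, and plate values are hypothetical illustrations, not Hayden AI's actual API):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    plate: str
    confidence: float
    in_bus_lane: bool

def build_evidence_packets(detections, min_confidence=0.9):
    """Forward only high-confidence bus-lane violations to the parking
    authority; uncertain or legal detections are dropped on the device."""
    return [
        {"plate": d.plate, "confidence": d.confidence}
        for d in detections
        if d.in_bus_lane and d.confidence >= min_confidence
    ]

frames = [
    Detection("ABC123", 0.97, True),   # clear violation
    Detection("XYZ789", 0.55, True),   # too uncertain, withheld
    Detection("DEF456", 0.99, False),  # legally parked
]
print(build_evidence_packets(frames))  # only ABC123 is forwarded
```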

The effectiveness of AI is entirely dependent on how you use it. As former Accenture chief technology strategist Bob Suh notes in the Harvard Business Review, problem-solving works best when AI is combined with human ingenuity. “In other words, it’s not about the technology itself; it’s about how you use the technology that matters. AI is not a panacea for all ills. Still, when incorporated into a company’s problem-solving repertoire, it can be an enormously powerful tool,” concludes Terence Mauri, founder of Hack Future Lab, a global think tank.

Split the Responsibility

Huda Khan, an academic researcher from the University of Aberdeen, believes that AI is critical to international companies’ success, especially in an era of disruption. Khan is calling on international marketing academics to explore such transformative approaches and how they inform competitive business practices, as are fellow marketing academics Michael Christofi from the Cyprus University of Technology; Richard Lee from the University of South Australia; Viswanathan Kumar from St. John University; and Kelly Hewett from the University of Tennessee. “AI is very good at automating repetitive tasks, such as customer service or data entry. But it’s not so good at creative tasks, such as developing new products,” Khan says. “So, businesses need to think about what tasks they want to automate and what tasks they want to keep for humans.”

Khan believes that businesses need to split the responsibility between AI and humans. For example, Hayden AI’s system is highly accurate and only sends evidence packages of potential violations for human review. Once the data is sent, human analysis is still needed to make the final decision. But with much less work to do, government agencies can devote their employees to tasks that can’t be automated.
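A minimal sketch of this division of labor, with an invented confidence threshold: the model only routes work, and a person makes every final call on what gets ticketed.

```python
def triage(violations, review_threshold=0.8):
    """Split AI detections into auto-dismissed noise and a human review
    queue. The AI never issues a ticket; it only decides which cases
    are worth an analyst's time."""
    review, dismissed = [], []
    for plate, confidence in violations:
        (review if confidence >= review_threshold else dismissed).append(plate)
    return review, dismissed

# Hypothetical detections: (license plate, model confidence).
review, dismissed = triage([("AAA111", 0.95), ("BBB222", 0.40), ("CCC333", 0.85)])
print(review)     # ['AAA111', 'CCC333'] go to a human analyst
print(dismissed)  # ['BBB222'] never reaches a person
```

The threshold is the tunable seam between machine and human work: raise it and analysts see fewer, cleaner cases; lower it and fewer potential violations slip through unreviewed.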

Backed up by efficient, effective data analysis, human problem-solving can be more innovative than ever. Like all business transitions, developing the best system for combining human and AI work might take some experimentation, but it can significantly impact future success. For example, if a company is trying to improve its customer service, it can use AI startup Satisfi’s natural language processing technology. This technology can understand a customer’s question and find the best answer from a company’s knowledge base. Likewise, if a company is trying to increase sales, it can use AI startup Persado’s marketing language generation technology. This technology can be used to create more effective marketing campaigns by understanding what motivates customers and then generating language that is more likely to persuade them to make a purchase.

Look at the Big Picture

A technological solution can frequently improve performance in multiple areas simultaneously. For instance, Hayden AI’s automated enforcement system doesn’t just help speed up transit by keeping bus lanes clear for buses; it also increases data security by limiting how much data is kept for parking enforcement, which allows a city to increase the efficiency of its transportation while also protecting civil liberties.

This is the case with many technological solutions. For example, an e-commerce business might adopt a better data architecture to power a personalized recommendation option and benefit from improved SEO. As a leader, you can use your big-picture view of your company to identify critical secondary benefits of technologies. Once you have the technologies in use, you can also fine-tune your system to target your most important priorities at once.

In summary, AI technology is constantly evolving, becoming more accessible and affordable for businesses of all sizes. By harnessing the power of AI, leaders can make better decisions, improve efficiency, and drive innovation. However, it’s important to remember that AI is not a silver bullet. Therefore, organizations must use AI and humans to get the best results.

Benjamin Laker


Exploring Problem Reduction in AI Techniques and Applications

In recent years, artificial intelligence (AI) has made significant advancements, allowing machines to perform tasks that were once considered exclusive to human intelligence. Yet AI systems often struggle with complex problems because of their limited capacity and resources. This is where problem reduction comes into play. Problem reduction is the process of simplifying a complex problem into a set of simpler sub-problems that can be solved individually, allowing AI systems to apply more efficient and effective solution strategies. To achieve this, the structure of the problem is analyzed and its constituent elements are identified, which requires a deep understanding of the problem domain and of the specific constraints and dependencies involved.

Once the problem is broken down, AI algorithms can be applied to each sub-problem separately, leveraging their strengths and optimizing the overall solution. The capacity to break a problem into smaller sub-problems lets AI reuse existing knowledge and solution strategies, speeding up the problem-solving process. It also allows AI systems to manage large-scale problems by distributing the computational load across multiple processors or machines.

Definition and Importance of Problem Reduction in AI

In the field of artificial intelligence, problem reduction is a fundamental concept in problem-solving. It involves breaking a complex problem down into smaller, more manageable subproblems, so that a system can apply specific problem-solving methods to each subproblem and more easily find a solution to the overall problem. This technique lets AI systems handle a wide range of problems, from simple to highly complex, while making efficient use of computational resources.

Problem reduction also promotes modularity and reusability: once a subproblem is solved, its solution can be reused for similar problems, saving time and computational resources. Furthermore, it enables AI systems to handle uncertainty and incomplete information by focusing on specific aspects of a problem. In short, problem reduction lets AI systems divide problems into manageable subproblems, apply problem-solving techniques to each one, and reuse the results, playing a vital role in advancing the capabilities of artificial intelligence systems.
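The decompose/solve/combine cycle described here can be captured in a generic skeleton. The list-summing instance below is only a toy stand-in for a real problem domain; the helper names are invented for illustration:

```python
def solve(problem, decompose, solve_base, combine):
    """Generic problem reduction: if the problem cannot be decomposed
    further, solve it directly; otherwise break it into subproblems,
    solve each recursively, and combine the partial solutions."""
    subproblems = decompose(problem)
    if subproblems is None:  # base case: no further reduction possible
        return solve_base(problem)
    return combine([solve(p, decompose, solve_base, combine)
                    for p in subproblems])

# Toy instance: summing a list by repeatedly halving it.
total = solve(
    list(range(10)),
    decompose=lambda xs: None if len(xs) <= 1
        else [xs[: len(xs) // 2], xs[len(xs) // 2:]],
    solve_base=lambda xs: sum(xs),
    combine=sum,
)
print(total)  # 45
```

The same skeleton accommodates very different domains simply by swapping in a different `decompose`, base-case solver, and `combine` step.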

Problem Reduction Techniques in AI

In artificial intelligence, problem reduction is a powerful technique used to break down complex problems into smaller, more manageable subproblems. The basic idea is to take a large problem and repeatedly break it down into smaller, more solvable problems until a base case is reached. Each subproblem is solved independently, and the solutions are combined to form a solution to the original problem. Several problem reduction techniques are commonly used in AI:

1. Subgoal decomposition: Decompose a problem into subgoals, where each subgoal represents a simpler task or problem to be solved. This allows the AI system to focus on solving one subgoal at a time, gradually working toward the solution of the main problem.
2. Means-ends analysis: Identify the difference between the current state and the goal state, then find ways to reduce that difference step by step. The AI system uses a set of operators or actions to move from the current state to the goal state, iteratively narrowing the problem until a solution is found.
3. Divide and conquer: Divide a problem into smaller, independent subproblems, solve each separately, and then combine the solutions to obtain the final solution. Divide and conquer is particularly useful for large problems because it allows for parallelization and efficient computation.
4. Macro operators: Use predefined procedures or sequences of actions that simplify the problem-solving process. These higher-level operators let the AI system treat a complex task as a single step, reducing the complexity of the problem and improving efficiency.

Overall, problem reduction techniques play a critical role in artificial intelligence, enabling AI systems to tackle complex problems by breaking them down into smaller, more manageable components. They provide a systematic and efficient approach to problem solving, enhancing the capabilities of AI systems in various domains.
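As a concrete sketch of means-ends analysis, the loop below always applies the operator that most shrinks the distance to the goal; the numeric state and the `add5`/`sub1` operators are invented purely for illustration, and the guard raises an error if no operator makes progress (the classic failure mode of this greedy strategy):

```python
def means_ends(start, goal, operators):
    """Means-ends analysis sketch: repeatedly pick the operator that
    most reduces the difference between the current state and the goal."""
    state, plan = start, []
    while state != goal:
        op = min(operators, key=lambda f: abs(goal - f(state)))
        if abs(goal - op(state)) >= abs(goal - state):
            raise RuntimeError("no operator reduces the difference")
        state = op(state)
        plan.append(op.__name__)
    return plan

def add5(x): return x + 5
def sub1(x): return x - 1

print(means_ends(0, 13, [add5, sub1]))
# ['add5', 'add5', 'add5', 'sub1', 'sub1']  (0 -> 5 -> 10 -> 15 -> 14 -> 13)
```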

Advantages of Problem Reduction Approaches

Problem reduction is a fundamental concept in artificial intelligence that offers several advantages for solving complex problems. In this approach, a complex problem is broken down into smaller, more manageable subproblems, making it easier to find a solution.

One of the main benefits of problem reduction approaches is their ability to simplify the problem-solving process. By breaking down a problem into smaller pieces, each component becomes easier to understand and analyze. This allows AI systems to focus their efforts on solving specific subproblems, increasing the efficiency and effectiveness of the overall solution.

Another advantage is flexibility. Problem reduction allows AI systems to adapt their problem-solving strategies to the specific requirements of each subproblem, switching between different methods and algorithms to optimize the solution for each one.

In addition, problem reduction approaches promote modularity and reusability. By decomposing a problem into smaller subproblems, it becomes possible to reuse solutions from previous subproblems in different contexts. This not only saves time and effort, but also facilitates knowledge transfer and sharing within the AI system.

Furthermore, problem reduction facilitates collaboration and cooperation between different AI systems or agents. By decomposing a complex problem into smaller subproblems, it becomes easier to assign specific tasks to different agents or systems, allowing them to work together toward a common goal. By exploiting these advantages, AI systems can solve complex problems more effectively and efficiently.

Challenges in Problem Reduction

Problem reduction is a fundamental concept in artificial intelligence, which involves breaking down complex problems into smaller, more manageable sub-problems. While problem reduction has proven to be an effective approach in solving a wide range of tasks, there are several challenges that researchers and developers face in its implementation.

1. Problem Complexity

One of the main challenges in problem reduction is dealing with the complexity of the original problem. Real-world problems often have many variables, dependencies, and constraints, which can make it difficult to identify the right sub-problems to focus on. Doing so requires a deep understanding of the problem domain and the ability to analyze and decompose the problem effectively. For example, in a complex optimization problem such as resource allocation in a large organization, there are multiple variables to consider, including budget constraints, resource availability, and employee preferences. Identifying the right sub-problems and defining the relationships between them can be a daunting task.

2. Sub-problem Interaction

Another challenge in problem reduction is managing the interaction between sub-problems. In some cases, solving one sub-problem may affect the solutions to others, which necessitates careful coordination to ensure that the solutions are compatible and consistent. For example, in a logistics planning problem, optimizing the route for one delivery may affect the routes for other deliveries; the sub-problems of optimizing individual routes need to be coordinated to ensure the overall efficiency and effectiveness of the logistics operation. Addressing these challenges requires advanced problem-solving techniques, such as heuristic search, constraint satisfaction, and constraint optimization, as well as a good understanding of the problem domain and close collaboration between domain experts and AI developers.
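The coordination problem described here is often cast as constraint satisfaction. The minimal backtracking sketch below, with invented delivery routes and time slots, assigns slots so that conflicting pairs (say, two routes sharing one truck) never clash:

```python
def assign_slots(deliveries, slots, conflicts):
    """Backtracking constraint satisfaction: give every delivery a time
    slot such that no conflicting pair gets the same slot. Returns the
    assignment dict, or None if no consistent assignment exists."""
    assignment = {}

    def clashes(d, slot):
        return any(
            assignment.get(other) == slot
            for pair in conflicts if d in pair
            for other in pair if other != d
        )

    def backtrack(i):
        if i == len(deliveries):
            return True
        d = deliveries[i]
        for s in slots:
            if not clashes(d, s):
                assignment[d] = s
                if backtrack(i + 1):
                    return True
                del assignment[d]  # undo and try the next slot
        return False

    return assignment if backtrack(0) else None

plan = assign_slots(
    ["north", "south", "east"],
    ["9am", "11am"],
    [{"north", "south"}, {"south", "east"}],  # pairs sharing a truck
)
print(plan)  # north and east can share 9am; south must take 11am
```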

Problem Reduction vs. Other AI Approaches

Problem reduction is a popular approach used in the field of artificial intelligence to tackle complex problems. It involves breaking down a large problem into smaller, more manageable sub-problems that can be solved individually, then combining their solutions to solve the overall problem. One key advantage of problem reduction is that it allows for a systematic and modular problem-solving process: by dividing a problem into smaller components, it becomes easier to understand and analyze each component separately, which can lead to more efficient and effective solutions.

Compared to other AI approaches, such as brute-force or heuristic search, problem reduction offers a more structured and organized approach. Brute-force methods involve exhaustively exploring all possible solutions, which can be computationally expensive and time-consuming. Heuristic search methods, on the other hand, use rules or algorithms to guide the search towards promising solutions. While these approaches can be effective in certain scenarios, they may not be as efficient as problem reduction when dealing with complex problems.

Problem reduction also has connections to other AI techniques, such as planning and constraint satisfaction. Planning involves creating a sequence of actions that lead to a desired goal state, while constraint satisfaction focuses on finding consistent assignments to variables given a set of constraints. These techniques can be integrated with problem reduction to further enhance the solving process.

Overall, problem reduction provides a systematic and modular approach to problem-solving in artificial intelligence. Despite its strengths, it may not be suitable for all types of problems; different AI approaches should be considered depending on the specific problem and its characteristics.

Applications of Problem Reduction in Various Fields

Problem reduction, a technique widely used in artificial intelligence, has found applications in various fields. This powerful approach simplifies complex problems by breaking them down into smaller, more manageable subproblems. By reducing the complexity of a problem, it becomes easier to analyze and solve.

1. Aerospace Engineering

Aerospace engineers often utilize problem reduction to tackle the challenges they face in designing and developing aircraft and spacecraft. By breaking down complex aerodynamic problems into smaller components, engineers can focus on solving each subproblem individually. This results in a more efficient design process and improved performance of aerospace vehicles.

2. Healthcare

In the field of healthcare, problem reduction plays a crucial role in diagnosis and treatment. By decomposing complex medical conditions into simpler symptoms and subproblems, doctors can better understand the underlying causes and develop effective treatment plans. Problem reduction also aids in medical research, helping scientists uncover the mechanisms behind diseases and develop new therapies.

These are just a few examples of how problem reduction is applied in various fields. By leveraging the power of artificial intelligence and problem reduction techniques, professionals can solve complex problems more efficiently and effectively.

Problem Reduction and Knowledge Representation

Problem reduction is an essential aspect of artificial intelligence, as it involves breaking down complex problems into simpler, more manageable subproblems. By reducing a problem into smaller components, it becomes easier for an AI system to solve the overall problem. In order to effectively perform problem reduction, it is important to have a proper knowledge representation. Knowledge representation encompasses the methods and techniques used to organize and store information in a way that can be easily manipulated by an artificial intelligence system.

The Role of Knowledge Representation in Problem Reduction

Knowledge representation plays a crucial role in problem reduction by providing a structured framework for representing and organizing information. It allows an AI system to capture and store relevant knowledge about the problem domain, including facts, rules, and relationships. With an appropriate knowledge representation, an AI system can effectively break down complex problems into smaller subproblems. This allows the system to focus on solving each subproblem individually, which can greatly simplify the overall problem-solving process.
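As a sketch of how represented knowledge drives problem solving, the toy forward-chaining engine below derives new facts from hand-written rules. The medical facts and rules are invented for illustration only:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: keep applying rules of the form
    (premises, conclusion) until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base: rules map observed symptoms to actions.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "order_flu_test"),
]
print(forward_chain({"fever", "cough"}, rules))
```

Each rule is a small, reusable unit of domain knowledge, which is exactly the modularity that makes the represented problem easy to break into subproblems.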

Benefits of Problem Reduction and Knowledge Representation

Problem reduction and knowledge representation offer several benefits in the field of artificial intelligence. By breaking down a problem into smaller subproblems, an AI system can effectively manage complexity and improve problem-solving efficiency. Furthermore, knowledge representation enables an AI system to reason about the problem domain and make intelligent decisions based on available information. It allows the system to handle uncertainty, make inferences, and learn from past experiences, ultimately enhancing its overall intelligence. Overall, problem reduction and knowledge representation are fundamental concepts in the field of artificial intelligence. They provide a framework for breaking down complex problems and organizing information, ultimately enabling AI systems to solve problems more efficiently and exhibit intelligent behavior.

Historical Development of Problem Reduction in AI

The field of Artificial Intelligence (AI) has always been focused on finding efficient ways to solve complex problems, and one approach that has been extensively studied and developed over the years is problem reduction. It can be traced back to the early years of the field, when researchers realized that solving complex problems required breaking them down into smaller, more manageable sub-problems.

One of the earliest examples was the “General Problem Solver” (GPS), developed in the 1950s by Herbert A. Simon and Allen Newell. GPS was an early attempt at creating a computer program that could solve a wide variety of problems by breaking them down into simpler sub-problems.

A key breakthrough came in the 1970s with the STRIPS (Stanford Research Institute Problem Solver) planning system. STRIPS introduced the concept of decomposing a problem into a series of states and actions, allowing AI systems to search for a solution step by step. In the following decades, researchers further advanced problem reduction by incorporating concepts from logic and mathematical optimization, leading to algorithms such as A* search and constraint satisfaction that are widely used in AI applications today.

Today, problem reduction techniques continue to play a crucial role in AI, enabling systems to tackle complex real-world problems by breaking them down into smaller, more manageable pieces. This approach allows for more efficient problem-solving and has contributed to the development of AI applications in fields including robotics, natural language processing, and data analysis.
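The STRIPS idea of states and actions can be sketched in a few lines: states are sets of facts, each action has preconditions and effects, and a search procedure looks for an action sequence reaching the goal. The toy domain below (a robot fetching a box) is an invented illustration, not the actual STRIPS implementation.

```python
from collections import deque

# A hedged, minimal sketch of STRIPS-style planning: states are sets of
# facts, actions have preconditions and add/delete effects, and we
# breadth-first search for a sequence of actions reaching the goal.
# The domain is invented for illustration.

ACTIONS = [
    # (name, preconditions, facts added, facts removed)
    ("go_to_box",  {"at_door"}, {"at_box"},      {"at_door"}),
    ("pick_box",   {"at_box"},  {"holding_box"}, set()),
    ("go_to_door", {"at_box"},  {"at_door"},     {"at_box"}),
]

def plan(start, goal):
    """Breadth-first search over states; returns a list of action names."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # goal facts all hold in this state
            return steps
        for name, pre, add, rem in ACTIONS:
            if pre <= state:              # action is applicable
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at_door"}, {"holding_box", "at_door"}))
```

Each applicable action reduces the planning problem to a smaller one from the successor state, which is the step-by-step search the text describes.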

Problem Reduction and Expert Systems

Problem reduction is a fundamental concept in artificial intelligence (AI) that plays a crucial role in expert systems. Expert systems are AI systems that emulate the decision-making capabilities of human experts in specific domains, and they rely on problem reduction techniques to analyze complex problems and provide expert advice or solutions.

The process involves breaking down a complex problem into smaller, more manageable sub-problems, allowing the expert system to focus on solving each sub-problem individually. This eliminates redundancy and unnecessary computation, making the problem-solving task more tractable.

In expert systems, problem reduction is achieved through knowledge representation techniques such as rules and facts. The system stores and manipulates knowledge about its specific domain, then uses that knowledge to identify relevant sub-problems and apply problem reduction techniques to solve them.

By breaking down complex problems and solving them individually, expert systems can offer targeted, specialized advice tailored to the needs of the user. This is particularly useful in domains where human expertise is scarce or expensive, making expert systems valuable tools in various domains.
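The rules-and-facts representation described above can be sketched as a tiny forward-chaining engine: each rule is a small, separately solvable step, and the system fires rules until no new fact can be derived. The rules below are invented for illustration and are not medical advice or a real expert-system knowledge base.

```python
# A hedged sketch of a rule-based expert system: forward chaining over
# invented rules and facts.

RULES = [
    # (if all these facts hold, then conclude this fact)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are satisfied until no new
    fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough"}, RULES))
```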

Problem Reduction and Machine Learning

Problem reduction is a fundamental concept in artificial intelligence that helps simplify complex problems and make them more manageable. It involves breaking down a problem into smaller, more solvable subproblems by identifying and eliminating irrelevant information. Machine learning, on the other hand, is a subfield of artificial intelligence that focuses on developing algorithms and models that allow computers to learn from and make predictions or decisions based on data. It involves training a computer system to automatically learn and improve from experience without being explicitly programmed.

Combining Problem Reduction and Machine Learning

Problem reduction can be greatly enhanced when combined with machine learning techniques. By using machine learning algorithms, a computer can analyze large sets of data and identify patterns or correlations that may not be obvious to human intelligence. Machine learning can help in the process of problem reduction by automatically identifying and eliminating irrelevant features or variables that may not contribute to solving the problem at hand. This can significantly reduce the complexity of the problem and make it easier to solve.
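One simple form of the feature elimination described above is dropping columns that never vary, since a constant feature cannot help a learner distinguish examples. The sketch below is a minimal, hand-rolled version of that idea (real pipelines would typically use a library feature selector); the dataset and column names are invented.

```python
# A hedged sketch of automatic irrelevant-feature removal: drop columns
# with zero variance so later learning works on a smaller problem.

def drop_constant_features(rows, names):
    """Keep only columns that take more than one distinct value."""
    keep = [i for i in range(len(names))
            if len({row[i] for row in rows}) > 1]
    reduced = [[row[i] for i in keep] for row in rows]
    return reduced, [names[i] for i in keep]

rows = [[1.0, 0.0, 5.2],
        [2.0, 0.0, 4.8],
        [3.0, 0.0, 5.0]]
reduced, kept = drop_constant_features(rows, ["age", "always_zero", "score"])
print(kept)  # the constant "always_zero" column is eliminated
```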

Benefits of Problem Reduction and Machine Learning

  • Improved problem-solving efficiency: By breaking down complex problems into smaller, more manageable subproblems, problem reduction enables more efficient problem-solving. Machine learning further enhances this process by automating the identification and elimination of irrelevant information.
  • Better decision-making: Machine learning algorithms can learn from past data and make predictions or decisions based on this knowledge. By combining problem reduction and machine learning, artificial intelligence systems can make more accurate and informed decisions.
  • Enhanced scalability: Problem reduction and machine learning techniques can be applied to various domains and scales. They can be used to solve problems ranging from small-scale, specific tasks to large-scale, complex problems.

In conclusion, problem reduction and machine learning are complementary approaches in artificial intelligence that can greatly enhance the efficiency and effectiveness of problem-solving. By combining these techniques, we can overcome the challenges posed by complex problems and leverage the power of artificial intelligence in various domains.

Problem Reduction and Natural Language Processing

Problem Reduction is a fundamental concept in the field of artificial intelligence (AI). It is the process of breaking down a complex problem into smaller, more manageable sub-problems, which can then be solved independently. Natural Language Processing (NLP) is an area of AI that focuses on the interaction between computers and human language. By combining problem reduction techniques with natural language processing, AI systems can effectively understand and process human language to solve complex problems. NLP allows AI systems to analyze and interpret human text, enabling them to extract meaning, understand context, and generate appropriate responses.

Benefits of Problem Reduction in NLP

Problem reduction in NLP offers several benefits. Firstly, it enables AI systems to handle a wide range of natural language inputs, including different sentence structures, vocabulary, and grammar. This flexibility allows AI systems to understand and respond to user queries and commands effectively. Secondly, problem reduction in NLP can help overcome the ambiguity and uncertainty inherent in human language. By breaking down complex sentences into smaller sub-problems, AI systems can analyze each component individually and make more accurate interpretations. This leads to improved accuracy and reliability in language understanding and processing.
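The clause-by-clause analysis described above can be sketched as a toy pipeline: split a compound sentence into clauses, score each clause independently, then combine the results. The keyword lists here are tiny invented stand-ins for a real sentiment model, not a production NLP approach.

```python
import re

# A hedged sketch of sentence-level problem reduction for sentiment:
# invented positive/negative word lists stand in for a real model.
POSITIVE = {"great", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "confusing"}

def clause_scores(sentence):
    # Sub-problem 1: split the sentence into clauses at "but"/"and"/commas.
    clauses = re.split(r",|\bbut\b|\band\b", sentence.lower())
    # Sub-problem 2: score each clause on its own.
    scores = []
    for clause in clauses:
        words = clause.split()
        scores.append(sum(w in POSITIVE for w in words)
                      - sum(w in NEGATIVE for w in words))
    # Combine: overall sentiment is the sum of the clause scores.
    return scores, sum(scores)

per_clause, overall = clause_scores("The interface is great but the search is slow")
print(per_clause, overall)
```

Scoring each clause separately keeps the mixed sentiment visible, which a single whole-sentence score would hide.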

Applications of Problem Reduction and NLP

The combination of problem reduction and NLP has numerous applications across various domains. In customer service, AI-powered chatbots can use problem reduction techniques, a form of problem solving in artificial intelligence, to understand and respond to customer inquiries, providing quick and accurate assistance. Furthermore, problem reduction in NLP can be applied in information retrieval systems, allowing users to search and access relevant information more effectively. By breaking down user queries into sub-problems, AI systems can provide more precise and relevant search results. In addition, problem reduction and NLP can be used in machine translation systems to improve accuracy and fluency in translating between different languages. By analyzing and reducing complex sentences, AI systems can generate more accurate and coherent translations. In conclusion, the combination of problem reduction and natural language processing plays a critical role in advancing artificial intelligence. It enables AI systems to effectively understand and process human language, leading to improved accuracy and efficiency in solving complex problems.

Problem Reduction and Robotics

Intelligence in robotics is a field that combines artificial intelligence and problem reduction techniques to enhance the capabilities of robots. Problem reduction, a fundamental approach in artificial intelligence, plays a crucial role in improving the problem-solving abilities of robots. By defining the problem in a structured manner, robots can apply problem reduction techniques to break down complex tasks into smaller, more manageable sub-problems. This allows robots to efficiently analyze and solve problems by reducing them to simpler components, narrowing down the search space and expediting the decision-making process.

Furthermore, problem reduction enables robots to effectively interact with their environment and adapt to changing circumstances. Robots can identify obstacles, constraints, and other factors that affect their task performance, and then use problem reduction to tackle these challenges systematically. This approach allows robots to handle uncertainties and make informed decisions to complete their tasks successfully.

Moreover, problem reduction techniques can optimize the use of available resources, such as time, energy, and computational power, by focusing on the most critical aspects of a problem. By reducing unnecessary complexity, robots can streamline their operations and achieve optimal performance.

In summary, problem reduction, when combined with artificial intelligence, empowers robots with enhanced problem-solving capabilities and enables them to adapt to changing environments. By breaking down complex tasks, identifying obstacles, and optimizing resource utilization, robots can efficiently tackle a wide range of challenges and contribute to various fields, from manufacturing and logistics to healthcare and exploration.

Problem Reduction and Computer Vision

In artificial intelligence, problem reduction is a fundamental concept that is often used in computer vision. Computer vision refers to the ability of computers to interpret and understand visual information, such as images and videos. Problem reduction here involves breaking down complex visual tasks into smaller, more manageable problems that can be solved using algorithms and computational methods, allowing computers to analyze and process visual data more efficiently and accurately.

One example is object recognition: identifying and classifying the objects in an image or video. By breaking this task into smaller subproblems, such as detecting edges and shapes, computers can more easily recognize and categorize objects.

Another example is image segmentation: dividing an image into meaningful, distinct regions. By decomposing this task into subproblems such as color clustering and edge detection, computers can accurately segment an image, which is useful in applications like medical imaging and video surveillance.

Problem reduction is essential in computer vision because it lets computers solve each part of a complex visual task individually and then combine the results, enhancing their ability to understand and interact with the visual world around them.
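The color-clustering sub-problem mentioned above can be sketched with a toy one-dimensional k-means over pixel intensities: each pixel is assigned to the nearest cluster center, and centers are re-estimated until stable. This is a simplified stand-in for real segmentation, and the pixel values are invented.

```python
# A hedged sketch of intensity clustering for image segmentation:
# 1-D k-means with two clusters (e.g. dark background vs. bright object).

def kmeans_1d(values, centers=(0.0, 255.0), iters=10):
    """Assign each value to its nearest center, then recompute centers."""
    c0, c1 = centers
    for _ in range(iters):
        group0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        group1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if group0:
            c0 = sum(group0) / len(group0)
        if group1:
            c1 = sum(group1) / len(group1)
    # Label each pixel with the region (cluster) it belongs to.
    return [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]

pixels = [12, 15, 10, 240, 250, 14, 245]   # dark background, bright object
print(kmeans_1d(pixels))
```

A real segmenter would cluster in full color space and add spatial constraints, but the decomposition (cluster first, then refine regions) is the same.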

Problem Reduction and Data Science

Data science is a rapidly growing field in artificial intelligence that uses problem reduction techniques to extract insights and knowledge from large datasets. Problem reduction, a fundamental concept in artificial intelligence, involves breaking down complex problems into simpler, more manageable sub-problems. Data scientists use problem reduction methods to analyze and interpret data in order to solve real-world problems. By breaking down a problem into smaller components, data scientists can develop algorithms and models that can effectively handle large amounts of data, uncover patterns, and make predictions.

Benefits of Problem Reduction in Data Science

Problem reduction is an essential tool in data science because it allows data scientists to tackle complex problems that would otherwise be too difficult to handle. By breaking down a problem into smaller, more manageable parts, data scientists can focus on solving each part individually, which ultimately leads to a more efficient and effective solution. Problem reduction also enables data scientists to discover hidden insights and patterns within large datasets. By breaking down a problem into its constituent parts, data scientists can analyze each part separately and then combine the results to gain a holistic understanding of the problem.
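The split-analyze-combine pattern described above can be sketched with a simple example: the mean of a dataset computed from per-chunk partial results, the same shape of reduction used at scale in map-reduce style systems. The data and chunk size are invented for illustration.

```python
# A hedged sketch of problem reduction in data analysis: compute partial
# (sum, count) pairs per chunk, then merge them into the overall mean.

def chunked(data, size):
    for i in range(0, len(data), size):
        yield data[i:i + size]

def mean_by_reduction(data, chunk_size=3):
    # Solve each sub-problem independently: (sum, count) per chunk.
    partials = [(sum(c), len(c)) for c in chunked(data, chunk_size)]
    # Combine the partial results into the overall answer.
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

print(mean_by_reduction([2, 4, 6, 8, 10, 12, 14]))
```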

Challenges in Problem Reduction for Data Science

While problem reduction is a powerful approach in data science, it is not without its challenges. One of the main challenges is determining the optimal decomposition of a problem into sub-problems. This requires expertise in both the domain of the problem being solved and the tools and techniques used in data science.

Another challenge is dealing with the inherent complexity of real-world problems. Many problems encountered in data science, such as predicting customer behavior or analyzing social media sentiment, involve a high degree of complexity and uncertainty. Data scientists must carefully choose how to decompose the problem and ensure that the sub-problems are solvable with the available data and resources.

In conclusion, problem reduction plays a crucial role in data science by enabling data scientists to break down complex problems into smaller, more manageable sub-problems. This approach allows for better analysis, interpretation, and understanding of large datasets, leading to more effective solutions and valuable insights.

Problem Reduction and Optimization

In the field of artificial intelligence, problem reduction is a key concept that involves simplifying complex problems into smaller, more manageable ones. This approach allows AI systems to analyze and solve problems by breaking them down into smaller, interconnected subproblems. Problem reduction works by identifying and eliminating redundant or irrelevant information, focusing on the essential aspects of a problem. This process helps AI systems avoid unnecessary computations and improve problem-solving efficiency. Optimization, on the other hand, aims to find the best solution among a set of feasible alternatives. It involves evaluating and comparing different solutions based on specific criteria or constraints. In the context of problem reduction, optimization techniques can be applied to improve the efficiency and effectiveness of the solution space exploration. By combining problem reduction and optimization techniques, artificial intelligence systems can achieve faster and more accurate problem-solving capabilities. The reduced problem size allows for more efficient exploration of the solution space, while optimization techniques help identify the most optimal solution within that space. In conclusion, problem reduction and optimization are essential components of artificial intelligence systems. They enable the analysis and solution of complex problems by simplifying them and improving the efficiency of finding the best possible solution.
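A classic case where reduction and optimization meet is the 0/1 knapsack problem: the best value achievable with the first *i* items and capacity *c* reduces to two smaller subproblems (skip item *i*, or take it), and the optimizer picks the better of the two. The item weights and values below are invented for illustration.

```python
from functools import lru_cache

# A hedged sketch of problem reduction plus optimization: 0/1 knapsack
# solved by decomposing into memoized subproblems.

ITEMS = [(2, 3), (3, 4), (4, 5)]   # (weight, value) pairs, invented

@lru_cache(maxsize=None)
def best(i, capacity):
    """Best total value using items i.. with the given remaining capacity."""
    if i == len(ITEMS) or capacity == 0:
        return 0
    weight, value = ITEMS[i]
    skip = best(i + 1, capacity)                       # subproblem 1
    take = 0
    if weight <= capacity:
        take = value + best(i + 1, capacity - weight)  # subproblem 2
    return max(skip, take)                             # optimize over choices

print(best(0, 5))
```

Memoization (`lru_cache`) is what makes the reduced solution space cheap to explore: each subproblem is solved once and reused.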

Problem Reduction and Decision Making

Problem reduction is a fundamental concept in artificial intelligence that plays a crucial role in decision making. It refers to the process of breaking down a complex problem into smaller, more manageable sub-problems. By decomposing a problem into smaller parts, it becomes easier to solve and analyze each component individually, leading to a more efficient decision-making process.

Benefits of Problem Reduction

There are several benefits to employing problem reduction techniques in decision making. Firstly, it allows for a more systematic and structured approach to problem-solving. By breaking down the problem into smaller pieces, the complexity of the overall problem is reduced, making it easier to identify potential solutions. Secondly, problem reduction enables better utilization of resources. By breaking down a problem into smaller parts, it becomes possible to assign specific resources and expertise to each component, ensuring that the most appropriate resources are allocated to each sub-problem. This leads to a more efficient allocation of resources and increases the chances of finding an optimal solution.

Role of Problem Reduction in Decision Making

Problem reduction is an essential component of the decision-making process in artificial intelligence. By decomposing a complex problem, it becomes easier to analyze and evaluate potential solutions. Each sub-problem can be tackled individually, allowing for a more focused examination of possible outcomes and their implications. Furthermore, problem reduction facilitates the identification of relevant information and variables that are critical to making an informed decision. By breaking down the problem, decision makers can identify the key variables and factors that need to be considered. This helps in prioritizing information and focusing on the most critical aspects of the problem. In conclusion, problem reduction plays a vital role in decision making in artificial intelligence. By breaking down complex problems into smaller, more manageable sub-problems, it allows for a more systematic and efficient approach to decision making. Problem reduction enables better allocation of resources, facilitates the analysis of potential solutions, and helps in identifying critical variables for informed decision making.

Problem Reduction and Game Theory

In artificial intelligence (AI), problem reduction, also known as problem decomposition, involves breaking down complex problems into more easily solvable sub-problems. This approach allows AI algorithms to efficiently solve complex tasks by dividing them into smaller, manageable components.

Game theory, on the other hand, is a branch of mathematics and economics that studies strategic decision-making in competitive situations. It provides a framework for analyzing the interactions between multiple players or entities and predicting their behavior and outcomes, and it is employed in various fields, including economics, politics, and biology, to model and understand complex systems.

When problem reduction and game theory come together, they create a powerful toolset for AI researchers and practitioners. Game theory can model complex decision-making scenarios, especially in multi-agent systems where multiple AI agents interact and make decisions; once the strategies and dynamics of the game are understood, problem reduction techniques can break the decision-making problem into smaller, manageable sub-problems. By decomposing the problem and modeling the interactions, AI algorithms can make more informed and optimal decisions, improving efficiency and accuracy while handling complex real-world scenarios effectively.
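The minimax algorithm is problem reduction applied to games: the value of a position reduces to the values of its child positions, alternating between a maximizing and a minimizing player. The toy game tree below is invented; leaves hold payoffs for the maximizing player.

```python
# A hedged sketch of minimax on an invented two-ply game tree.

TREE = {
    "root": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["B1", "B2"],
}
PAYOFFS = {"A1": 3, "A2": 5, "B1": 2, "B2": 9}

def minimax(node, maximizing=True):
    if node in PAYOFFS:                  # base case: a solved subproblem
        return PAYOFFS[node]
    # Reduce: the value of this node comes from its children's values.
    values = [minimax(child, not maximizing) for child in TREE[node]]
    return max(values) if maximizing else min(values)

print(minimax("root"))
```

The maximizer avoids branch B (where the minimizer could force a payoff of 2) and settles for the guaranteed 3 in branch A.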

In conclusion, problem reduction and game theory are two complementary concepts that enhance the capabilities of artificial intelligence systems. By combining problem reduction techniques with game theory models, AI algorithms can efficiently solve complex decision-making problems and achieve optimal outcomes.

Ethical Considerations in Problem Reduction

As artificial intelligence continues to advance, the use of problem reduction algorithms poses ethical considerations that must be carefully addressed. While breaking complex problems into smaller, more manageable subproblems can make problem-solving more efficient, it also raises several concerns.

One of the main dilemmas is the potential for biased problem reduction. If the algorithms are not properly designed and trained, they may exacerbate pre-existing biases in the data or introduce new ones. This can result in discriminatory or unfair outcomes, particularly in areas such as criminal justice or healthcare, where decisions based on these algorithms can have significant impacts on individuals' lives.

Another concern is transparency and accountability. If problem reduction algorithms are proprietary or their inner workings are opaque, it becomes difficult to assess their fairness and accuracy. Lack of transparency also raises the possibility of algorithmic manipulation or misuse by those with access to them, without any opportunity for independent scrutiny.

The use of problem reduction algorithms can also raise privacy concerns. These algorithms often require access to large amounts of data, including personal information, in order to break down complex problems effectively. This raises questions about how that data is collected, stored, and used, and whether individuals' privacy rights are being respected; adequate safeguards must be in place to protect privacy and prevent unauthorized access or misuse of data.

Addressing these considerations requires a multi-faceted approach: designing problem reduction algorithms that are fair, transparent, and accountable; training them on unbiased data; establishing guidelines and regulations for their use in sensitive areas such as criminal justice and healthcare; and prioritizing data privacy. By addressing issues of bias, transparency, accountability, and privacy, problem reduction algorithms can be used responsibly and ethically to benefit society.

Future Directions of Problem Reduction in AI

As the field of artificial intelligence continues to grow and evolve, there are several exciting future directions for problem reduction techniques. One area of interest is the development of more advanced algorithms and methodologies for problem reduction. Currently, most problem reduction approaches rely on heuristic-based methods, which can be limited in their ability to solve complex problems. Future research may explore new algorithmic techniques, such as machine learning and deep learning, to enhance problem reduction capabilities. Another direction for future development is the integration of problem reduction techniques with other AI methodologies. Problem reduction can be combined with other approaches, such as search algorithms or probabilistic reasoning, to create more robust and efficient AI systems. This integration could lead to improved problem-solving capabilities and overall performance. Additionally, future research may focus on the scalability and efficiency of problem reduction techniques. As AI applications become increasingly complex and data-intensive, it is essential to develop problem reduction methods that can handle large-scale problems efficiently. This could involve the use of distributed computing or parallel processing techniques. Lastly, the future of problem reduction in AI may involve addressing the ethical and societal implications of using AI systems. As AI technologies become more integrated into everyday life, it is essential to consider the potential risks and biases associated with problem reduction methods. Future research may explore ways to mitigate these risks and develop ethical frameworks for the use of problem reduction in AI.

Question-Answer:

What is problem reduction in artificial intelligence?

Problem reduction in artificial intelligence refers to the process of breaking down complex problems into smaller, more manageable subproblems, in order to find a solution. This approach allows AI systems to tackle large, difficult problems by dividing them into smaller pieces that can be solved individually.

How does problem reduction work?

Problem reduction works by decomposing a complex problem into smaller subproblems, which can then be solved individually or in a specific order. The solutions to the subproblems are then combined to obtain a solution to the overall problem. This approach simplifies the problem-solving process and allows AI systems to efficiently find solutions to complex problems.
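The decompose-solve-combine cycle described above can be sketched with a classic divide-and-conquer algorithm, merge sort: the list is split into halves (smaller subproblems), each half is solved recursively, and the two solutions are merged into a solution for the whole.

```python
# A hedged sketch of problem reduction as divide and conquer: merge sort.

def merge_sort(items):
    if len(items) <= 1:               # subproblem small enough to solve directly
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve each subproblem independently
    right = merge_sort(items[mid:])
    # Combine the sub-solutions into the overall solution.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))
```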

What are the advantages of problem reduction in AI?

Problem reduction offers several advantages in AI. It allows for the efficient decomposition of complex problems, making them easier to solve. It also enables AI systems to solve larger problems by dividing them into smaller, more manageable subproblems. Additionally, problem reduction helps in identifying redundant or irrelevant information, improving the effectiveness of the problem-solving process.

Can problem reduction be applied to all types of problems?

While problem reduction is a powerful technique, it may not be applicable to all types of problems. Some problems may be inherently hard to decompose or may not benefit from the decomposition process. In such cases, alternative problem-solving approaches may be more suitable. However, problem reduction is a widely used and effective technique for many types of problems in artificial intelligence.

Are there any limitations of problem reduction in AI?

Problem reduction does have some limitations. In some cases, the decomposition of a problem may not be straightforward or may introduce additional complexity. There is also the issue of combining the solutions to subproblems in an optimal way to obtain the overall solution. Additionally, problem reduction may not be suitable for problems that require a holistic approach or have interdependencies between subproblems. However, these limitations can often be overcome by carefully designing the problem decomposition and solution integration processes.

What is problem reduction in AI?

Problem reduction in AI refers to the process of simplifying complex problems into smaller, more manageable components to streamline computation and enhance efficiency in artificial intelligence systems.

How do problem reduction algorithms work in AI?

Problem reduction algorithms in AI break down intricate problems by identifying patterns, dependencies, and relationships within the data to reduce the overall complexity of the task at hand.

What are some common techniques used for problem reduction in AI?

Techniques such as divide and conquer, abstraction, heuristic methods, and constraint satisfaction are commonly employed for problem reduction in AI to simplify computation and improve performance.
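One of the techniques named above, constraint satisfaction, can be sketched as a small backtracking search: color a tiny (invented) map so that no two neighboring regions share a color, extending a partial assignment one region at a time and backtracking when a constraint fails.

```python
# A hedged sketch of constraint satisfaction: backtracking map coloring
# on an invented three-region map.

NEIGHBORS = {
    "WA": ["NT", "SA"],
    "NT": ["WA", "SA"],
    "SA": ["WA", "NT"],
}
COLORS = ["red", "green", "blue"]

def color_map(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(NEIGHBORS):   # every region colored
        return assignment
    region = next(r for r in NEIGHBORS if r not in assignment)
    for color in COLORS:
        # Constraint: no neighbor may already have this color.
        if all(assignment.get(n) != color for n in NEIGHBORS[region]):
            result = color_map({**assignment, region: color})
            if result:
                return result
    return None   # dead end: backtrack

solution = color_map()
print(solution)
```

Each recursive call reduces the full coloring problem to a smaller one over the remaining regions, which is exactly the problem-reduction pattern the answer describes.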

How does problem reduction benefit AI systems?

Problem reduction helps AI systems by optimizing computational resources, reducing processing time, increasing accuracy in decision-making, and enhancing overall system efficiency.

Can problem reduction be applied in computer vision in AI?

Yes, problem reduction techniques can be effectively applied in computer vision in AI to break down complex visual recognition tasks into smaller components for better analysis and understanding.

What role does problem reduction play in improving AI performance?

Problem reduction plays a crucial role in enhancing AI performance by simplifying tasks, minimizing errors, optimizing resource utilization, and facilitating faster decision-making processes.

How do problem reduction techniques contribute to the development of AI applications?

Problem reduction techniques contribute to the development of AI applications by enabling more efficient algorithms, better problem-solving capabilities, enhanced scalability, and improved adaptability to new data inputs.




OpenAI and Meta Close to Unleashing Super Smart AI

The race to develop artificial intelligence (AI) that can mimic human reasoning and problem-solving skills is heating up, with tech giants OpenAI and Meta at the forefront. These companies are on the cusp of releasing AI models that could revolutionize the way we interact with technology, signaling a leap towards achieving artificial general intelligence (AGI).

A New Era of AI on the Horizon

OpenAI's Chief Operating Officer, Brad Lightcap, recently shared insights with The Financial Times about the upcoming version of GPT, OpenAI's renowned AI model. Lightcap's revelations suggest significant advancements in the AI's ability to tackle "hard problems," such as reasoning.

"I think we're just starting to scratch the surface on the ability that these models have to reason," Lightcap explained, indicating a breakthrough in AI's cognitive capabilities. Similarly, Meta is not far behind in this technological race.

The company's Vice-President of AI Research, Joelle Pineau, hinted at the forthcoming Llama 3 model, which is anticipated to possess abilities like talking, reasoning, planning, and even having memory. These developments are pivotal, as they represent steps toward achieving AGI, a milestone that both Meta and OpenAI have set their sights on.

The Trillion-Dollar Dream and Safety Concerns

The quest for AGI is not just a scientific endeavor but a potentially lucrative industry. John Carmack, a former Meta executive and a pioneer in virtual reality, has labeled AGI as the "big brass ring" of AI, predicting it to become a trillion-dollar industry by the 2030s.

AGI, in its simplest definition, is AI that can perform at or above human levels across a broad spectrum of tasks. However, this ambitious pursuit raises substantial safety concerns. Prominent figures in the AI research community, including Yoshua Bengio and Geoffrey Hinton, have voiced apprehensions about the risks of surpassing human intelligence with technology.

Elon Musk, known for his cautionary stance on AI, has estimated that AI will outsmart humans within two years, with the "total amount of sentient compute" surpassing human capabilities in five years. These advancements and warnings paint a picture of a future where AI could dramatically transform society.

While the potential benefits are immense, ranging from solving complex global challenges to enhancing everyday life, the ethical and safety considerations are equally significant. As we stand on the brink of a new era in AI, the balance between innovation and responsibility has never been more crucial.

The development of AI models capable of human-like reasoning and problem-solving marks a turning point in the journey towards artificial general intelligence. With the promise of revolutionizing industries and the potential to unlock unprecedented economic value, the race towards AGI is undoubtedly one of the most exciting and consequential technological endeavors of our time.

Yet, it is imperative that we navigate this path with caution, mindfulness, and a commitment to safeguarding humanity's best interests.


The post “OpenAI and Meta Close to Unleashing Super Smart AI” appeared first on Financial World.



Editorial: Artificial intelligence and machine learning in pediatric surgery


  • 1 Department of Surgery, Division of Pediatric Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
  • 2 Department of Neonatology, Beatrix Children’s Hospital, University Medical Center Groningen, University of Groningen, Groningen, Netherlands

Editorial on the Research Topic: Artificial intelligence and machine learning in pediatric surgery

1 Introduction

Cutting-edge technologies are leading to a profound transformation of healthcare (1, 2). Among the most promising advancements is the integration of artificial intelligence (AI) into the delicate and critical domain of pediatrics (3–5). We find ourselves at the threshold of a new frontier, envisioning how AI might revolutionize every step of the pediatric patient journey. In this Research Topic, we embark on an exploration of how AI may shape the future of pediatrics and pediatric surgery in particular.

2 Artificial intelligence and machine learning in pediatrics

The term “artificial intelligence” was coined by John McCarthy in 1955, defining it as “the science and engineering to make intelligent machines” (6). Over the years, AI has evolved into a vast field of computer science, leveraging technologies like machine learning to perform tasks that were once thought to require human intelligence, such as problem-solving, pattern recognition, and decision-making (7). In pediatrics, where healthcare providers often face intricate tasks demanding advanced human intelligence, AI has emerged as a transformative ally. The most recent technologies brought forward by AI can provide valuable support by analyzing extensive patient data and offering predictive insights which can be incorporated into early warning systems (8). Moreover, AI holds the potential to assist medical professionals in making precise diagnoses and suggesting personalized treatment recommendations (9–11). Beyond this, AI's capabilities extend into the operating theatre, where it can provide real-time information, robotic assistance, and procedural guidance, further advancing the field of pediatric surgery (12).

3 AI throughout the pediatric patient journey

This Research Topic delves into AI's potential future role in pediatrics, stretching even to the prenatal phase, with Lin et al. exploring AI's potential in the detection of genomic mutations in congenital surgical diseases. By navigating through a number of innovative deep learning models that identify and prioritize variations from big genomic data, they showed that AI can help to detect and understand the potential impact of mutations on disease development. This proactive approach enables timely interventions, including preventive or corrective measures before or shortly after birth.

As we progress through the patient journey, AI's influence expands into the childbirth process, assisting healthcare professionals in monitoring and optimizing maternal and neonatal care. For instance, AI can play a pivotal role in analyzing progression data to predict the necessity of procedures like caesarean sections (13, 14). Post-birth, high-risk neonates undergo a series of diagnostic tests, including imaging scans and laboratory tests. Ongoing research into computer vision algorithms, utilizing convolutional neural networks for the analysis of medical images, holds the promise of more accurate diagnoses (15, 16). Future developments may even witness integration of sophisticated multi-modal algorithms, combining diverse data sources for highly precise predictions of specific medical conditions (17).

Where preventative measures fall short and critical diseases manifest, AI can step in to facilitate informed decision-making. An interesting example lies in the use of Behavioral Artificial Intelligence Technology, showcased in the study by Van Varsseveld et al. illustrating its potential in supporting physicians with end-of-life decision-making for preterm infants with surgical necrotizing enterocolitis.

Within the operating theatre, AI's potential in guiding surgical procedures becomes increasingly evident. While technologies like the use of indocyanine green fluorescence have proven successful in open surgery (Esposito et al.), envisioning an AI-driven iteration might involve the integration of augmented reality (AR) technologies (18, 19). These would enable the visualization of crucial landmarks and surgical paths, providing invaluable guidance to surgeons. Another notable example highlighted in this Research Topic involves machine learning algorithms distinguishing between ventral and dorsal roots during selective dorsal rhizotomy using electro-neurophysiological characteristics (Jiang et al.). Furthermore, robotic-assisted surgery is not untouched by AI's transformative impact, offering surgeons greater precision, facilitating minimally invasive surgeries, and contributing to reduced incisions, pain, and faster recovery times for pediatric patients (20–22). Natural language processing might allow for automatic surgery reporting, streamlining documentation by extracting key information from the surgical procedure and generating detailed reports (23).

In postoperative care, AI can play a pivotal role by analyzing patient data to predict and prevent complications, optimizing recovery strategies, and personalizing rehabilitation plans. Moreover, it can streamline healthcare processes by optimizing appointment planning and enhancing overall efficiency in healthcare facilities. In essence, AI's integration into pediatrics encompasses a spectrum of technologies and applications, revolutionizing surgical practices and ultimately improving outcomes for pediatric patients and their families.

4 Ethical, legal and societal aspects (ELSA) of applying AI throughout the patient journey

Integration of AI in pediatrics and pediatric surgery brings along a great deal of interconnected ethical, legal, and social concerns, which are of particular relevance when caring for preterm and critically ill newborns and children.

As the article by Till et al. in this Research Topic explains, optimization of the developed algorithms can have a significant impact on their usability. By performing a thorough pre-processing procedure, they significantly improved the radiological detection of wrist fractures. Hence, before implementation, it is imperative to carry out iterative development and testing procedures to ensure optimal performance and clinical relevance of the model. While doing so, factors like algorithm bias and transparency must be meticulously considered to prevent disproportionate impacts on the vulnerable pediatric populations (24–26). Additionally, robust safeguards are essential to protect patient privacy. Taking a societal perspective, the deployment of AI-tools must transcend socio-economic boundaries, advocating for equal access to these transformative technologies (27). Simultaneously, a proactive approach to education is essential, empowering individuals with the knowledge necessary to use the tools optimally (28). The seamless integration of AI in pediatrics also necessitates the cultivation of trust, ensuring its acceptance as a valuable aid rather than a potential source of apprehension (29, 30). On the legal front, AI prompts a reevaluation of medical liability and responsibility (31–33). Still, by navigating the social and legal dimensions conscientiously, a seamless integration of AI into pediatrics can be achieved, fostering advancements that benefit patients while upholding ethical, social, and legal standards.

5 Conclusion

In this Research Topic, we provide a glimpse of how AI could be possibly integrated into the pediatric patient journey. AI has the potential to be incorporated at any stage of this journey, catering to the specific needs and preferences of healthcare professionals, parents and patients alike. As we navigate the opportunities and challenges in this transformative era, the continuous evolution of AI holds the key to a future where technology becomes an even more indispensable ally in the pursuit of optimal pediatric care.

Author contributions

RV: Writing – original draft, Writing – review & editing. JH: Supervision, Writing – original draft, Writing – review & editing.

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article.

This Editorial was supported by the For Wis(h)dom Foundation (Project 9, 02/02/2022, Baarn, The Netherlands).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med . (2019) 25(1):44–56. doi: 10.1038/s41591-018-0300-7


2. Bohr A, Memarzadeh K. Chapter 2 - The rise of artificial intelligence in healthcare applications. In: Bohr A, Memarzadeh K, editors. Artificial Intelligence in Healthcare . Academic Press (2020). p. 25–60. doi: 10.1016/B978-0-12-818438-7.00002-2


3. Matsushita FY, Krebs VLJ, Carvalho WB. Artificial intelligence and machine learning in pediatrics and neonatology healthcare. Rev Assoc Med Bras (1992) . (2022) 68(6):745–50. doi: 10.1590/1806-9282.20220177

4. Malhotra A, Molloy EJ, Bearer CF, Mulkey SB. Emerging role of artificial intelligence, big data analysis and precision medicine in pediatrics. Pediatr Res . (2023) 93(2):281–3. doi: 10.1038/s41390-022-02422-z

5. Shah N, Arshad A, Mazer MB, Carroll CL, Shein SL, Remy KE. The use of machine learning and artificial intelligence within pediatric critical care. Pediatr Res . (2023) 93(2):405–12. doi: 10.1038/s41390-022-02380-6

6. McCarthy J. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. (1955). Available online at: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

7. Korteling JEH, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR. Human- versus artificial intelligence. Front Artif Intell . (2021) 4:622364. doi: 10.3389/frai.2021.622364

8. Smith JD, Smith AR, Smith LP, Allen RW, Smith CJ. Predictive analytics in healthcare using machine learning algorithms: a review. IEEE Access . (2020) 8:134783–814. doi: 10.1109/ACCESS.2020.3012618

9. Shen J, Zhang CJP, Jiang B, Chen J, Song J, Liu Z, et al. Artificial intelligence versus clinicians in disease diagnosis: systematic review. JMIR Med Inform . (2019) 7(3):e10010. doi: 10.2196/10010

10. Ng CKC. Diagnostic performance of artificial intelligence-based computer-aided detection and diagnosis in pediatric radiology: a systematic review. Children (Basel) . (2023) 10(3):525. doi: 10.3390/children10030525

11. Ashton JJ, Young A, Johnson MJ, Beattie RM. Using machine learning to impact on long-term clinical care: principles, challenges, and practicalities. Pediatr Res . (2023) 93(2):324–33. doi: 10.1038/s41390-022-02194-6

12. Mithany RH, Aslam S, Abdallah S, Abdelmaseeh M, Gerges F, Mohamed MS, et al. Advancements and challenges in the application of artificial intelligence in surgical arena: a literature review. Cureus . (2023) 15(10):e47924. doi: 10.7759/cureus.47924

13. Guedalia J, Lipschuetz M, Novoselsky-Persky M, Cohen SM, Rottenstreich A, Levin G, et al. Real-time data analysis using a machine learning model significantly improves prediction of successful vaginal deliveries. Am J Obstet Gynecol . (2020) 223(3):437.e1–437.e15. doi: 10.1016/j.ajog.2020.05.025

14. Guedalia J, Sompolinsky Y, Novoselsky Persky M, Cohen SM, Kabiri D, Yagel S, et al. Prediction of severe adverse neonatal outcomes at the second stage of labour using machine learning: a retrospective cohort study. BJOG . (2021) 128(11):1824–32. doi: 10.1111/1471-0528.16700

15. Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: state-of-the-art, comparisons, improvement and perspectives. Neurocomputing . (2021) 444:92–110. doi: 10.1016/j.neucom.2020.04.157

16. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data . (2019) 6(1):1–18. doi: 10.1186/s40537-019-0276-2

17. Yan K, Li T, Marques JAL, Gao J, Fong SJ. A review on multimodal machine learning in medical diagnostics. Math Biosci Eng . (2023) 20(5):8708–26. doi: 10.3934/mbe.2023382

18. Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V. Augmented reality in open surgery. Updates Surg . (2018) 70(3):389–400. doi: 10.1007/s13304-018-0567-8

19. Dennler C, Bauer DE, Scheibler AG, Spirig J, Götschi T, Fürnstahl P, et al. Augmented reality in the operating room: a clinical feasibility study. BMC Musculoskelet Disord . (2021) 22(1):451. doi: 10.1186/s12891-021-04339-w

20. Mirnezami R, Ahmed A. Surgery 3.0, artificial intelligence and the next-generation surgeon. Br J Surg . (2018) 105(5):463–5. doi: 10.1002/bjs.10860

21. Peters BS, Armijo PR, Krause C, Choudhury SA, Oleynikov D. Review of emerging surgical robotic technology. Surg Endosc . (2018) 32(4):1636–55. doi: 10.1007/s00464-018-6079-2

22. Bhandari M, Zeffiro T, Reddiboina M. Artificial intelligence and robotic surgery: current perspective and future directions. Curr Opin Urol . (2020) 30(1):48–54. doi: 10.1097/MOU.0000000000000692

23. Bieck R, Wildfeuer V, Kunz V, Sorge M, Pirlich M, Rockstroh M, et al. Generation of surgical reports using keyword-augmented next sequence prediction. Curr Dir Biomed Eng . (2021) 7:387–90. doi: 10.1515/cdbme-2021-2098

24. Abràmoff MD, Tarver ME, Loyo-Berrios N, Trujillo S, Char D, Obermeyer Z, et al. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med . (2023) 6(1):170. doi: 10.1038/s41746-023-00913-9

25. McCradden MD, Joshi S, Mazwi M, Anderson JA. Ethical limitations of algorithmic fairness solutions in health care machine learning. Lancet Digit Health . (2020) 2(5):e221–3. doi: 10.1016/S2589-7500(20)30065-0

26. Boch S, Sezgin E, Lin Linwood S. Ethical artificial intelligence in paediatrics. The Lancet Child & Adolescent Health . (2022) 6(12):833–5. doi: 10.1016/S2352-4642(22)00243-7

27. McCoy LG, Banja JD, Ghassemi M, Celi LA. Ensuring machine learning for healthcare works for all. BMJ Health Care Inform . (2020) 27(3):e100237. doi: 10.1136/bmjhci-2020-100237

28. Tucci V, Saary J, Doyle TE. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. J Med Artif Intell . (2022) 5:4. doi: 10.21037/jmai-21-25

29. Zhang J, Zhang ZM. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak . (2023) 23(1):7. doi: 10.1186/s12911-023-02103-9

30. Rojas JC, Teran M, Umscheid CA. Clinician trust in artificial intelligence: what is known and how trust can be facilitated. Crit Care Clin . (2023) 39(4):769–82. doi: 10.1016/j.ccc.2023.02.004

31. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg . (2022) 9:862322. doi: 10.3389/fsurg.2022.862322

32. O'Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robot . (2019) 15(1):e1968. doi: 10.1002/rcs.1968

33. Cestonaro C, Delicati A, Marcante B, Caenazzo L, Tozzo P. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review. Front Med (Lausanne) . (2023) 10:1305756. doi: 10.3389/fmed.2023.1305756

Keywords: artificial intelligence, machine learning, deep learning, pediatric surgery, neonatal and pediatric care, ELSA, surgical innovation

Citation: Verhoeven R and Hulscher JBF (2024) Editorial: Artificial intelligence and machine learning in pediatric surgery. Front. Pediatr. 12:1404600. doi: 10.3389/fped.2024.1404600

Received: 21 March 2024; Accepted: 1 April 2024; Published: 9 April 2024.

Edited and Reviewed by: Simone Frediani , Bambino Gesù Children’s Hospital (IRCCS), Italy

© 2024 Verhoeven and Hulscher. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rosa Verhoeven [email protected]

This article is part of the Research Topic

Artificial Intelligence and Machine Learning in Pediatric Surgery



Cornell University

Class Roster


Last Updated

  • Schedule of Classes - April 12, 2024 7:34PM EDT
  • Course Catalog - April 12, 2024 7:06PM EDT

CS 5700 Foundations of Artificial Intelligence

Course Description

Course information provided by the Courses of Study 2023-2024 . Courses of Study 2024-2025 is scheduled to publish mid-June.

Challenging introduction to the major subareas and current research directions in artificial intelligence. Topics include: knowledge representation, heuristic search, problem solving, natural-language processing, game-playing, logic and deduction, planning, and machine learning.

When Offered Fall, Spring.

View Enrollment Information

  Regular Academic Session.   Combined with: CS 3700

Credits and Grading Basis

3 Credits Opt NoAud (Letter or S/U grades (no audit))

Class Number & Section Details

 7362 CS 5700   LEC 001

Meeting Pattern

  • MW 2:55pm - 4:10pm To Be Assigned
  • Aug 26 - Dec 9, 2024

Instructors

To be determined. There are currently no textbooks/materials listed, or no textbooks/materials required, for this section. Additional information may be found on the syllabus provided by your professor.

For the most current information about textbooks, including the timing and options for purchase, see the Cornell Store .

Additional Information

Instruction Mode: In Person More information about CS courses can be found here: https://tdx.cornell.edu/TDClient/193/Portal/Home/ .


Available Syllabi

About the Class Roster

The schedule of classes is maintained by the Office of the University Registrar . Current and future academic terms are updated daily . Additional detail on Cornell University's diverse academic programs and resources can be found in the Courses of Study . Visit The Cornell Store for textbook information .


Cornell University ©2024


Humane AI Pin: Much-hyped artificial intelligence device is not about to replace your smartphone, reviews say

Reviewers describe the $700 device as ‘a promising mess you don’t need yet’ and ‘the solution to none of technology’s problems’.


The Humane AI Pin clips to a user’s clothes and serves as a standalone AI-powered device


A new AI device that claims to be able to replace smartphones has been widely panned in early reviews.

The $700 Humane AI Pin, which launched in the US on Thursday, serves as a standalone artificial intelligence assistant that clips to a user’s clothes. It offers similar functionality to a smartphone, featuring a camera, speaker, microphone and touchpad; however, it eschews a conventional screen for a projector that can turn a user’s hand into a display.

Its creators claim it is the “next leap in device design”, capable of spearheading a transition to a post-smartphone future that will allow people to reconnect with the world around them.

“It interacts with the world in the way that you interact with the world – hearing what you hear, seeing what you see,” Humane co-founder Imran Chaudhri said during a demonstration of the gadget last year, saying this allows it to “fade into the background of your life”.

Despite Humane’s claim that being screenless makes it “seamless”, the first judgements from reviewers suggest that it is not about to make smartphones obsolete.

Critics have called it slow, lacking features, and susceptible to overheating and shutting down. Reviewers also criticised the $24-per-month subscription fee that customers need to pay on top of the initial $699.

The Washington Post described it as “a promising mess you don’t need yet”, while The Verge concluded that “the post-smartphone future isn’t here yet”.

The Verge editor-at-large David Pierce wrote: “There are too many basic things it can’t do, too many things it doesn’t do well enough, and too many things it does well but only sometimes that I’m hard-pressed to name a single thing it’s genuinely good at. None of this – not the hardware, not the software, not even GPT4 – is ready yet.”

Julian Chokkattu from Wired said the AI Pin offered nothing that made him want to use it over his smartphone.

“Whenever I went out with it, I found myself barely using it,” he wrote. “I’d ask it maybe three to four things, partly just to try a feature out. I’d then get disappointed with the results.

“I know the co-founders created it as a way to stay rooted in the real world and to avoid having a screen in front of your face all the time, but to achieve that goal, this thing needs to be 100 percent reliable. All the time.”

Humane claims the device will be improved through software updates that will fix early glitches, while future hardware improvements will likely improve the quality of the camera and add functionality for the screen projection.

“We have an ambitious roadmap with software improvements, new features [and] additional partnerships,” Bethany Bongiorno, CEO and co-founder of Humane, said on Thursday. “All of this will enable your AI Pin to become smarter and more powerful over time.”

It seems until then, as Engadget put it, “The AI Pin is the solution to none of technology’s problems”.


IMAGES

  1. AI Problem Solving

  2. Problem Solving Agents in Artificial Intelligence

  3. Problem Solving Techniques in Artificial Intelligence (AI)

  4. Problem Solving Methods in (AI) # Artificial Intelligence Lecture 11

  5. Problem Formulation-Artificial Intelligence-Unit-1-Problem Solving

  6. What Is Artificial Intelligence AI? And How Does It Work

VIDEO

  1. Sanhedrin, Problem Solving, Artificial Intelligence: Rabbi Yosef Edery & Professor Avraham Ehrlich:

  2. Lecture 3: Problem Solving to Artificial Intelligence (Af-Soomaali)

  3. 2024 #Career Tip 9: #Resume action verbs to describe problem—solving. #shorts #jobsearch

  4. Introduction to Artificial Intelligence

  5. ThoughtForms Trailer 1

  6. Data Science Demystified: Unlocking Hidden Patterns and Insights!

COMMENTS

  1. Problem Solving in Artificial Intelligence

    The problem-solving agent performs precisely by defining problems and several solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques such as a tree, B-tree, heuristic algorithms to solve a problem. We can also say that a problem-solving agent is a result-driven agent and always ...

  2. PDF Problem Solving and Search

    6.825 Techniques in Artificial Intelligence: Problem Solving and Search. Problem solving assumes the agent knows the world dynamics, the world state is finite and small enough to enumerate, the world is deterministic, and the utility for a sequence of states is a sum, over the path, of the utilities of the individual states.

  3. AI and the Art of Problem-Solving: From Intuition to Algorithms

    In the complex world of Artificial Intelligence (AI), problem-solving is a key aspect. AI's ability to solve difficult problems, sometimes even better than humans, is not just a technological accomplishment but also gives us a glimpse into the future of computing and cognitive science. This blog post explores the interesting field of problem ...

  4. Problem Solving in Artificial Intelligence

    The steps involved in solving a problem (by an agent based on Artificial Intelligence) are: 1. Define a problem. Whenever a problem arises, the agent must first define a problem to an extent so that a particular state space can be represented through it. Analyzing and defining the problem is a very important step because if the problem is ...

  5. What is Artificial Intelligence (AI)?

    Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. On its own or combined with other technologies (e.g., sensors, geolocation, robotics) AI can perform tasks that would otherwise require human intelligence or intervention.

  6. PDF Cs 380: Artificial Intelligence Problem Solving

    Problem Formulation • Initial state: S 0 • Initial configuration of the problem (e.g. starting position in a maze) • Actions: A • The different ways in which the agent can change the state (e.g. moving to an adjacent position in the maze) • Goal condition: G • A function that determines whether a state reached by a given sequence of actions constitutes a solution to the problem or not.

  7. An Introduction to Problem-Solving using Search Algorithms for Beginners

    Problem Solving Techniques. In artificial intelligence, problems can be solved by using searching algorithms, evolutionary computations, knowledge representations, etc. In this article, I am going to discuss the various searching techniques that are used to solve a problem. In general, searching is referred to as finding information one needs.

  8. PDF AI Handbook

    A. Overview In Artificial Intelligence the terms problem solving and search refer to a large body of core ideas that deal with deduction, inference, planning, commonsense reasoning, theorem proving, and related processes. Applications ofthese general ideas are found inprograms for natural language understanding, information retrieval, automatic programming,robotics, scene analysis, game ...

  9. Artificial Intelligence: Principles and Techniques

    You will gain the confidence and skills to analyze and solve new AI problems you encounter in your career. Get a solid understanding of foundational artificial intelligence principles and techniques, such as machine learning, state-based models, variable-based models, and logic. Implement search algorithms to find the shortest paths, plan robot ...

  10. PDF Principles of Problem Solving in AI Systems

    1 3. Principles of Creative Problem Solving in AI Systems. 557. empirical computational exploration contribute to creating the imagination of the eficacy of AI in the area of creative problem solving. However, the critical issue of the possibility of developing self-adaptive learning by the creative systems has not been further discussed yet.

  11. What Is Artificial Intelligence? Definition, Uses, and Types

    Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems. ... Although the term is commonly used to describe a range of different technologies in use today, many disagree on whether these actually constitute ...

  12. Problem Solving Techniques in AI

    Artificial intelligence (AI) problem-solving often involves investigating potential solutions to problems through reasoning techniques, making use of polynomial and differential equations, and carrying them out and use modelling frameworks. A same issue has a number of solutions, that are all accomplished using an unique algorithm.

  13. Artificial intelligence

    This is one of the hardest problems confronting AI. Problem solving. Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose.

  14. AI accelerates problem-solving in complex scenarios

    Researchers from MIT and ETZ Zurich have developed a new, data-driven machine-learning technique that speeds up software programs used to solve complex optimization problems that can have millions of potential solutions. Their approach could be applied to many complex logistical challenges, such as package routing, vaccine distribution, and power grid management.

  15. How to Define an AI Problem

    The first step in solving an AI/ML problem is to be able to describe and understand the problem in detail. Overview. Here is an overview of my tips for describing an AI/ML problem [1]: Give some description of your background and experience. Describe the problem, including the category of ML problem.

  16. How to Define an AI Problem

    Here is an overview of my tips for describing an AI/ML problem [1]: Give some description of your background and experience. Describe the problem, including the category of ML problem. Describe the dataset in detail and be willing to share your dataset (s). Describe any data preparation and feature engineering steps that you have done.

  17. Artificial intelligence (AI)

    Artificial intelligence, the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. ... Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language. Learning. There are a number of different forms of ...

  18. The Intersection of Math and AI: A New Era in Problem-Solving

    Machine Learning: A New Era in Mathematical Problem Solving. Machine learning is a subfield of AI, or artificial intelligence, in which a computer program is trained on large datasets and learns to find new patterns and make predictions. The conference, the first put on by the new Richard N. Merkin Center for Pure and Applied Mathematics, will ...

  19. What is AI (artificial intelligence)?

    What is artificial general intelligence? The term "artificial general intelligence" (AGI) was coined to describe AI systems that possess capabilities comparable to those of a human. In theory, AGI could someday replicate human-like cognitive abilities including reasoning, problem-solving, perception, learning, and language comprehension.

  20. Three Challenges for AI-Assisted Decision-Making

    Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges.

  21. How Leaders Are Using AI As A Problem-Solving Tool

    Consequently, AI can be a valuable problem-solving tool for leaders across the private and public sectors, primarily through three methods. 1) Automation. One of AI's most beneficial ways to ...

  22. 23

    Summary. In this chapter we discuss the link between intelligence and problem-solving. To preview, we argue that the ability to solve problems is not just an aspect or feature of intelligence - it is the essence of intelligence. We briefly review evidence from psychometric research concerning the nature of individual differences in ...

  23. Exploring Problem Reduction in AI Techniques and Applications

    Problem reduction, a technique widely used in artificial intelligence, has found applications in various fields. This powerful approach simplifies complex problems by breaking them down into smaller, more manageable subproblems. By reducing the complexity of a problem, it becomes easier to analyze and solve. 1.

  24. How AI mathematicians might finally deliver human-level reasoning

    Artificial intelligence is taking on some of the hardest problems in pure maths, arguably demonstrating sophisticated reasoning and creativity - and a big step forward for AI ... solving complex ...

  25. OpenAI and Meta Close to Unleashing Super Smart AI

    The race to develop artificial intelligence (AI) that can mimic human reasoning and problem-solving skills is heating up, with tech giants OpenAI and Meta at the forefront. These companies are on ...

  26. Tech Companies Want to Build Artificial General Intelligence

    Artificial general intelligence, or AGI, would do much more than current AI technology. AGI would be as good as humans in many areas of human thinking. These include planning, problem-solving, and ...

  27. Frontiers

    Editorial on the Research Topic Artificial intelligence and machine learning in pediatric surgery. 1 Introduction. Cutting-edge technologies are leading to a profound transformation of healthcare (1, 2).Among the most promising advancements is the integration of artificial intelligence (AI) into the delicate and critical domain of pediatrics (3-5).We find ourselves at the threshold of a new ...

  28. American Banker Interactive Workshop: Solving problems with AI

    Payments executives and experts from American Banker and Arizent Research host an interactive session on how artificial intelligence is solving problems in the payments industry. ... Solving problems with AI . April 12, 2024 10:54 AM . 25:58. Facebook; Twitter; LinkedIn; Email; Event Archives.

  29. Class Roster

    Fall 2024 - CS 5700 - Challenging introduction to the major subareas and current research directions in artificial intelligence. Topics include: knowledge representation, heuristic search, problem solving, natural-language processing, game-playing, logic and deduction, planning, and machine learning.

  30. Humane AI Pin: Much-hyped artificial intelligence device is not about

    Reviewers describe the $700 device as 'a promising mess you don't need yet' and 'the solution to none of technology's problems' ... Much-hyped artificial intelligence device is not ...