Problem Solving in Artificial Intelligence

A reflex agent in AI maps states directly to actions. When such an agent cannot operate in an environment because the state-to-action mapping is too large to store or to learn easily, the problem is handed to a problem-solving agent, which breaks the large problem into smaller subproblems and solves them one by one. The integrated sequence of actions then produces the desired outcome.

Depending on the problem and its working domain, different types of problem-solving agents are defined and used at an atomic level, with no internal state visible to the problem-solving algorithm. A problem-solving agent works by defining the problem precisely and enumerating candidate solutions. Problem solving is therefore the part of artificial intelligence that encompasses techniques such as trees, B-trees, and heuristic algorithms for solving a problem.

We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.

There are basically three types of problems in artificial intelligence:

1. Ignorable: In which solution steps can be ignored.

2. Recoverable: In which solution steps can be undone.

3. Irrecoverable: In which solution steps cannot be undone.

Steps of problem solving in AI: AI problems are directly associated with human activities, so a finite number of well-defined steps is needed to solve a problem and make the work easier for humans.

The following steps are required to solve a problem:

  • Problem definition: Detailed specification of the inputs and of what counts as an acceptable solution.
  • Problem analysis: Analyse the problem thoroughly.
  • Knowledge representation: Collect detailed information about the problem and define all applicable techniques.
  • Problem solving: Select the best technique.

Components used to formulate the associated problem (a minimal code sketch follows the list):

  • Initial state: The state from which the agent starts working toward the specified goal; it defines the starting point of the problem.
  • Actions: The set of actions available to the agent in a given state; this stage describes all possible moves from that state.
  • Transition model: Describes the state that results from performing an action in a given state, and passes that state on to the next stage.
  • Goal test: Determines whether the current state satisfies the specified goal; once the goal is reached, the agent stops acting and moves on to determining the cost of the solution.
  • Path cost: Assigns a numeric cost to reaching the goal; it may account for hardware, software, and human-effort costs.
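
A minimal Python sketch of these components, using a toy route-finding problem (the RouteProblem class, the example graph, and the uniform-cost search routine are illustrative assumptions, not part of the original text):

    import heapq

    # Toy route-finding problem illustrating the five components listed above.
    class RouteProblem:
        def __init__(self, graph, initial, goal):
            self.graph = graph      # {state: {neighbor: step cost}}
            self.initial = initial  # initial state
            self.goal = goal        # goal state

        def actions(self, state):
            # Actions available in a state: move to any neighboring node.
            return list(self.graph[state])

        def transition(self, state, action):
            # Transition model: the state that results from an action.
            return action           # here the action *is* the destination

        def goal_test(self, state):
            # Goal test: has the specified goal been reached?
            return state == self.goal

        def path_cost(self, cost_so_far, state, action):
            # Path cost: accumulated numeric cost of the steps taken.
            return cost_so_far + self.graph[state][action]

    def uniform_cost_search(problem):
        # Expand the cheapest frontier entry first; return (cost, path).
        frontier = [(0, problem.initial, [problem.initial])]
        explored = set()
        while frontier:
            cost, state, path = heapq.heappop(frontier)
            if problem.goal_test(state):
                return cost, path
            if state in explored:
                continue
            explored.add(state)
            for action in problem.actions(state):
                nxt = problem.transition(state, action)
                new_cost = problem.path_cost(cost, state, action)
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
        return None

    graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
    print(uniform_cost_search(RouteProblem(graph, "A", "D")))  # (4, ['A', 'B', 'C', 'D'])

Uniform-cost search stands in here for the problem-solving algorithm: it expands the cheapest path first and stops as soon as the goal test succeeds.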


Introduction to Problem-Solving in AI


Welcome to this comprehensive introduction to problem-solving in AI. When one mentions Artificial Intelligence, it often conjures images of futuristic robots or advanced systems that mimic human-like characteristics. But the true essence of AI isn’t merely imitating human cognition; it’s about solving problems—problems that range from the mundane to the complex, from straightforward calculations to intricate data analysis.

As we steer further into this age of information and technology, understanding the problem-solving capabilities of AI becomes not just relevant but crucial for tech aficionados and industry experts alike.


What Does Problem-Solving Mean in AI?

In the most basic terms, problem-solving consists of finding feasible solutions to complicated issues. For human beings, this process is deeply rooted in critical thought, accumulated experience, and an occasional dash of intuition. The introduction to problem-solving in AI reveals that it involves a range of algorithms and methodologies designed to achieve specific objectives, predict outcomes, or automate particular tasks. Often, these operations are executed in environments where traditional human-driven methods are too slow, inefficient, or costly.

Goals and Objectives

Problem-solving in AI aims to achieve specific goals or satisfy certain constraints, using available resources and within a finite amount of time. These goals can be as simple as sorting a list of numbers or as complicated as diagnosing a medical condition. The algorithms used often depend on the problem at hand, with specific algorithms tailored for specific problems.

The Role of Data

Data is the lifeblood of AI problem-solving. Be it training data for a machine learning model or real-time data feeding into a neural network, the quality and quantity of data often determine the efficacy of the solution. AI algorithms sift through massive datasets, identify patterns, and make decisions, all in a fraction of the time it would take a human to perform the same tasks.

Types of Problems and AI Approaches

Problem-solving in AI can be categorized into several types, including but not limited to:

  • Optimization Problems: Finding the best solution from a set of possible solutions.
  • Classification Problems: Categorizing data into predefined classes.
  • Regression Problems: Predicting numerical values based on input data.
  • Planning Problems: Creating a sequence of actions to achieve a specific goal.
  • Natural Language Processing: Understanding and generating human language to perform tasks like translation, summarization, or sentiment analysis (see Large Language Models).
  • Reinforcement Learning Problems: Learning optimal sequences of actions in interactive environments to achieve specific objectives.
  • Scheduling Problems: Allocating resources efficiently to complete tasks within a set timeframe.

Each type of problem typically requires a specialized approach or algorithm. For instance, optimization problems might use algorithms like the Genetic Algorithm or Particle Swarm Optimization. Planning problems might utilize heuristic search methods, whereas classification tasks often employ machine learning models like Support Vector Machines or Decision Trees.
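
As a concrete, if toy, illustration of the optimization case, the sketch below implements a very small genetic algorithm in Python; the fitness function and all parameters are illustrative choices rather than anything prescribed above:

    import random

    # Toy genetic algorithm: maximize f(x) = -(x - 3)**2 + 10 over real x.
    # Individuals are single floats; all parameters are illustrative.
    def fitness(x):
        return -(x - 3) ** 2 + 10

    def evolve(generations=100, pop_size=30, mutation_scale=0.5):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half of the population.
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            # Crossover (average two parents) plus Gaussian mutation.
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                children.append((a + b) / 2 + random.gauss(0, mutation_scale))
            population = parents + children
        return max(population, key=fitness)

    best = evolve()
    print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}")  # x converges near 3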

The Cross-Disciplinary Nature of AI Problem-Solving

The beauty of AI’s problem-solving capability lies in its adaptability and versatility. Techniques initially developed for one purpose can often be adapted for use in entirely different domains. Machine learning algorithms used in recommendation systems for e-commerce sites, for instance, can be modified to predict disease outbreaks or financial market shifts. This cross-disciplinary applicability makes AI an indispensable tool in today’s rapidly evolving technological landscape.

The Power of Heuristics

While traditional algorithms often provide exact solutions, many real-world problems are too complex for this approach. In these cases, heuristic methods, which offer “good enough” solutions, become invaluable. These methods make AI adaptable and agile, capable of responding to unique and evolving problems without requiring entirely new algorithms.
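
To make the idea concrete, here is a small Python sketch of greedy best-first search on a grid, where the Manhattan distance to the goal serves as the “good enough” heuristic; the grid, start, and goal are illustrative:

    import heapq

    # Greedy best-first search on a small grid: the Manhattan-distance heuristic
    # guides the search toward the goal without guaranteeing an optimal path.
    GRID = ["....#",
            ".##.#",
            "....#",
            ".#...",
            "....."]          # '.' = free cell, '#' = wall
    START, GOAL = (0, 0), (4, 4)

    def heuristic(cell):
        return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == ".":
                yield nr, nc

    def greedy_best_first(start, goal):
        frontier = [(heuristic(start), start, [start])]
        seen = {start}
        while frontier:
            _, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            for nxt in neighbors(cell):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
        return None

    print(greedy_best_first(START, GOAL))  # a path of cells from (0, 0) to (4, 4)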

This introduction to problem-solving in AI serves as the launching pad for a deeper exploration of how this technology is radically reshaping our world. From methodologies to multidisciplinary applications, the subsequent chapters will offer an even more nuanced understanding of AI’s problem-solving prowess. So, fasten your seat belts as we delve into the remarkable and constantly evolving world of AI’s problem-solving capabilities.


Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.

On its own or combined with other technologies (e.g., sensors, geolocation, robotics), AI can perform tasks that would otherwise require human intelligence or intervention. Digital assistants, GPS guidance, autonomous vehicles, and generative AI tools (like OpenAI's ChatGPT) are just a few examples of AI in the daily news and our daily lives.

As a field of computer science, artificial intelligence encompasses (and is often mentioned together with) machine learning and deep learning. These disciplines involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can ‘learn’ from available data and make increasingly accurate classifications or predictions over time.

Artificial intelligence has gone through many cycles of hype, but even to skeptics, the release of ChatGPT seems to mark a turning point. The last time generative AI loomed this large, the breakthroughs were in computer vision, but now the leap forward is in natural language processing (NLP). Today, generative AI can learn and synthesize not just human language but other data types including images, video, software code, and even molecular structures.

Applications for AI are growing every day. But as the hype around the use of AI tools in business takes off, conversations around AI ethics and responsible AI become critically important. For more on where IBM stands on these issues, please read Building trust in AI.


Weak AI—also known as narrow AI or artificial narrow intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more apt descriptor for this type of AI as it is anything but weak: it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM watsonx™, and self-driving vehicles.

Strong AI is made up of artificial general intelligence (AGI) and artificial super intelligence (ASI). AGI, or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would be self-aware with a consciousness that would have the ability to solve problems, learn, and plan for the future. ASI—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman and rogue computer assistant in  2001: A Space Odyssey.

Machine learning and deep learning are sub-disciplines of AI, and deep learning is a sub-discipline of machine learning.

Both machine learning and deep learning algorithms use neural networks to ‘learn’ from huge amounts of data. These neural networks are programmatic structures modeled after the decision-making processes of the human brain. They consist of layers of interconnected nodes that extract features from the data and make predictions about what the data represents.

Machine learning and deep learning differ in the types of neural networks they use and the amount of human intervention involved. Classic machine learning algorithms use neural networks with an input layer, one or two ‘hidden’ layers, and an output layer. Typically, these algorithms are limited to supervised learning: the data needs to be structured or labeled by human experts to enable the algorithm to extract features from the data.

Deep learning algorithms use deep neural networks—networks composed of an input layer, three or more (but typically hundreds of) hidden layers, and an output layer. These multiple layers enable unsupervised learning: they automate the extraction of features from large, unlabeled, and unstructured data sets. Because it doesn’t require human intervention, deep learning essentially enables machine learning at scale.
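
A minimal NumPy sketch can make the structural difference concrete: the same forward pass run through a network with one hidden layer and through a deeper stack. Layer widths and the random input are illustrative assumptions, not IBM's implementation:

    import numpy as np

    # Forward pass through fully connected layers with ReLU activations.
    # Layer widths and the random input are illustrative only.
    rng = np.random.default_rng(0)

    def forward(x, layer_sizes):
        # Run x through randomly initialized dense layers of the given sizes.
        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
            W = rng.normal(scale=0.1, size=(n_in, n_out))
            b = np.zeros(n_out)
            x = np.maximum(x @ W + b, 0.0)    # ReLU
        return x

    x = rng.normal(size=(1, 16))              # one example with 16 features
    shallow = forward(x, [16, 8, 2])          # input -> one hidden layer -> output
    deep = forward(x, [16, 64, 64, 64, 2])    # input -> three hidden layers -> output
    print(shallow.shape, deep.shape)          # (1, 2) (1, 2)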

Generative AI refers to deep-learning models that can take raw data—say, all of Wikipedia or the collected works of Rembrandt—and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original data.

Generative models have been used for years in statistics to analyze numerical data. The rise of deep learning, however, made it possible to extend them to images, speech, and other complex data types. Among the first class of AI models to achieve this cross-over feat were variational autoencoders, or VAEs, introduced in 2013. VAEs were the first deep-learning models to be widely used for generating realistic images and speech.

“VAEs opened the floodgates to deep generative modeling by making models easier to scale,” said Akash Srivastava , an expert on generative AI at the MIT-IBM Watson AI Lab. “Much of what we think of today as generative AI started here.”

Early examples of models, including GPT-3, BERT, or DALL-E 2, have shown what’s possible. In the future, models will be trained on a broad set of unlabeled data that can be used for different tasks, with minimal fine-tuning. Systems that execute specific tasks in a single domain are giving way to broad AI systems that learn more generally and work across domains and problems. Foundation models, trained on large, unlabeled datasets and fine-tuned for an array of applications, are driving this shift.

As to the future of AI, when it comes to generative AI, it is predicted that foundation models will dramatically accelerate AI adoption in enterprise. Reducing labeling requirements will make it much easier for businesses to dive in, and the highly accurate, efficient AI-driven automation they enable will mean that far more companies will be able to deploy AI in a wider range of mission-critical situations. For IBM, the hope is that the computing power of foundation models can eventually be brought to every enterprise in a frictionless hybrid-cloud environment.


There are numerous, real-world applications for AI systems today. Below are some of the most common use cases:

Also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, speech recognition uses NLP to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search—Siri, for example—or provide more accessibility around texting in English or many widely-used languages.  See how Don Johnston used IBM Watson Text to Speech to improve accessibility in the classroom with our case study .

Online virtual agents and chatbots are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) about topics like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and voice assistants. See how Autodesk Inc. used IBM watsonx Assistant to speed up customer response times by 99% with our case study.

This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.  See how ProMare used IBM Maximo to set a new course for ocean research with our case study .

Adaptive robotics act on Internet of Things (IoT) device information, and structured and unstructured data to make autonomous decisions. NLP tools can understand human speech and react to what they are being told. Predictive analytics are applied to demand responsiveness, inventory and network optimization, preventative maintenance and digital manufacturing. Search and pattern recognition algorithms—which are no longer just predictive, but hierarchical—analyze real-time data, helping supply chains to react to machine-generated, augmented intelligence, while providing instant visibility and transparency. See how Hendrickson used IBM Sterling to fuel real-time transactions with our case study .

The weather models broadcasters rely on to make accurate forecasts consist of complex algorithms run on supercomputers. Machine-learning techniques enhance these models by making them more applicable and precise. See how Emnotion used IBM Cloud to empower weather-sensitive enterprises to make more proactive, data-driven decisions with our case study .

AI models can comb through large amounts of data and discover atypical data points within a dataset. These anomalies can raise awareness around faulty equipment, human error, or breaches in security.  See how Netox used IBM QRadar to protect digital businesses from cyberthreats with our case study .
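
As one hedged illustration of the idea, the sketch below uses scikit-learn's IsolationForest on synthetic two-dimensional data; the library choice, data, and contamination rate are assumptions rather than anything specified in the text:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Flag atypical points in a synthetic 2-D dataset.
    rng = np.random.default_rng(42)
    normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # typical readings
    outliers = rng.uniform(low=6.0, high=9.0, size=(10, 2))  # injected anomalies
    X = np.vstack([normal, outliers])

    model = IsolationForest(contamination=0.02, random_state=0).fit(X)
    labels = model.predict(X)                  # +1 = normal, -1 = anomaly
    print("points flagged as anomalies:", int((labels == -1).sum()))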

The idea of "a machine that thinks" dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

  • 1950:  Alan Turing publishes Computing Machinery and Intelligence  (link resides outside ibm.com) .  In this paper, Turing—famous for breaking the German ENIGMA code during WWII and often referred to as the "father of computer science"— asks the following question: "Can machines think?"  From there, he offers a test, now famously known as the "Turing Test," where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, as well as an ongoing concept within philosophy as it utilizes ideas around linguistics.
  • 1956:  John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.
  • 1967:  Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that "learned" through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
  • 1980s:  Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
  • 1995 : Stuart Russell and Peter Norvig publish  Artificial Intelligence: A Modern Approach  (link resides outside ibm.com), which becomes one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiates computer systems on the basis of rationality and thinking vs. acting.
  • 1997:  IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess match (and rematch).
  • 2004 : John McCarthy writes a paper, What Is Artificial Intelligence?  (link resides outside ibm.com), and proposes an often-cited definition of AI.
  • 2011:  IBM Watson beats champions Ken Jennings and Brad Rutter at  Jeopardy!
  • 2015:  Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
  • 2016:  DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had purchased DeepMind in 2014 for a reported USD 400 million.
  • 2023:  The rise of large language models, or LLMs, such as ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pre-trained on vast amounts of raw, unlabeled data.


What is AI?


Humans and machines: a match made in productivity  heaven. Our species wouldn’t have gotten very far without our mechanized workhorses. From the wheel that revolutionized agriculture to the screw that held together increasingly complex construction projects to the robot-enabled assembly lines of today, machines have made life as we know it possible. And yet, despite their seemingly endless utility, humans have long feared machines—more specifically, the possibility that machines might someday acquire human intelligence and strike out on their own.


But we tend to view the possibility of sentient machines with fascination as well as fear. This curiosity has helped turn science fiction into actual science. Twentieth-century theoreticians, like computer scientist and mathematician Alan Turing, envisioned a future where machines could perform functions faster than humans. The work of Turing and others soon made this a reality. Personal calculators became widely available in the 1970s, and by 2016, the US census showed that 89 percent of American households had a computer. Machines—smart machines at that—are now just an ordinary part of our lives and culture.

Those smart machines are getting faster and more complex. Some computers have now crossed the exascale  threshold, meaning that they can perform as many calculations in a single second as an individual could in 31,688,765,000 years. But it’s not just about computation. Computers and other devices are now acquiring skills  and perception that have previously been our sole purview.

AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and even exercising creativity. You’ve probably interacted with AI even if you didn’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are some customer service chatbots that pop up to help you navigate websites.


Applied AI —simply, artificial intelligence applied to real-world problems—has serious implications for the business world. By using artificial intelligence, companies have the potential to make business more efficient and profitable. But ultimately, the value of artificial intelligence isn’t in the systems themselves but in how companies use those systems to assist humans—and their ability to explain  to shareholders and the public what those systems do—in a way that builds and earns trust.

For more about AI, and how to apply it in business, read on.

Learn more about McKinsey’s Digital Practice .

What is machine learning?

Machine learning is a form of artificial intelligence based on algorithms that are trained on data. These algorithms can detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt in response to new data and experiences to improve their efficacy over time. The volume and complexity of data that is now being generated, too vast for humans to reasonably reckon with, has increased the potential of machine learning, as well as the need for it. In the years since its widespread deployment, which began in the 1970s, machine learning has had impact in a number of industries, including achievements in medical-imaging analysis  and high-resolution weather forecasting.


What is deep learning?

Deep learning is a type of machine learning that can process a wider range of data resources (images, for instance, in addition to text), requires even less human intervention, and can often produce more accurate results than traditional machine learning. Deep learning uses neural networks—based on the ways neurons interact in the human brain —to ingest data and process it through multiple iterations that learn increasingly complex features of the data. The neural network can then make determinations about the data, learn whether a determination is correct, and use what it has learned to make determinations about new data. For example, once it “learns” what an object looks like, it can recognize the object in a new image.

Here are three types of artificial neural networks used in machine learning:

Feed-forward neural networks

In this simple neural network, first proposed in 1958, information moves in only one direction: forward from the model’s input layer to its output layer, without ever traveling backward to be reanalyzed by the model. That means you can feed, or input, data into the model, then “train” the model to predict something about different data sets. As just one example, feed-forward neural networks are used in banking, among other industries, to detect fraudulent financial transactions.

Here’s how it works: first, you train a model to predict whether a transaction is fraudulent based on a data set you’ve used to manually label transactions as fraudulent or not. Then you can use the model to predict whether new, incoming transactions are fraudulent so you can flag them for closer study or block them outright.
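
A compact sketch of that train-then-predict workflow, using scikit-learn's MLPClassifier as the feed-forward network on synthetic, manually labeled transactions (the features, data, and model settings are illustrative assumptions):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Synthetic "transactions" with two features (amount, hour of day),
    # manually labeled as legitimate (0) or fraudulent (1).
    rng = np.random.default_rng(0)
    legit = np.column_stack([rng.normal(50, 20, 500), rng.integers(8, 22, 500)])
    fraud = np.column_stack([rng.normal(900, 300, 25), rng.integers(0, 6, 25)])
    X = np.vstack([legit, fraud])
    y = np.array([0] * 500 + [1] * 25)

    # A small feed-forward network: train on labeled data, then score new data.
    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X, y)

    new_transactions = np.array([[42.0, 14], [1200.0, 3]])
    print(model.predict(new_transactions))     # e.g. [0 1]: flag the second one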

Convolutional neural networks (CNNs)

CNNs are a type of feed-forward neural network modeled on the makeup of the animal visual cortex, the part of the brain that processes images. As such, CNNs are well suited to perceptual tasks, like being able to identify bird or plant species based on photographs. Business use cases  include diagnosing diseases from medical scans, or detecting a company logo in social media to manage a brand’s reputation or to identify potential joint marketing opportunities.

Here’s how CNNs work (a minimal code sketch follows the list):

  • First, the CNN receives an image—for example, of the letter “A”—that it processes as a collection of pixels.
  • In the hidden layers, the CNN identifies unique features—for example, the individual lines that make up “A.”
  • The CNN can now classify a different image as the letter “A” if it finds that the image has the unique features previously identified as making up the letter.
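
A minimal PyTorch sketch of that pipeline: a convolutional layer extracts local features from a 28x28 grayscale "image" of a letter, and a linear layer classifies it. The framework, layer sizes, and the random input are illustrative assumptions:

    import torch
    import torch.nn as nn

    # Minimal CNN: a convolutional feature extractor plus a linear classifier.
    class TinyCNN(nn.Module):
        def __init__(self, n_classes=26):                   # e.g. letters A-Z
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),  # detect local strokes
                nn.ReLU(),
                nn.MaxPool2d(2),                            # 28x28 -> 14x14
            )
            self.classifier = nn.Linear(8 * 14 * 14, n_classes)

        def forward(self, x):
            x = self.features(x)
            x = x.flatten(start_dim=1)
            return self.classifier(x)                       # one score per class

    image = torch.randn(1, 1, 28, 28)    # a batch holding one grayscale "image"
    logits = TinyCNN()(image)
    print(logits.argmax(dim=1))          # index of the predicted letter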

Recurrent neural networks (RNNs)

RNNs are artificial neural networks whose connections include loops, meaning the model both moves data forward and loops it backward to run again through previous layers. RNNs are helpful for predicting a sentiment or an ending of a sequence, like a large sample of text, speech, or images. They can do this because each individual input is fed into the model by itself as well as in combination with the preceding input.

Continuing with the banking example, RNNs can help detect fraudulent financial transactions just as feed-forward neural networks can, but in a more complex way. Whereas feed-forward neural networks can help predict whether one individual transaction is likely to be fraudulent, recurrent neural networks can “learn” from the financial behavior of an individual—such as a sequence of transactions like a credit card history—and measure each transaction against the person’s record as a whole. They can do this in addition to using the general learnings of the feed-forward neural-network model.
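
A minimal PyTorch sketch of the recurrent idea: an LSTM reads one customer's sequence of transactions and a linear head scores fraud risk from the final hidden state. The sizes and the random sequence are illustrative assumptions:

    import torch
    import torch.nn as nn

    # An LSTM reads a customer's transaction history (one feature vector per
    # transaction); a linear head scores fraud risk from the final hidden state.
    class SequenceScorer(nn.Module):
        def __init__(self, n_features=4, hidden_size=16):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):                      # x: (batch, seq_len, n_features)
            _, (h_n, _) = self.lstm(x)             # h_n: (1, batch, hidden_size)
            return torch.sigmoid(self.head(h_n[-1]))  # fraud probability per customer

    history = torch.randn(1, 20, 4)                # one customer, 20 transactions
    with torch.no_grad():
        print(SequenceScorer()(history))           # e.g. tensor([[0.47]])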

Read more about deep learning and about neural networks and their use cases.

Which sectors can benefit from machine learning and deep learning?

McKinsey collated more than 400 use cases of machine and deep learning across 19 industries and nine business functions. Nearly all industries can benefit  from machine and deep learning. Here are a few examples of use cases that cut across several sectors:

Predictive maintenance

Predictive maintenance is an important part of any industry or business relying on equipment. Rather than waiting until a piece of equipment breaks down, companies can use predictive maintenance to project when maintenance will be needed, thereby preventing downtime and reducing operating costs. Machine learning and deep learning have the capacity to analyze large amounts of multifaceted data, which can increase the precision of predictive maintenance. For example, AI practitioners can layer in data from new inputs, like audio and image data, which can add nuance to a neural network’s analysis.

Logistics optimization

Using AI to optimize logistics  can reduce costs through real-time forecasts and behavioral coaching. For example, AI can optimize routing of delivery traffic, improving fuel efficiency and reducing delivery times.

Customer service

AI techniques in call centers can enable a more seamless experience for customers and more efficient processing. The technology goes beyond understanding a caller’s words: deep-learning analysis of audio can assess a customer’s tone. If a caller is getting upset, the system can reroute to a human operator or manager.

What is generative AI?

Generative AI is an AI model that generates content in response to a prompt. It’s clear that generative-AI tools like ChatGPT and DALL-E (a tool for making AI-generated art) have the potential to change how a range of jobs are performed. The full scope of that impact, though, is still unknown—as are the risks. But there are some questions we can answer—like how generative-AI models are built, what kinds of problems they are best suited to solve, and how they fit into the broader category of AI and machine learning.

Read our McKinsey Explainer on  generative AI , and learn more about QuantumBlack, AI by McKinsey .

How can businesses put generative AI to use?

You’ve probably seen that generative-AI tools like ChatGPT can generate endless hours of entertainment. The opportunity is clear for businesses as well. Generative-AI tools can produce a wide variety of credible writing in seconds, then respond to a user’s critiques to make the writing more fit for purpose. This has implications for a broad range of industries, from IT and software organizations that can benefit from the instantaneous code generated by AI models to organizations in need of marketing copy. In short, any organization that needs to produce drafts of clearly written materials potentially stands to benefit. Organizations can also use generative AI to create more technical materials, such as higher-resolution versions of medical images. And with the time and resources saved, organizations can pursue new business opportunities and the chance to create more value.

But developing a proprietary generative-AI model is so resource intensive that it is out of reach for all but the biggest and best-resourced companies. To put generative AI to work, companies can either use generative-AI solutions out of the box or fine-tune them to perform a specific task. If you need to prepare slides according to a specific style, for example, you could ask the model to “learn” how headlines are normally written based on the data in the slides, then feed it slide data and ask it to write appropriate headlines.

Generative AI is not without its risks. Generative-AI models will confidently produce inaccurate, plagiarized, or biased results, without any indication that their outputs may be problematic. That’s because the models have been trained on the internet, which is hardly a universally reliable source. Leaders should be aware of these risks before turning to generative AI as a business solution. For more on the risks of generative AI, and how businesses can mitigate them, see the section below called “What are the limitations of AI models, and how can they be overcome?”

What are some specific business use cases for generative AI?

Case study: Vistra Corp. and the Martin Lake Power Plant

Vistra is a large power producer in the United States, operating plants in 12 states with a capacity to power nearly 20 million homes. Vistra has committed to achieving net-zero emissions by 2050. In support of this goal, as well as to improve overall efficiency, QuantumBlack, AI by McKinsey worked with Vistra to build and deploy an AI-powered heat rate optimizer (HRO).

“Heat rate” is a measure of the thermal efficiency of the plant; in other words, it’s the amount of fuel required to produce each unit of electricity. To reach the optimal heat rate, plant operators continuously monitor and tune hundreds of variables, such as steam temperatures, pressures, oxygen levels, and fan speeds.

Vistra and a McKinsey team, including data scientists and machine-learning engineers, built a multilayered neural-network model. The model combed through two years’ worth of data at the plant and learned which combination of factors would optimize the algorithm and attain the most efficient heat rate at any point in time. When the models were accurate to 99 percent or higher and run through a rigorous set of real-world tests, the team converted them into an AI-powered engine that generates recommendations every 30 minutes for operators to improve the plant’s heat-rate efficiency. One seasoned operations manager at the company’s plant in Odessa, Texas, said, “There are things that took me 20 years to learn about these power plants. This model learned them in an afternoon.”

Overall, the AI-powered HRO helped Vistra achieve the following metrics:

  • approximately 1.6 million tons of carbon abated annually
  • 67 power generators optimized
  • $60 million saved in about a year

Read more about the Vistra story .

Generative-AI models are in the very early days of scaling, but we’ve started to see the first batch of applications  across functions:

  • Marketing and sales. Generative AI can craft personalized marketing, social-media, and technical-sales content, including text, images, and video.
  • Operations. AI models can generate task lists for efficient execution of a specific activity.
  • IT/engineering. Generative AI can write, document, and review code.
  • Risk and legal. AI models can answer complex questions, based on vast amounts of legal documentation, and draft and review annual reports.
  • R&D. Generative AI can help accelerate drug discovery through better understanding of diseases and discovery of chemical structures.

While generative AI on its own has a great deal of potential, it’s likely to be most powerful in combination with humans, who can help it achieve faster and better work.

Learn more about McKinsey’s   Digital Practice .

How is the use of AI expanding?

AI is a big story for all kinds of businesses, but some companies are clearly moving ahead of the pack . McKinsey’s state of AI in 2022 survey showed that adoption of AI models has more than doubled since 2017—and investment has increased apace. What’s more, the specific areas in which companies see value from AI have evolved, from manufacturing and risk to these:

  • marketing and sales
  • product and service development
  • strategy and corporate finance

And one set of companies continues to pull ahead of its competitors, by making larger investments in AI, leveling up its practices to scale faster, and hiring and upskilling the best AI talent. More specifically, this group of leaders is more likely to link AI strategy to business outcomes and “ industrialize ” AI operations by designing modular data architecture that can quickly accommodate new applications.

What are the limitations of AI models, and how can they be overcome?

Since they are so new, we have yet to see the long-tail effect of AI models. This means there are some inherent risks  involved in using them—some known and some unknown.

The outputs AI models produce may often sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, sometimes it’s biased (because it’s built on the gender, racial, and myriad other biases of the internet and society more generally) and can even be manipulated to enable unethical or criminal activity. For example, ChatGPT won’t give you instructions on how to hotwire a car, but if you tell it you need to hotwire a car to save a child, the algorithm  will instantly comply . Organizations that rely on generative-AI models should reckon with reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.

These risks can be mitigated, however, in a few ways. For one, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than deploying an off-the-shelf generative-AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases. Organizations should also keep a human in the loop (that is, make sure a real human checks the output of a generative-AI model before it is published or used) and avoid using generative-AI models for critical decisions, such as those involving significant resources or human welfare.

How can organizations scale up their AI efforts from ad hoc projects to full integration?

Most organizations are dipping a toe into the AI pool—not cannonballing. Slow progress toward widespread adoption is likely due to cultural and organizational barriers. But leaders who effectively break down these barriers will be best placed to capture the opportunity of the AI era. And—crucially—companies that are not making the most of AI are being overtaken by those that are, in industries such as auto manufacturing and financial services.

To scale up AI, organizations can make three major shifts :

Move from siloed work to interdisciplinary collaboration.

AI projects shouldn’t be limited to discrete pockets of organizations. Rather, AI is most effective when it’s being used by different teams with a range of varied talents to help ensure that AI addresses broad business priorities.

Empower frontline data-based decision making .

AI has the potential to enable faster, better decisions at all levels of an organization. To put this into practice, employees must be able to trust what the algorithm suggests and feel empowered to act accordingly.

Adopt and bolster an agile  mindset.

The agile test-and-learn mindset can help employees view errors as inspiration, allaying the fear of failure and speeding up development.

Learn more about McKinsey’s Digital Practice , and check out AI-related job opportunities if you’re interested in working at McKinsey.

Articles referenced:

  • “What is agile?,” March 27, 2023
  • “What is generative AI?,” January 19, 2023
  • “Tech highlights from 2022—in eight charts,” December 22, 2022
  • “Generative AI is here: How tools like ChatGPT could change your business,” December 20, 2022, Michael Chui, Roger Roberts, and Lareina Yee
  • “The state of AI in 2022—and a half decade in review,” December 6, 2022, Michael Chui, Bryce Hall, Helen Mayhew, Alex Singla, and Alex Sukharevsky
  • “Why businesses need explainable AI—and how to deliver it,” September 29, 2022, Liz Grennan, Andreas Kremer, Alex Singla, and Peter Zipparo
  • “Why digital trust truly matters,” September 12, 2022, Jim Boehm, Liz Grennan, Alex Singla, and Kate Smaje
  • “McKinsey Technology Trends Outlook 2022,” August 24, 2022, Michael Chui, Roger Roberts, and Lareina Yee
  • “An AI power play: Fueling the next wave of innovation in the energy sector,” May 12, 2022, Barry Boswell, Sean Buckley, Ben Elliott, Matias Melero, and Micah Smith
  • “Scaling AI like a tech native: The CEO’s role,” October 13, 2021, Jacomo Corbo, David Harvey, Nicolas Hohn, Kia Javanmardian, and Nayur Khan
  • “Winning with AI is a state of mind,” April 30, 2021, Thomas Meakin, Jeremy Palmer, Valentina Sartori, and Jamie Vickers
  • “A smarter way to digitize maintenance and reliability,” April 23, 2021, Guillaume Decaix, Matthew Gentzel, Andy Luse, Patrick Neise, and Joel Thibert
  • “Breaking through data-architecture gridlock to scale AI,” January 26, 2021, Sven Blumberg, Jorge Machado, Henning Soller, and Asin Tavakoli
  • “An executive’s guide to AI,” November 17, 2020, Michael Chui, Brian McCarthy, and Vishnu Kamalnath
  • “Executive’s guide to developing AI at scale,” October 28, 2020, Nayur Khan, Brian McCarthy, and Adi Pradhan
  • “The analytics academy: Bridging the gap between human and artificial intelligence,” McKinsey Quarterly, September 25, 2019, Solly Brown, Darshit Gandhi, Louise Herring, and Ankur Puri
  • “Notes from the AI frontier: Applications and value of deep learning,” April 17, 2018, Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra


Logic-Based Artificial Intelligence, pp. 3–33

Introduction to Logic-Based Artificial Intelligence

  • Jack Minker


Part of the book series: The Springer International Series in Engineering and Computer Science (SECS, volume 597)

In this chapter I provide a brief introduction to the field of Logic-Based Artificial Intelligence (LBAI). I then discuss contributions to LBAI contained in the chapters and some of the highlights that took place at the Workshop on LBAI from which the papers are drawn. The areas of LBAI represented in the book are: commonsense reasoning; knowledge representation; nonmonotonic reasoning; abductive and inductive reasoning; logic, probability and decision making; logic for causation and actions; planning and problem solving; logic, planning and high-level robotics; logic for agents and actions; theory of beliefs; logic and language; computational logic; system implementations; and logic applications to mechanical checking and data integration.

  • Actions and agents
  • abductive reasoning
  • commonsense reasoning
  • computational logic
  • inductive reasoning
  • knowledge base system implementations
  • knowledge representation
  • logic and causation in planning
  • logic and data integration
  • logic applications to mechanical checking
  • planning and high-level robotics
  • natural language and logic
  • nonmonotonic reasoning
  • planning and problem solving
  • possibilistic logic



Nilsson, N. (1982). Principles of Artificial Intelligence . Springer-Verlag.

Nilsson, N. (1984). Shakey the robot. Technical Note 323, SRI International, Menlo Park, California.

Pednault, E. (1989). ADL: Exploring the middle ground between STRIPS and the situation calculus. In Brachman, R., Levesque, H., and Reiter, R., editors, Proc. First Int%l Conf. on Principles of Knowledge Representation and Reasoning , pages 324–332.

Peirce, C. S. (1883). A theory of probable inference. note b. the logic of relatives. In Studies in logic by members of the Johns Hopkins Univ. , pages 187–203.

Plotkin, G. (1969). A note on inductive generalisation. In Meltzer, B. and Michie, D., editors, Machine Intelligence 5 , pages 153–163. Edinburgh University Press, Edinburgh.

Plotkin, G. (1971). Automatic Methods of Inductive Inference . PhD thesis, Edinburgh University.

Przymusinski, T. C. (1988). On the declarative semantics of deductive databases and logic programming. In Minker, J., editor, Foundations of Deductive Databases and Logic Programming , chapter 5, pages 193–216. Morgan Kaufmann Pub., Washington, D.C.

Quillian, R. (1968). Semantic Memory. In Minsky, M., editor, Semantic Information Processing , pages 216–270. MIT Press, Cambridge, Massachusetts.

Rao, P., Sagonas, K., Swift, T., Warren, D., and Friere, J. (1997). XSB: A system for efficiently computing well-founded semantics. In Dix, J., Ferbach, U., and Nerode, A., editors, Logic and Nonmonotonic Reasoning — 4 t h Int’l . Conf., LPNMR’97 , pages 430-440.

Reiter, R. (1980). A Logic for Default Reasoning. Artificial Intelligence , 13(1 and 2):81–132.

Reiter, R. (1991). The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. In Lifschitz, V., editor, AI and Mathematical Theory of Computation: Papers in Honor of John McCarthy , pages 359–380. Academic Press.

Reiter, R. (1993). Proving properties of states in the situation calculus. Artificial Intelligence , 64:337–351.

Robinson, J. A. (1965). A machine oriented logic based on the resolution principle. Journal of the ACM , 12:23–41.

Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern Approach . Prentice Hall.

Sandewall, E. (1995). Features and Fluents , volume 1. Oxford University Press.

Schank, R. and Abelson, R. (1977). Scripts, Plans, Goals, and Understanding . Lawrence Erlebaum.

Schubert, L. (1990). Monotonic solution of the frame problem in the situation calculus: an efficient method for worlds with fully specified actions. In Kyburg, H., Loui, R., and Carlson, G., editors, Knowledge Representation and Defeasible Reasoning , pages 23–67. Kluwer.

Shanahan, M. P. (1997). Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia . MIT Press.

van Emden, M. and Kowalski, R. (1976). The Semantics of Predicate Logic as a Programming Language. J. ACM , 23(4):733-742.

Van Gelder, A. (1988). Negation as failure using tight derivations for general logic programs. In Minker, J., editor, Found. of Deductive Databases and Logic Programming , pages 149–176. Morgan Kaufmann.

Van Gelder, A., Ross, K., and Schlipf, J. (1988). Unfounded sets and well-founded semantics for general logic programs. In Proc. 7 th ACM Symp. on Principles of Database Systems. , pages 221–230.

Warren, D. S. (1999). The XSB programming system. Technical report, State University of New York at Stonybrook. http://www. cs.sunysb.edu/sbprolog/xsb-page.html .

Wooldridge, M. and Jennings, N. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review , 10(2): 115–152.

Zaniolo, C., Arni, N., and Ong, K-L. (1993). Negation and aggregates in recursive rules. Proceedings of DOOD93 , pages 204–221.

Artificial Intelligence In Patent Law

Gaurav Sharma

28 March 2024 7:11 AM GMT

Changing scenarios and the recent growth in technologies have shown problem-solving capabilities in almost every intractable issue with the introduction of Artificial Intelligence (AI). However, navigating its risks poses a great challenge at present. AI has the potential to radically alter almost every sphere of human life. In view thereof, this article will address the significant constraints needed to prevent applications of AI that hold several dangers for society as well as for the field of Intellectual Property Law.

Start with the recent decision of the UK Supreme Court in Thaler v. Comptroller General (2023) UKSC 49, in which the issue of granting patents to machine-made inventions came before the Court. The case concerns two British patent applications for inventions that Dr. Thaler stated, in his applications, were created by an AI machine known as DABUS. Pursuant to Section 13(2) of the Patents Act 1977, the UK Intellectual Property Office (UKIPO) refused the applications on the ground that DABUS is not a person as envisaged by Sections 7 and 13 of the 1977 Act. The UKIPO's decision was then challenged on appeal before the High Court and the Court of Appeal, and both appeals were dismissed. When the case reached the Supreme Court, the Court explicitly affirmed that a product or process invented or devised by a machine cannot be patented. But is that enough? The gradual shift in geopolitics and the emergence of a new world order demand a comprehensive model for engaging relevant stakeholders to mitigate the fundamental challenges and complexities attached to AI-generated ideas in an advanced digital-age environment. Merely ascertaining the inventor of an AI invention is not sufficient. The threats and intricacies surrounding AI at the moment are immensely worrisome. Online platforms and networks have become a crucial part of everyday life, and the use of AI in such spaces can have serious implications. It is therefore important to ensure that the changes brought about by these developments do not disrupt the very foundation on which they rest.

What is Artificial Intelligence?

Though there is no fixed definition, it simply means “intellectual ability developed artificially.” Through this, a computer or robotic system is created and programmed to run on the same logic on which the human brain works. In other words, it is a machine-software combination.

What Does It Take To Qualify As An Inventor In India Under Patent Law?

Sections 2 and 6 of the Indian Patents Act, 1970 enumerate the criteria for recognizing an inventor and the person who, as an applicant, can file for a patent in Indian jurisdiction. Currently, the Act excludes, among other things, business method inventions, computer programs “per se” and algorithms from patentability (Section 3(k)). Section 3(k) has a long legislative history. In Ferid Allani v. Union of India and Ors (2019) SCC Online Del 11867, the Delhi High Court categorically held that in today's digital world, when most inventions are based on computer programs, it would be retrograde to argue that all such inventions are not patentable. The bar on patenting is in respect of 'computer programs per se…' and not all inventions based on computer programs.

Additionally, Section 2(1)(s) of the Patents Act states that the person filing for a patent can be either a natural person or a government organization. However, the definition of 'person' is not limited to that meaning; it can also include anybody or anything construed as the first and true inventor of the invention for whom a patent application is being submitted. Nonetheless, the recent objections to AI raised by the Controller General of Patents suggest that recognizing AI as an inventor still falls short of conferring inventorship rights in India.

Tests of Patentability

Determining the requirements for a particular patent may at times appear complex. However, the basic condition, i.e., a mixed question of law and fact, remains the backbone for making any subject matter known and attributable under the Patents Act. These tests of patentability have been demystified by Hon'ble Justice Pratibha M. Singh in her book on patent law. They are as follows:

  • Is the invention new, i.e., is there any cognitive leap over and above what existed before?
  • Would a person working in the respective subject/area/field be able to arrive at the invention without too much effort?
  • Can it be put to some use?

Relevant Reports

In 2021, the Government of India constituted a Parliamentary Standing Committee under the Department of Commerce to review the Intellectual Property Rights regime; in particular, the Committee examined the challenges that exist in the country's current legislative framework. The U.S. Chamber of Commerce, in the 10th edition of its International IP Index, hailed the resulting 161st Report as a 'welcome development' and 'a first major attempt at assessing India's IP policy regime'. The observations and recommendations related to AI in Intellectual Property Rights (IPR), found in paras 3.8 and 3.12, are quoted below:

“……The Committee is of the view that the increase in application of Artificial Intelligence (AI) based tools such as Aarogyasetu, CoWin, etc. in recent times for utilizing and extending essential services implies the likely surge in AI based patent filings in the days to come. Hence, granting proprietary rights to AI innovators and protecting AI driven innovations by enforcing regulations and standards in the country should be the way forward. The Committee, therefore, recommends that the department should channelise efforts to encourage and empower AI innovators by enacting suitable legislations or modifying the existing laws on IPR in order to accommodate AI based inventions….” (para 3.8)

…….  “The Committee notes that the dissolution of IPAB would lead to transferring of all IP-related appeals including the pending cases to High Courts and Commercial Courts (in copyrights matters). This may create additional burden on such courts which are already reeling under huge backlog of cases with inadequate expertise in hand to deal with IPR matters. It, therefore, opines that establishing an Intellectual Property Division (IPD) with dedicated IP benches as done by Delhi High Court in the wake of abolition of IPAB would ensure effective resolution of IPR cases on a timely basis. The Committee, therefore, recommends that the Government should take appropriate measures to encourage setting up of IPD in High Courts for providing alternative solution to resolve IPR cases.” (para 3.12)

Way Forward

The takeaway from this should be to develop strong safety nets as we move towards a transformative global advancement in AI-generated tools. Considering the ethical implications is key to ensuring that AI-based tools are developed and used responsibly. Furthermore, investment in research and development must be increased to facilitate advanced tech-learning and market-driven solutions in this dynamic environment, for instance by directing bigger companies to contribute a certain percentage of their profits towards corporate social responsibility and allocating those funds to government departments and research bodies. Promoting IP financing is another prospect worth discussing; it should be assessed in a timely manner so that capital can be raised by using IP as collateral with banks. Additionally, primary concerns such as job displacement, the proliferation of misinformation and the preservation of research integrity have to be addressed as processes are re-engineered to incorporate AI alongside humans. Working on all these measures will make data and algorithms more accountable by creating supportive institutional infrastructure, coupled with legislative modifications and amendments, that can enable better transparency and protect innovations through safeguards in their creation and deployment.

In view thereof, it is proposed to establish a comprehensive model centred on human-led development and growth, making it efficient, sustainable and user-friendly in a tech sphere that is about to witness new innovation and increasingly automated occupations. Though there have been many changes to the patent law regime in India, which have led to an improvement in India's ranking in the Global Innovation Index, we still need stimulus to compete with nations like the US and China.

The author is an Advocate on Record at the Supreme Court of India. Views are personal.
