8.2 Problem-Solving: Heuristics and Algorithms

Learning Objectives

  • Describe the differences between heuristics and algorithms in information processing.

When faced with a problem to solve, should you go with intuition or with more measured, logical reasoning? Obviously, we use both of these approaches. Some of the decisions we make are rapid, emotional, and automatic. Daniel Kahneman (2011) calls this “fast” thinking. By definition, fast thinking saves time. For example, you may quickly decide to buy something because it is on sale; your fast brain has perceived a bargain, and you go for it quickly. On the other hand, “slow” thinking requires more effort; applying this in the same scenario might cause us not to buy the item because we have reasoned that we don’t really need it, that it is still too expensive, and so on. Neither fast nor slow thinking guarantees a good decision; either can fail when employed at the wrong time. Sometimes it is not clear which is called for, because many decisions have a level of uncertainty built into them. In this section, we will explore some of the applications of these tendencies to think fast or slow.

We will look further into our thought processes, more specifically, into some of the problem-solving strategies that we use. Heuristics are information-processing strategies that are useful in many cases but may lead to errors when misapplied. A heuristic is a principle with broad application, essentially an educated guess about something. We use heuristics all the time, for example, when deciding what groceries to buy from the supermarket, when looking for a library book, when choosing the best route to drive through town to avoid traffic congestion, and so on. Heuristics can be thought of as aids to decision making; they allow us to reach a solution without a lot of cognitive effort or time.

The benefit of heuristics in helping us reach decisions fairly easily is also the potential downfall: the solution provided by the use of heuristics is not necessarily the best one. Let’s consider some of the most frequently applied, and misapplied, heuristics in the table below.

In many cases, we base our judgments on information that seems to represent, or match, what we expect will happen, while ignoring other potentially more relevant statistical information. When we do so, we are using the representativeness heuristic. Consider, for instance, the data presented in the table below. Let’s say that you went to a hospital, and you checked the records of the babies that were born on that given day. Which pattern of births do you think you are most likely to find?

Most people think that list B is more likely, probably because list B looks more random, and matches — or is “representative of” — our ideas about randomness, but statisticians know that any pattern of four girls and four boys is mathematically equally likely. Whether a boy or girl is born first has no bearing on what sex will be born second; these are independent events, each with a 50:50 chance of being a boy or a girl. The problem is that we have a schema of what randomness should be like, which does not always match what is mathematically the case. Similarly, people who see a flipped coin come up “heads” five times in a row will frequently predict, and perhaps even wager money, that “tails” will be next. This behaviour is known as the gambler’s fallacy. Mathematically, the gambler’s fallacy is an error: the likelihood of any single coin flip being “tails” is always 50%, regardless of how many times it has come up “heads” in the past.
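To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the textbook) comparing the exact probability of any specific eight-birth sequence with simulated frequencies. The two sequences used are hypothetical stand-ins for an "orderly" list and a "random-looking" list.

    import random

    # Every specific sequence of eight independent 50:50 births has probability
    # (1/2)**8, so an "orderly" list and a "random-looking" list are equally likely.
    def sequence_probability(target, trials=200_000):
        hits = 0
        for _ in range(trials):
            births = "".join(random.choice("GB") for _ in range(len(target)))
            hits += births == target
        return hits / trials

    print("Exact probability of any specific 8-birth order:", 0.5 ** 8)  # 0.00390625
    print("GGGGBBBB (orderly):       ", sequence_probability("GGGGBBBB"))
    print("BGBGGBGB (random-looking):", sequence_probability("BGBGGBGB"))

Both estimates hover around 0.004, matching the exact value; neither pattern is more likely than the other.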

The representativeness heuristic may explain why we judge people on the basis of appearance. Suppose you meet your new next-door neighbour, who drives a loud motorcycle, has many tattoos, wears leather, and has long hair. Later, you try to guess their occupation. What comes to mind most readily? Are they a teacher? Insurance salesman? IT specialist? Librarian? Drug dealer? The representativeness heuristic will lead you to compare your neighbour to the prototypes you have for these occupations and choose the one that they seem to represent the best. Thus, your judgment is affected by how much your neighbour seems to resemble each of these groups. Sometimes these judgments are accurate, but they often fail because they do not account for base rates, the actual frequencies with which these groups exist. In this case, the group with the lowest base rate is probably drug dealer.

Our judgments can also be influenced by how easy it is to retrieve a memory. The tendency to make judgments of the frequency or likelihood that an event occurs on the basis of the ease with which it can be retrieved from memory is known as the availability heuristic (MacLeod & Campbell, 1992; Tversky & Kahneman, 1973). Imagine, for instance, that I asked you to indicate whether there are more words in the English language that begin with the letter “R” or that have the letter “R” as the third letter. You would probably answer this question by trying to think of words that have each of the characteristics, thinking of all the words you know that begin with “R” and all that have “R” in the third position. Because it is much easier to retrieve words by their first letter than by their third, we may incorrectly guess that there are more words that begin with “R,” even though there are in fact more words that have “R” as the third letter.
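A rough way to check this claim yourself is to count both kinds of words in any word list. The sketch below is a minimal illustration; it assumes a plain-text word list, one word per line, exists at /usr/share/dict/words (a common Unix location, but still an assumption), and the exact counts will depend on the list used.

    # Counting words that start with "r" versus words with "r" as the third letter.
    WORDLIST_PATH = "/usr/share/dict/words"  # assumed location of a word list

    def count_r_positions(path=WORDLIST_PATH):
        first = third = 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                word = line.strip().lower()
                if len(word) >= 3:
                    first += word.startswith("r")
                    third += word[2] == "r"
        return first, third

    first, third = count_r_positions()
    print(f"'r' as first letter: {first}, 'r' as third letter: {third}")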

The availability heuristic may explain why we tend to overestimate the likelihood of crimes or disasters; those that are reported widely in the news are more readily imaginable, and therefore, we tend to overestimate how often they occur. Things that we find easy to imagine, or to remember from watching the news, are estimated to occur frequently. Anything that gets a lot of news coverage is easy to imagine. Availability bias does not just affect our thinking. It can change behaviour. For example, homicides are usually widely reported in the news, leading people to make inaccurate assumptions about the frequency of murder. In Canada, the murder rate has dropped steadily since the 1970s (Statistics Canada, 2018), but this information tends not to be reported, leading people to overestimate the probability of being affected by violent crime. In another example, doctors who recently treated patients suffering from a particular condition were more likely to diagnose the condition in subsequent patients because they overestimated the prevalence of the condition (Poses & Anthony, 1991).

The anchoring and adjustment heuristic is another example of how fast thinking can lead to a decision that might not be optimal. Anchoring and adjustment is easily seen when we are faced with buying something that does not have a fixed price. For example, if you are interested in a used car, and the asking price is $10,000, what price do you think you might offer? Using $10,000 as an anchor, you are likely to adjust your offer from there, and perhaps offer $9000 or $9500. Never mind that $10,000 may not be a reasonable anchoring price. Anchoring and adjustment does not just happen when we’re buying something. It can also be used in any situation that calls for judgment under uncertainty, such as sentencing decisions in criminal cases (Bennett, 2014), and it applies to groups as well as individuals (Rutledge, 1993).

In contrast to heuristics, which can be thought of as problem-solving strategies based on educated guesses, algorithms are problem-solving strategies that use rules. Algorithms are generally a logical set of steps that, if applied correctly, should be accurate. For example, you could make a cake using heuristics — relying on your previous baking experience and guessing at the number and amount of ingredients, baking time, and so on — or using an algorithm. The latter would require a recipe that provides step-by-step instructions; the recipe is the algorithm. Unless you are an extremely accomplished baker, the algorithm should provide you with a better cake than using heuristics would. While heuristics offer a solution that might be correct, a correctly applied algorithm is guaranteed to provide a correct solution. Of course, not all problems can be solved by algorithms.
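As a minimal illustration of the distinction (my own example, not the textbook's), the sketch below solves the same task two ways: an algorithm that checks every value and is guaranteed to return the true maximum, and a heuristic that samples a few values and merely makes an educated guess.

    import random

    def max_by_algorithm(values):
        # Rule-based: examine every value, so the true maximum is guaranteed.
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best

    def max_by_heuristic(values, samples=5):
        # Educated guess: inspect only a few randomly chosen values.
        # Fast and often close, but not guaranteed to be correct.
        return max(random.sample(values, min(samples, len(values))))

    data = [random.randint(0, 1_000_000) for _ in range(10_000)]
    print(max_by_algorithm(data))   # always the true maximum
    print(max_by_heuristic(data))   # usually large, sometimes wrong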

As with heuristics, the use of algorithmic processing interacts with behaviour and emotion. Understanding what strategy might provide the best solution requires knowledge and experience. As we will see in the next section, we are prone to a number of cognitive biases that persist despite knowledge and experience.

Key Takeaways

  • We use a variety of shortcuts in our information processing, such as the representativeness, availability, and anchoring and adjustment heuristics. These help us to make fast judgments but may lead to errors.
  • Algorithms are problem-solving strategies that are based on rules rather than guesses. Because they are based on logic, correctly applied algorithms are far less likely than heuristics to result in errors or incorrect solutions.

Bennett, M. W. (2014). Confronting cognitive ‘anchoring effect’ and ‘blind spot’ biases in federal sentencing: A modest solution for reforming a fundamental flaw. Journal of Criminal Law and Criminology, 104(3), 489-534.

Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

MacLeod, C., & Campbell, L. (1992). Memory accessibility and probability judgments: An experimental evaluation of the availability heuristic. Journal of Personality and Social Psychology, 63(6), 890–902.

Poses, R. M., & Anthony, M. (1991). Availability, wishful thinking, and physicians’ diagnostic judgments for patients with suspected bacteremia. Medical Decision Making, 11, 159-168.

Rutledge, R. W. (1993). The effects of group decisions and group-shifts on use of the anchoring and adjustment heuristic. Social Behavior and Personality, 21(3), 215-226.

Statistics Canada. (2018). Homicide in Canada, 2017. Retrieved from https://www150.statcan.gc.ca/n1/en/daily-quotidien/181121/dq181121a-eng.pdf

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Psychology - 1st Canadian Edition Copyright © 2020 by Sally Walters is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

7.3 Problem Solving

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving and decision making

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

Problem-Solving Strategies

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (Table 7.2). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
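As a minimal sketch of an everyday algorithm, the function below scales a recipe the way the pizza-dough example at the start of this section describes; the ingredient names and quantities are made up for illustration.

    # Scaling a recipe: the same ordered steps produce the same result every
    # time, which is what makes it an algorithm.
    def scale_recipe(recipe, factor):
        return {ingredient: amount * factor for ingredient, amount in recipe.items()}

    pizza_dough = {"flour_g": 500, "water_ml": 325, "yeast_g": 7, "salt_g": 10}
    print(scale_recipe(pizza_dough, 2))   # the doubled recipe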

A heuristic is another type of problem-solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
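Here is a worked version of the wedding example, sketched in Python: start from the required arrival time and subtract each duration. The calendar date and the extra traffic cushion are assumptions added for illustration, not part of the original example.

    from datetime import datetime, timedelta

    arrival = datetime(2024, 6, 1, 15, 30)       # seated by 3:30 PM (date is arbitrary)
    drive = timedelta(hours=2, minutes=30)       # D.C. to Philadelphia without traffic
    cushion = timedelta(minutes=45)              # assumed buffer for I-95 backups

    latest_departure = arrival - drive           # 01:00 PM
    safer_departure = arrival - drive - cushion  # 12:15 PM with the assumed cushion
    print(latest_departure.strftime("%I:%M %p"), safer_departure.strftime("%I:%M %p"))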

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Everyday Connection

Solving Puzzles

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (Figure 7.7) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
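Since Figure 7.7 is not reproduced here, the sketch below uses a hypothetical 4×4 grid with the same rules (each digit 1 through 4 exactly once per row, column, and 2×2 box, which is why each sums to 10) and solves it with a simple backtracking algorithm.

    # Backtracking solver for a 4x4 sudoku; 0 marks an empty cell.
    def valid(grid, r, c, digit):
        if digit in grid[r]:                          # row check
            return False
        if digit in (grid[i][c] for i in range(4)):   # column check
            return False
        br, bc = 2 * (r // 2), 2 * (c // 2)           # top-left corner of the 2x2 box
        return all(grid[br + i][bc + j] != digit for i in range(2) for j in range(2))

    def solve(grid):
        for r in range(4):
            for c in range(4):
                if grid[r][c] == 0:
                    for digit in (1, 2, 3, 4):
                        if valid(grid, r, c, digit):
                            grid[r][c] = digit
                            if solve(grid):
                                return True
                            grid[r][c] = 0            # undo and try the next digit
                    return False                      # dead end: backtrack
        return True                                   # no empty cells left

    puzzle = [[1, 0, 0, 4],
              [0, 0, 1, 0],
              [0, 3, 0, 0],
              [2, 0, 0, 3]]
    solve(puzzle)
    print(puzzle)   # [[1, 2, 3, 4], [3, 4, 1, 2], [4, 3, 2, 1], [2, 1, 4, 3]]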

Here is another popular type of puzzle (Figure 7.8) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

Take a look at the “Puzzling Scales” logic puzzle below (Figure 7.9). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but they just need to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past, even when it is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. Duncker (1945) conducted foundational research on functional fixedness. He created an experiment in which participants were given a candle, a book of matches, and a box of thumbtacks. They were instructed to use those items to attach the candle to the wall so that it did not drip wax onto the table below. Participants had to overcome functional fixedness to solve the problem (Figure 7.10). During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved their lives.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3.

Were you able to determine how many marbles are needed to balance the scales in Figure 7.9? You need nine. Were you able to solve the problems in Figure 7.7 and Figure 7.8? Here are the answers (Figure 7.11).

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • Authors: Rose M. Spielman, William J. Jenkins, Marilyn D. Lovett
  • Publisher/website: OpenStax
  • Book title: Psychology 2e
  • Publication date: Apr 22, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/psychology-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/psychology-2e/pages/7-3-problem-solving

© Jan 6, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

4 Main problem-solving strategies

In Psychology, you get to read about a ton of therapies. It’s mind-boggling how different theorists have looked at human nature differently and have come up with different, often somewhat contradictory, theoretical approaches.

Yet, you can’t deny the kernel of truth that’s there in all of them. All therapies, despite being different, have one thing in common: they all aim to solve people’s problems by equipping them with problem-solving strategies for dealing with their life problems.

Problem-solving is really at the core of everything we do. Throughout our lives, we’re constantly trying to solve one problem or another. When we can’t, all sorts of psychological problems take hold. Getting good at solving problems is a fundamental life skill.

Problem-solving stages

What problem-solving does is take you from an initial state (A) where a problem exists to a final or goal state (B), where the problem no longer exists.

To move from A to B, you need to perform some actions called operators. Engaging in the right operators moves you from A to B. So, the stages of problem-solving are:

  • Initial state (A)
  • Operators (the actions you perform)
  • Goal state (B)

The problem itself can either be well-defined or ill-defined. A well-defined problem is one where you can clearly see where you are (A), where you want to go (B), and what you need to do to get there (engaging the right operators).

For example, feeling hungry and wanting to eat can be seen as a problem, albeit a simple one for many. Your initial state is hunger (A) and your final state is satisfaction or no hunger (B). Going to the kitchen and finding something to eat is using the right operator.

In contrast, ill-defined or complex problems are those where one or more of the three problem solving stages aren’t clear. For example, if your goal is to bring about world peace, what is it exactly that you want to do?

It’s been rightly said that a problem well-defined is a problem half-solved. Whenever you face an ill-defined problem, the first thing you need to do is get clear about all the three stages.

Often, people will have a decent idea of where they are (A) and where they want to be (B). What they usually get stuck on is finding the right operators.
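One way to picture "finding the right operators" is as a search: given a start state A, a goal B, and a set of allowed operators, try operator sequences until one works. The toy number puzzle below is my own illustration, not the author's.

    from itertools import product

    # Toy example: A = 3, B = 20, and only two operators are allowed.
    operators = {"add 4": lambda x: x + 4, "double": lambda x: x * 2}

    def find_operator_sequence(start, goal, max_steps=4):
        # Try every operator sequence of increasing length until one reaches the goal.
        for length in range(1, max_steps + 1):
            for sequence in product(operators, repeat=length):
                value = start
                for name in sequence:
                    value = operators[name](value)
                if value == goal:
                    return sequence
        return None

    print(find_operator_sequence(3, 20))   # ('double', 'add 4', 'double')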

Initial theory in problem-solving

When people first attempt to solve a problem, i.e. when they first engage their operators, they often have an initial theory of solving the problem. As I mentioned in my article on overcoming challenges for complex problems, this initial theory is often wrong.

But, at the time, it’s usually the result of the best information the individual can gather about the problem. When this initial theory fails, the problem-solver gathers more data and refines the theory. Eventually, they arrive at an actual theory, i.e., a theory that works, which finally allows them to engage the right operators to move from A to B.

Problem-solving strategies

These are the operators that a problem-solver tries in order to move from A to B. There are several problem-solving strategies, but the main ones are:

  • Algorithms
  • Heuristics
  • Trial and error
  • Insight

1. Algorithms

When you follow a step-by-step procedure to solve a problem or reach a goal, you’re using an algorithm. If you follow the steps exactly, you’re guaranteed to find the solution. The drawback of this strategy is that it can get cumbersome and time-consuming for large problems.

Say I hand you a 200-page book and ask you to read out to me what’s written on page 100. If you start from page 1 and keep turning the pages, you’ll eventually reach page 100. There’s no question about it. But the process is time-consuming. So instead you use what’s called a heuristic.
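A minimal sketch of that page-turning algorithm: start at page 1 and turn one page at a time. It always finds the page, but the number of steps grows with the size of the book.

    def find_page_linearly(total_pages, target):
        # Start at page 1 and turn one page at a time: guaranteed, but slow.
        steps = 0
        for page in range(1, total_pages + 1):
            steps += 1
            if page == target:
                return steps
        raise ValueError("page not in book")

    print(find_page_linearly(200, 100))   # 100 steps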

2. Heuristics

Heuristics are rules of thumb that people use to simplify problems. They’re often based on memories from past experiences. They cut down the number of steps needed to solve a problem, but they don’t always guarantee a solution. Heuristics save us time and effort if they work.

You know that page 100 lies in the middle of the book. Instead of starting from page one, you try to open the book in the middle. Of course, you may not hit page 100, but you can get really close with just a couple of tries.

If you open page 90, for instance, you can then algorithmically move from 90 to 100. Thus, you can use a combination of heuristics and algorithms to solve the problem. In real life, we often solve problems like this.
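And here is a sketch of the combined strategy just described: a heuristic jump to roughly the middle of the book, followed by an algorithmic page-by-page walk from wherever you land. The ±10-page spread of the "landing" is an assumption for illustration.

    import random

    def find_page_with_heuristic(total_pages, target):
        # Heuristic jump: open somewhere near the middle (imprecise by assumption),
        # then walk page by page from wherever you landed (the algorithmic part).
        landed_on = total_pages // 2 + random.randint(-10, 10)
        return 1 + abs(target - landed_on)

    print(find_page_with_heuristic(200, 100))   # typically 1-11 steps instead of 100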

When police are looking for suspects in an investigation, they try to narrow down the problem similarly. Knowing the suspect is 6 feet tall isn’t enough, as there could be thousands of people out there with that height.

Knowing the suspect is 6 feet tall, male, wears glasses, and has blond hair narrows down the problem significantly.

3. Trial and error

When you have an initial theory to solve a problem, you try it out. If you fail, you refine or change your theory and try again. This is the trial-and-error process of solving problems. Behavioural and cognitive trial and error often go hand in hand, but for many problems, we start with behavioural trial and error until we’re forced to think.

Say you’re in a maze, trying to find your way out. You try one route without giving it much thought and you find it leads to nowhere. Then you try another route and fail again. This is behavioural trial and error because you aren’t putting any thought into your trials. You’re just throwing things at the wall to see what sticks.

This isn’t an ideal strategy but can be useful in situations where it’s impossible to get any information about the problem without doing some trials.

Then, when you have enough information about the problem, you shuffle that information in your mind to find a solution. This is cognitive trial and error, or analytical thinking. Behavioural trial and error can take a lot of time, so using cognitive trial and error as much as possible is advisable. You’ve got to sharpen your axe before you cut down the tree.

4. Insight

When solving complex problems, people get frustrated after having tried several operators that didn’t work. They abandon their problem and go on with their routine activities. Suddenly, they get a flash of insight that makes them confident they can now solve the problem.

I’ve done an entire article on the underlying mechanics of insight. Long story short, when you take a step back from your problem, it helps you see things in a new light. You make use of associations that were previously unavailable to you.

You get more puzzle pieces to work with and this increases the odds of you finding a path from A to B, i.e. finding operators that work.

Pilot problem-solving

No matter what problem-solving strategy you employ, it’s all about finding out what works. Your actual theory tells you what operators will take you from A to B. Complex problems don’t reveal their actual theories easily, precisely because they are complex.

Therefore, the first step to solving a complex problem is getting as clear as you can about what you’re trying to accomplish: collecting as much information as you can about the problem.

This gives you enough raw materials to formulate an initial theory. We want our initial theory to be as close to an actual theory as possible. This saves time and resources.

Solving a complex problem can mean investing a lot of resources. Therefore, it is recommended you verify your initial theory if you can. I call this pilot problem-solving.

Before businesses invest in making a product, they sometimes distribute free versions to a small sample of potential customers to ensure their target audience will be receptive to the product.

Before making a series of TV episodes, TV show producers often release pilot episodes to figure out whether the show can take off.

Before conducting a large study, researchers do a pilot study to survey a small sample of the population to determine if the study is worth carrying out.

The same ‘testing the waters’ approach needs to be applied to solving any complex problem you might be facing. Is your problem worth investing a lot of resources in? In management, we’re constantly taught about Return On Investment (ROI). The ROI should justify the investment.

If the answer is yes, go ahead and formulate your initial theory based on extensive research. Find a way to verify your initial theory. You need this reassurance that you’re going in the right direction, especially for complex problems that take a long time to solve.

Getting your causal thinking right

Problem-solving boils down to getting your causal thinking right. Finding solutions is all about finding out what works, i.e., finding operators that take you from A to B. To succeed, you need to be confident in your initial theory (“If I do X and Y, they’ll lead me to B”); that is, you need to be sure that doing X and Y will cause B.

All obstacles to problem-solving or goal-accomplishing are rooted in faulty causal thinking leading to not engaging the right operators. When your causal thinking is on point, you’ll have no problem engaging the right operators.

As you can imagine, for complex problems, getting our causal thinking right isn’t easy. That’s why we need to formulate an initial theory and refine it over time.

I like to think of problem-solving as the ability to project the present into the past or into the future. When you’re solving problems, you’re basically looking at your present situation and asking yourself two questions:

“What caused this?” (Projecting present into the past)

“What will this cause?” (Projecting present into the future)

The first question is more relevant to problem-solving and the second to goal-accomplishing.

If you find yourself in a mess, you need to answer the “What caused this?” question correctly. For the operators you’re currently engaging to reach your goal, ask yourself, “What will this cause?” If you think they cannot cause B, it’s time to refine your initial theory.

5 Effective Problem-Solving Strategies

Got a problem you’re trying to solve? Strategies like trial and error, gut instincts, and “working backward” can help. We look at some examples and how to use them.

We all face problems daily. Some are simple, like deciding what to eat for dinner. Others are more complex, like resolving a conflict with a loved one or figuring out how to overcome barriers to your goals.

No matter what problem you’re facing, these five problem-solving strategies can help you develop an effective solution.

What are problem-solving strategies?

To effectively solve a problem, you need a problem-solving strategy.

If you’ve had to make a hard decision before then you know that simply ruminating on the problem isn’t likely to get you anywhere. You need an effective strategy — or a plan of action — to find a solution.

In general, effective problem-solving strategies include the following steps:

  • Define the problem.
  • Come up with alternative solutions.
  • Decide on a solution.
  • Implement the solution.

Problem-solving strategies don’t guarantee a solution, but they do help guide you through the process of finding a resolution.

Using problem-solving strategies also has other benefits. For example, having a strategy you can turn to can help you overcome anxiety and distress when you’re first faced with a problem or difficult decision.

The key is to find a problem-solving strategy that works for your specific situation, as well as your personality. One strategy may work well for one type of problem but not another. In addition, some people may prefer certain strategies over others; for example, creative people may prefer to rely on their insights rather than use algorithms.

It’s important to be equipped with several problem-solving strategies so you use the one that’s most effective for your current situation.

1. Trial and error

One of the most common problem-solving strategies is trial and error. In other words, you try different solutions until you find one that works.

For example, say the problem is that your Wi-Fi isn’t working. You might try different things until it starts working again, like restarting your modem or your devices until you find or resolve the problem. When one solution isn’t successful, you try another until you find what works.

Trial and error can also work for interpersonal problems. For example, if your child always stays up past their bedtime, you might try different solutions — a visual clock to remind them of the time, a reward system, or gentle punishments — to find a solution that works.

2. Heuristics

Sometimes, it’s more effective to solve a problem based on a formula than to try different solutions blindly.

Heuristics are problem-solving strategies or frameworks people use to quickly find an approximate solution. It may not be the optimal solution, but it’s faster than finding the perfect resolution, and it’s “good enough.”

Algorithms and equations are formula-based strategies that are closely related to heuristics, although, strictly speaking, an algorithm guarantees a correct result while a heuristic does not.

An algorithm is a step-by-step problem-solving strategy based on a formula that, if followed exactly, is guaranteed to give you a correct result. For example, you might use an algorithm to determine how much food is needed to feed people at a large party.
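For instance, a party-food formula might look like the sketch below; the per-person amounts and the eight-slices-per-pizza figure are assumptions for illustration, not catering standards.

    import math

    def food_needed(guests, appetizer_pieces_per_person=6, slices_per_person=3):
        # Apply the same fixed formula to any head count.
        return {
            "appetizer_pieces": guests * appetizer_pieces_per_person,
            "pizzas": math.ceil(guests * slices_per_person / 8),  # 8 slices per pizza
        }

    print(food_needed(40))   # {'appetizer_pieces': 240, 'pizzas': 15}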

However, many life problems have no formulaic solution; for example, you may not be able to come up with an algorithm to solve the problem of making amends with your spouse after a fight.

3. Gut instincts (insight problem-solving)

While algorithm-based problem-solving is formulaic, insight problem-solving is the opposite.

When we use insight as a problem-solving strategy, we depend on our “gut instincts,” or what we know and feel about a situation, to come up with a solution. People might describe insight-based solutions to problems as an “aha moment.”

For example, you might face the problem of whether or not to stay in a relationship. The solution to this problem may come as a sudden insight that you need to leave. In insight problem-solving, the cognitive processes that help you solve a problem happen outside your conscious awareness.

4. Working backward

Working backward is a problem-solving approach often taught to help students solve problems in mathematics. However, it’s useful for real-world problems as well.

Working backward is when you start with the solution and “work backward” to figure out how you got to the solution. For example, if you know you need to be at a party by 8 p.m., you might work backward to problem-solve when you must leave the house, when you need to start getting ready, and so on.

5. Means-end analysis

Means-end analysis is a problem-solving strategy that, to put it simply, helps you get from “point A” to “point B” by examining and coming up with solutions to obstacles.

When using means-end analysis you define the current state or situation (where you are now) and the intended goal. Then, you come up with solutions to get from where you are now to where you need to be.

For example, a student might be faced with the problem of how to successfully get through finals season. They haven’t started studying, but their end goal is to pass all of their finals. Using means-end analysis, the student can examine the obstacles that stand between their current state and their end goal (passing their finals).

They could see, for example, that one obstacle is that they get distracted from studying by their friends. They could devise a solution to this obstacle by putting their phone on “do not disturb” mode while studying.
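A minimal sketch of means-end analysis using this finals example: compare the current state with the goal state, treat each difference as an obstacle, and pick an action that reduces it. The state features and actions below are made up for illustration.

    current_state = {"studied": False, "phone_distractions": True}
    goal_state    = {"studied": True,  "phone_distractions": False}

    # One candidate action per difference between the current and goal states.
    actions = {
        "studied": "make a study schedule and start the first topic today",
        "phone_distractions": "put the phone on 'do not disturb' while studying",
    }

    def means_end_plan(current, goal, actions):
        plan = []
        for feature, wanted in goal.items():
            if current.get(feature) != wanted:   # an obstacle between A and B
                plan.append(actions[feature])    # the "means" that reduces it
        return plan

    for step in means_end_plan(current_state, goal_state, actions):
        print("-", step)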

Let’s recap

Whether they’re simple or complex, we’re faced with problems every day. To successfully solve these problems we need an effective strategy. There are many different problem-solving strategies to choose from.

Although problem-solving strategies don’t guarantee a solution, they can help you feel less anxious about problems and make it more likely that you come up with an answer.

Last medically reviewed on November 1, 2022

Ideas Made to Matter

How to use algorithms to solve everyday problems

Kara Baskin

May 8, 2017

How can I navigate the grocery store quickly? Why doesn’t anyone like my Facebook status? How can I alphabetize my bookshelves in a hurry? Apple data visualizer and MIT System Design and Management graduate Ali Almossawi solves these common dilemmas and more in his new book, “Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier,” a quirky, illustrated guide to algorithmic thinking.

For the uninitiated: What is an algorithm? And how can algorithms help us to think smarter?

An algorithm is a process with unambiguous steps that has a beginning and an end, and does something useful.

Algorithmic thinking is taking a step back and asking, “If it’s the case that algorithms are so useful in computing to achieve predictability, might they also be useful in everyday life, when it comes to, say, deciding between alternative ways of solving a problem or completing a task?” In all cases, we optimize for efficiency: We care about time or space.

Note the mention of “deciding between.” Computer scientists do that all the time, and I was convinced that the tools they use to evaluate competing algorithms would be of interest to a broad audience.

Why did you write this book, and who can benefit from it?

All the books I came across that tried to introduce computer science involved coding. My approach to making algorithms compelling was focusing on comparisons. I take algorithms and put them in a scene from everyday life, such as matching socks from a pile, putting books on a shelf, remembering things, driving from one point to another, or cutting an onion. These activities can be mapped to one or more fundamental algorithms, which form the basis for the field of computing and have far-reaching applications and uses.

I wrote the book with two audiences in mind. One, anyone, be it a learner or an educator, who is interested in computer science and wants an engaging and lighthearted, but not a dumbed-down, introduction to the field. Two, anyone who is already familiar with the field and wants to experience a way of explaining some of the fundamental concepts in computer science differently than how they’re taught.

I’m going to the grocery store and only have 15 minutes. What do I do?

Do you know what the grocery store looks like ahead of time? If you know what it looks like, it determines your list. How do you prioritize things on your list? Order the items in a way that allows you to avoid walking down the same aisles twice.

For me, the intriguing thing is that the grocery store is a scene from everyday life that I can use as a launch pad to talk about various related topics, like priority queues and graphs and hashing. For instance, what is the most efficient way for a machine to store a prioritized list, and what happens when the equivalent of you scratching an item from a list happens in the machine’s list? How is a store analogous to a graph (an abstraction in computer science and mathematics that defines how things are connected), and how is navigating the aisles in a store analogous to traversing a graph?
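
To make the graph analogy concrete, here is a minimal sketch (not from the book): a handful of store sections are modelled as a graph, and breadth-first search finds a shortest route between two of them. The layout and section names are invented for illustration.

```python
# Illustrative sketch: a store as a graph, and route-finding as graph traversal.
from collections import deque

store = {                                     # hypothetical layout: section -> adjacent sections
    "entrance": ["produce", "bakery"],
    "produce":  ["entrance", "dairy"],
    "bakery":   ["entrance", "frozen"],
    "dairy":    ["produce", "checkout"],
    "frozen":   ["bakery", "checkout"],
    "checkout": ["dairy", "frozen"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: returns a fewest-hops path from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route(store, "entrance", "checkout"))
# -> ['entrance', 'produce', 'dairy', 'checkout']  (the route via frozen is equally short)
```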

Nobody follows me on Instagram. How do I get more followers?

The concept of links and networks, which I cover in Chapter 6, is relevant here. It’s much easier to get to people whom you might be interested in and who might be interested in you if you can start within the ball of links that connects those people, rather than starting at a random spot.

You mention Instagram: There, the hashtag is one way to enter that ball of links. Tag your photos, engage with users who tag their photos with the same hashtags, and you should be on your way to stardom.

What are the secret ingredients of a successful Facebook post?

I’ve posted things on social media that have died a sad death and then posted the same thing at a later date that somehow did great. Again, if we think of it in terms that are relevant to algorithms, we’d say that the challenge with making something go viral is really getting that first spark. And to get that first spark, a person who is connected to the largest number of people likely to engage with that post needs to share it.

With [my first book], “Bad Arguments,” I spent a month pouring close to $5,000 into advertising for that project with moderate results. And then one science journalist with a large audience wrote about it, and the project took off and hasn’t stopped since.

What problems do you wish you could solve via algorithm but can’t?

When we care about efficiency, thinking in terms of algorithms is useful. There are cases when that’s not the quality we want to optimize for — for instance, learning or love. I walk for several miles every day, all throughout the city, as I find it relaxing. I’ve never asked myself, “What’s the most efficient way I can traverse the streets of San Francisco?” It’s not relevant to my objective.

Algorithms are a great way of thinking about efficiency, but the question has to be, “What approach can you optimize for that objective?” That’s what worries me about self-help: Books give you a silver bullet for doing everything “right” but leave out all the nuances that make us different. What works for you might not work for me.

Which companies use algorithms well?

When you read that the overwhelming majority of the shows that users of, say, Netflix, watch are due to Netflix’s recommendation engine, you know they’re doing something right.


Chapter 7: Thinking and Intelligence

Problem Solving

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

PROBLEM-SOLVING STRATEGIES

When you are presented with a problem—whether it is a complex mathematical problem or a broken printer—how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( [link] ). For example, a well-known strategy is trial and error . The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.
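
A minimal sketch of trial and error in code, using the broken-printer example; the printer model and the candidate fixes are hypothetical stand-ins, not part of the original text.

```python
# Trial and error: apply candidate fixes one at a time until the printer works.
def printer_works(p):
    return p["has_ink"] and p["paper_loaded"] and p["connected"]

def reconnect_cable(p): p["connected"] = True
def load_paper(p):      p["paper_loaded"] = True
def refill_ink(p):      p["has_ink"] = True

def trial_and_error(printer, candidate_fixes):
    """Try each fix in turn; stop as soon as the printer works."""
    for fix in candidate_fixes:
        fix(printer)
        if printer_works(printer):
            return fix.__name__          # the attempt that finally solved it
    return None                          # none of the candidates worked

printer = {"has_ink": False, "paper_loaded": True, "connected": True}
print(trial_and_error(printer, [reconnect_cable, load_paper, refill_ink]))
# -> 'refill_ink'
```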

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
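
As one possible illustration (not drawn from the text), the sketch below alphabetizes a shelf of books with insertion sort. Followed exactly, these steps produce the same ordering every time, which is what makes the procedure an algorithm.

```python
# A step-by-step algorithm: alphabetize a shelf of books with insertion sort.
def alphabetize(books):
    shelf = list(books)                      # work on a copy
    for i in range(1, len(shelf)):
        current = shelf[i]
        j = i - 1
        while j >= 0 and shelf[j].lower() > current.lower():
            shelf[j + 1] = shelf[j]          # shift larger titles one slot right
            j -= 1
        shelf[j + 1] = current               # insert the title in its correct place
    return shelf

print(alphabetize(["Walden", "Bad Choices", "Middlemarch", "Antigone"]))
# -> ['Antigone', 'Bad Choices', 'Middlemarch', 'Walden']
```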

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment
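
The contrast between a rule of thumb and a fuller computation can also be sketched in code. In this invented supermarket example, the heuristic “join the line with the fewest people” is quick but can pick a slower line than actually adding up the items in every cart; none of the numbers come from the text.

```python
# A "rule of thumb" versus the exhaustive check it replaces.
def shortest_line(lines):
    # heuristic: pick the line with the fewest shoppers (ignores cart sizes)
    return min(range(len(lines)), key=lambda i: len(lines[i]))

def fastest_line(lines, seconds_per_item=3):
    # exhaustive check: estimate total scan time for every line
    return min(range(len(lines)), key=lambda i: sum(lines[i]) * seconds_per_item)

lines = [[45, 50], [8, 6, 7, 5]]            # two big carts vs. four small baskets
print(shortest_line(lines))                  # -> 0  (the heuristic's choice)
print(fastest_line(lines))                   # -> 1  (actually the faster line)
```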

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
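
A small sketch of working backwards for the wedding example: start from the arrival target and subtract the driving time. The 45-minute traffic buffer and the calendar date are assumptions added for illustration, not figures from the text.

```python
# Working backwards from the 3:30 PM arrival target.
from datetime import datetime, timedelta

arrival = datetime(2024, 6, 1, 15, 30)       # want to be seated by 3:30 PM (date is arbitrary)
drive = timedelta(hours=2, minutes=30)       # Philadelphia without traffic
buffer = timedelta(minutes=45)               # I-95 congestion allowance (assumed)

departure = arrival - drive - buffer
print(departure.strftime("%I:%M %p"))        # -> 12:15 PM
```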

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( [link] ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

The 4×4 sudoku grid, with empty cells shown as dashes (each bolded box is one 2×2 quadrant of the grid):

3  –  –  2
–  4  1  –
–  3  2  –
4  –  –  1

How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this section.)
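
For readers curious how a computer would treat this as an algorithm rather than a heuristic, here is a minimal backtracking solver for the 4×4 puzzle above (an illustrative sketch, not part of the original text). It exhaustively tries digits and is guaranteed to find the solution if one exists; running it reproduces the completed grid shown at the end of this section.

```python
# Backtracking solver for the 4x4 sudoku above; 0 marks an empty cell.
def valid(grid, r, c, d):
    if d in grid[r]:                                   # row check
        return False
    if d in (grid[i][c] for i in range(4)):            # column check
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)                # top-left of the 2x2 box
    return all(grid[br + i][bc + j] != d for i in range(2) for j in range(2))

def solve(grid):
    for r in range(4):
        for c in range(4):
            if grid[r][c] == 0:
                for d in (1, 2, 3, 4):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0                 # undo and try the next digit
                return False                           # no digit fits: backtrack
    return True                                        # no empty cells left

puzzle = [[3, 0, 0, 2],
          [0, 4, 1, 0],
          [0, 3, 2, 0],
          [4, 0, 0, 1]]
solve(puzzle)
print(puzzle)   # -> [[3, 1, 4, 2], [2, 4, 1, 3], [1, 3, 2, 4], [4, 2, 3, 1]]
```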

Here is another popular type of puzzle ( [link] ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

A square shaped outline contains three rows and three columns of dots with equal space between them.

Did you figure it out? (The answer is at the end of this section.) Once you understand how to crack this puzzle, you won’t forget.

Take a look at the “Puzzling Scales” logic puzzle below ( [link] ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

Sam Loyd’s “Puzzling Scales”: three blocks and a top balance twelve marbles, and the top alone balances one block and eight marbles. Since the scales balance in both arrangements, how many marbles will it take to balance the top?

PITFALLS TO PROBLEM SOLVING

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is where you persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Link to Learning

Check out this Apollo 13 scene where a group of NASA engineers is given the task of overcoming functional fixedness.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in [link].

Please visit this site to see a clever music video that a high school teacher made to explain these and other cognitive biases to his AP psychology students.

Were you able to determine how many marbles are needed to balance the scales in [link] ? You need nine. Were you able to solve the problems in [link] and [link] ? Here are the answers ( [link] ).
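
As a quick check of the scales answer, the two balance conditions can be written as equations in “marble units” and solved. This sketch uses the third-party sympy library purely for illustration; the same algebra can of course be done by hand.

```python
# 3 blocks + 1 top = 12 marbles, and 1 top = 1 block + 8 marbles.
from sympy import symbols, Eq, solve

block, top = symbols("block top")            # weights measured in marbles
answer = solve([Eq(3 * block + top, 12),
                Eq(top, block + 8)],
               [block, top])
print(answer)                                # -> {block: 1, top: 9}  (nine marbles)
```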

The completed sudoku grid, with the digits supplied in the original puzzle shown in parentheses:

(3)  1   4  (2)
 2  (4) (1)  3
 1  (3) (2)  4
(4)  2   3  (1)

The nine-dot puzzle is solved with four straight lines drawn without lifting the pencil; the trick is that the lines must extend beyond the borders of the square. One solution: draw a line across the top row of dots and continue past the square; angle down through the right-middle and bottom-centre dots, extending below the square; draw straight up through the three dots of the left column; and finish diagonally through the centre dot to the bottom-right dot.

Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.

Self Check Questions

Critical Thinking Questions

1. What is functional fixedness and how can overcoming it help you solve problems?

2. How does an algorithm save you time and energy when solving a problem?

Personal Application Question

3. Which type of bias do you recognize in your own decision-making processes? How has this bias affected how you’ve made decisions in the past, and how can you use your awareness of it to improve your decision-making skills in the future?

1. Functional fixedness occurs when you cannot see a use for an object other than the use for which it was intended. For example, if you need something to hold up a tarp in the rain, but only have a pitchfork, you must overcome your expectation that a pitchfork can only be used for garden chores before you realize that you could stick it in the ground and drape the tarp on top of it to hold it up.

2. An algorithm is a proven formula for achieving a desired outcome. It saves time because if you follow it exactly, you will solve the problem without having to figure out how to solve the problem. It is a bit like not reinventing the wheel.

  • Psychology. Authored by : OpenStax College. Located at : http://cnx.org/contents/[email protected]:1/Psychology . License : CC BY: Attribution . License Terms : Download for free at http://cnx.org/content/col11629/latest/.



Algorithms and Heuristics as Strategies of Problem Solving


In this article, we will explain algorithms and heuristics as strategies of problem-solving.


Even with all the necessary knowledge and skills, success in problem-solving is not guaranteed.

In addition to knowledge and skills, problem-solving requires a general strategy.

A strategy is a series of steps used to solve a problem efficiently by extracting relevant data and providing a planned approach.

Cognitive psychologists identify two main types of strategies: algorithms and heuristics.

Algorithms:

An algorithm is a method that guarantees a correct solution to a problem when its well-defined steps are followed exactly. For instance, in an anagram problem, an algorithm would involve trying every possible letter sequence until the correct, meaningful word is found. An algorithm has four essential properties:

  • Each step of an algorithm must be precise and unambiguous to eliminate any uncertainty.
  • An algorithm must eventually stop or terminate its execution to provide a result.
  • An algorithm must deliver the correct solution to the problem.
  • An algorithm must apply to all instances of the problem it addresses.

While an algorithm guarantees a solution, its execution often demands significant time and effort, making it less practical for human operators to use frequently.
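
A sketch of the anagram algorithm described above: it tries every letter ordering, so it is guaranteed to find a valid word if one exists, but the number of orderings grows factorially with the number of letters, which is exactly the time-and-effort cost mentioned here. The tiny word list is an assumption standing in for a real dictionary.

```python
# Brute-force anagram solving: exhaustive, guaranteed, and expensive for long words.
from itertools import permutations

WORDS = {"listen", "silent", "enlist", "tinsel"}   # stand-in dictionary

def solve_anagram(letters):
    for perm in permutations(letters):             # n! candidate orderings
        candidate = "".join(perm)
        if candidate in WORDS:
            return candidate
    return None

print(solve_anagram("nitsel"))                     # -> 'tinsel' (the first valid word reached)
```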

Heuristics:

Heuristics are general guidelines used for problem-solving but don’t guarantee correct solutions. They offer various approaches, and if one doesn’t work, another can be tried. General heuristics apply broadly, while specific ones are for specialized fields.

Means-End Analysis is a common heuristic that breaks a problem into smaller goals. Achieving these smaller goals moves closer to the main goal, aiding problem-solving.
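
A toy illustration (not from the text) of means-end analysis: at each step the procedure applies whichever operator most reduces the difference between the current state and the goal. Because the choice is greedy, it can reach the goal by a longer route than necessary, or fail to reach it at all, which is characteristic of heuristics.

```python
# Greedy means-end analysis on a toy numeric problem.
def means_end(start, goal, operators, max_steps=20):
    state, plan = start, []
    for _ in range(max_steps):
        if state == goal:
            return plan
        # apply the operator whose result lies closest to the goal
        name, result = min(((n, f(state)) for n, f in operators.items()),
                           key=lambda pair: abs(goal - pair[1]))
        state = result
        plan.append(name)
    return plan                       # may stop short of the goal: no guarantee

operators = {"add 1": lambda x: x + 1,
             "double": lambda x: x * 2}

print(means_end(3, 14, operators))
# -> ['double', 'double', 'add 1', 'add 1'] -- the goal is reached, though a shorter
#    plan (double, add 1, double) exists, showing the heuristic is not optimal
```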

Working Backward starts from the goal and goes back to the initial state. It’s useful for certain problems like mazes but requires a well-defined end state.

Analogies involve using past problem-solving experiences to tackle current ones. This strategy identifies similarities between past and present problems to find solutions.



Heuristic Problem Solving: A comprehensive guide with 5 Examples

What are heuristics?

Advantages of using heuristic problem solving

  • Speed: Heuristics are designed to find solutions quickly, saving time in problem solving tasks. Rather than spending a lot of time analyzing every possible solution, heuristics help to narrow down the options and focus on the most promising ones.
  • Flexibility: Heuristics are not rigid, step-by-step procedures. They allow for flexibility and creativity in problem solving, leading to innovative solutions. They encourage thinking outside the box and can generate unexpected and valuable ideas.
  • Simplicity: Heuristics are often easy to understand and apply, making them accessible to anyone regardless of their expertise or background. They don’t require specialized knowledge or training, which means they can be used in various contexts and by different people.
  • Cost-effective: Because heuristics are simple and efficient, they can save time, money, and effort in finding solutions. They also don’t require expensive software or equipment, making them a cost-effective approach to problem solving.
  • Real-world applicability: Heuristics are often based on practical experience and knowledge, making them relevant to real-world situations. They can help solve complex, messy, or ill-defined problems where other problem solving methods may not be practical.
Disadvantages of using heuristic problem solving

  • Potential for errors: Heuristic problem solving relies on generalizations and assumptions, which may lead to errors or incorrect conclusions. This is especially true if the heuristic is not based on a solid understanding of the problem or the underlying principles.
  • Limited scope: Heuristic problem solving may only consider a limited number of potential solutions and may not identify the most optimal or effective solution.
  • Lack of creativity: Heuristic problem solving may rely on pre-existing solutions or approaches, limiting creativity and innovation in problem-solving.
  • Over-reliance: Heuristic problem solving may lead to over-reliance on a specific approach or heuristic, which can be problematic if the heuristic is flawed or ineffective.
  • Lack of transparency: Heuristic problem solving may not be transparent or explainable, as the decision-making process may not be explicitly articulated or understood.
Heuristic problem solving examples

  • Trial and error: This heuristic involves trying different solutions to a problem and learning from mistakes until a successful solution is found. A software developer encountering a bug in their code may try other solutions and test each one until they find the one that solves the issue.
  • Working backward: This heuristic involves starting at the goal and then figuring out what steps are needed to reach that goal. For example, a project manager may begin by setting a project deadline and then work backward to determine the necessary steps and deadlines for each team member to ensure the project is completed on time.
  • Breaking a problem into smaller parts: This heuristic involves breaking down a complex problem into smaller, more manageable pieces that can be tackled individually. For example, an HR manager tasked with implementing a new employee benefits program may break the project into smaller parts, such as researching options, getting quotes from vendors, and communicating the unique benefits to employees.
  • Using analogies: This heuristic involves finding similarities between a current problem and a similar problem that has been solved before and using the solution to the previous issue to help solve the current one. For example, a salesperson struggling to close a deal may use an analogy to a successful sales pitch they made to help guide their approach to the current pitch.
  • Simplifying the problem: This heuristic involves simplifying a complex problem by ignoring details that are not necessary for solving it. This allows the problem solver to focus on the most critical aspects of the problem. For example, a customer service representative dealing with a complex issue may simplify it by breaking it down into smaller components and addressing them individually rather than simultaneously trying to solve the entire problem.


Frequently asked questions

What are the three types of heuristics?

What are the four stages of heuristics in problem solving?



Review Article | Open access | Published: 17 February 2023

A brief history of heuristics: how did research on heuristics evolve?

Mohamad Hjeij (ORCID: orcid.org/0000-0003-4231-1395) and Arnis Vilks

Humanities and Social Sciences Communications, volume 10, Article number: 64 (2023)


Heuristics are often characterized as rules of thumb that can be used to speed up the process of decision-making. They have been examined across a wide range of fields, including economics, psychology, and computer science. However, scholars still struggle to find substantial common ground. This study provides a historical review of heuristics as a research topic before and after the emergence of the subjective expected utility (SEU) theory, emphasising the evolutionary perspective that considers heuristics as resulting from the development of the brain. We find it useful to distinguish between deliberate and automatic uses of heuristics, but point out that they can be used consciously and subconsciously. While we can trace the idea of heuristics through many centuries and fields of application, we focus on the evolution of the modern notion of heuristics through three waves of research, starting with Herbert Simon in the 1950s, who introduced the notion of bounded rationality and suggested the use of heuristics in artificial intelligence, thereby paving the way for all later research on heuristics. A breakthrough came with Daniel Kahneman and Amos Tversky in the 1970s, who analysed the biases arising from using heuristics. The resulting research programme became the subject of criticism by Gerd Gigerenzer in the 1990s, who argues that an ‘adaptive toolbox’ consisting of ‘fast-and-frugal’ heuristics can yield ‘ecologically rational’ decisions.


Introduction

Over the past 50 years, the notion of ‘heuristics’ has considerably gained attention in fields as diverse as psychology, cognitive science, decision theory, computer science, and management scholarship. While for 1970, the Scopus database finds a meagre 20 published articles with the word ‘heuristic’ in their title, the number has increased to no less than 3783 in 2021 (Scopus, 2022 ).

We take this to be evidence that many researchers in the aforementioned fields find the literature that refers to heuristics stimulating and that it gives rise to questions that deserve further enquiry. While there are some review articles on the topic of heuristics (Gigerenzer and Gaissmaier, 2011 ; Groner et al., 1983 ; Hertwig and Pachur, 2015 ; Semaan et al., 2020 ), a somewhat comprehensive and non-partisan historical review seems to be missing.

While interest in heuristics is growing, the very notion of heuristics remains elusive to the point that, e.g., Shah and Oppenheimer (2008) begin their paper with the statement: ‘The word “heuristic” has lost its meaning.’ Even if one leaves aside characterizations such as ‘rule of thumb’ or ‘mental shortcut’ and considers what Kahneman (2011) calls ‘the technical definition of heuristic,’ namely ‘a simple procedure that helps find adequate, though often imperfect, answers to difficult questions,’ one is immediately left wondering how simple it has to be, what an adequate, but imperfect, answer is, and how difficult the questions need to be, in order to classify a procedure as a heuristic. Shah and Oppenheimer conclude that ‘the term heuristic is vague enough to describe anything’.

However, one feature does distinguish heuristics from certain other, typically more elaborate procedures: heuristics are problem-solving methods that do not guarantee an optimal solution. The use of heuristics is, therefore, inevitable where no method to find an optimal solution exists or is known to the problem-solver, in particular where the problem and/or the optimality criterion is ill-defined. However, the use of heuristics may be advantageous even where the problem to be solved is well-defined and methods do exist which would guarantee an optimal solution. This is because definitions of optimality typically ignore constraints on the process of solving the problem and the costs of that process. Compared to infallible but elaborate methods, heuristics may prove to be quicker or more efficient.
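
The trade-off described here can be illustrated with a small routing example (ours, not the authors’): exhaustive search over all orderings guarantees the shortest visiting route, while the nearest-neighbour rule is far cheaper but may return a longer one. The coordinates are made up for illustration.

```python
# Exhaustive search (infallible but costly) versus a nearest-neighbour heuristic.
from itertools import permutations
from math import dist

points = [(0, 0), (9, 1), (2, 8), (8, 7), (1, 3), (6, 4)]   # made-up coordinates

def route_length(route):
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def exhaustive(points):
    # try every ordering of the remaining points after a fixed start: (n-1)! candidates
    start, rest = points[0], points[1:]
    return min(((start,) + perm for perm in permutations(rest)), key=route_length)

def nearest_neighbour(points):
    # heuristic: always walk to the closest point not yet visited
    route, rest = [points[0]], set(points[1:])
    while rest:
        nxt = min(rest, key=lambda p: dist(route[-1], p))
        route.append(nxt)
        rest.remove(nxt)
    return route

print(round(route_length(exhaustive(points)), 2))        # optimal route length
print(round(route_length(nearest_neighbour(points)), 2)) # never shorter, often longer
```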

Nevertheless, the range of what has been called heuristics is very broad. Application of a heuristic may require intuition, guessing, exploration, or experience; some heuristics are rather elaborate, others are truly shortcuts, some are described in somewhat loose terms, and others are well-defined algorithms.

One procedure of decision-making that is commonly not regarded as a heuristic is the application of the full-blown theory of subjective expected utility (SEU) in the tradition of Ramsey ( 1926 ), von Neumann and Morgenstern ( 1944 ), and Savage ( 1954 ). This theory is arguably spelling out what an ideally rational decision would be, but was already seen by Savage (p. 16) to be applicable only in what he called a ‘small world’. Quite a few approaches that have been called heuristics have been explicitly motivated by SEU imposing demands on the decision-maker, which are utterly impractical (cf., e.g., Klein, 2001 , for a discussion). As a second defining feature of the heuristics we want to consider, therefore, we take them to be procedures of decision-making that differ from the ‘gold standard’ of SEU by being practically applicable in at least a number of interesting cases. Along with SEU, we also leave aside the rules of deductive logic, such as Aristotelian syllogisms, modus ponens, modus tollens, etc. While these can also be seen as rules of decision-making, and the universal validity of some of them is not quite uncontroversial (see, e.g., Priest, 2008 , for an introduction to non-classical logic), they are widely regarded as ‘infallible’. By stark contrast, it seems characteristic for heuristics that their application may fail to yield a ‘best’ or ‘correct’ result.

By taking heuristics to be practically applicable, but fallible, procedures for problem-solving, we will also neglect the literature that focuses on the adjective ‘heuristic’ instead of on the noun. When, e.g., Suppes ( 1983 ) characterizes axiomatic analyses as ‘heuristic’, he is not suggesting any rule, but he is saying that heuristic axioms ‘seem intuitively to organize and facilitate our thinking about the subject’ (p. 82), and proceeds to give examples of both heuristic and nonheuristic axioms. It may of course be said that many fundamental equations in science, such as Newton’s force = mass*acceleration, have some heuristic value in the sense indicated by Suppes, but the research we will review is not about the property of being heuristic.

Given that heuristics can be assessed against the benchmark of SEU, one may distinguish broadly between heuristics suggested pre-SEU, i.e., before the middle of the 20th century, and the later research on heuristics that had to face the challenge of an existing theory of allegedly rational decision-making. We will review the former in the section “Deliberate heuristics—the art of invention” below, and devote sections “Herbert Simon: rationality is bounded”, “Heuristics in computer science” and “Daniel Kahneman and Amos Tversky: heuristics and biases” to the latter.

To cover the paradigmatic cases of what has been termed ‘heuristics’ in the literature, we have to take ‘problem-solving’ in a broad sense that includes decision-making and judgement, but also automatic, instinctive behaviour. We, therefore, feel that an account of research on heuristics should also review the main views on how observable behaviour patterns in humans—or maybe animals in general—can be explained. This we do in the section “Automatic heuristics: learnt or innate?”.

While our brief history cannot aim for completeness, we selected the scholars to be included based on their influence and contributions to different fields of research related to heuristics. Our focus, however, will be on the more recent research that may be said to begin with Herbert Simon.

That problem-solving according to SEU will, in general, be impractical, was clearly recognized by Herbert Simon, whose notion of bounded rationality we look at in the section “Herbert Simon: rationality is bounded”. In the section “Heuristics in computer science”, we also consider heuristics in computer science, where the motivation to use heuristics is closely related to Simon’s reasoning. In the section “Daniel Kahneman and Amos Tversky: heuristics and biases”, we turn to the heuristics identified and analysed by Kahneman and Tversky; while their assessment was primarily that the use of those heuristics often does not conform to rational decision-making, the approach by Gigerenzer and his collaborators, reviewed in the section “Gerd Gigerenzer: fast-and-frugal heuristics” below, takes a much more affirmative view on the use of heuristics. Section “Critiques” explains the limitations and critiques of the corresponding ideas. The final section “Conclusion” contains the conclusion, discussion, and avenues for future research.

The evolutionary perspective

While we focus on the history of research on heuristics, it is clear that animal behaviour patterns evolved and were shaped by evolutionary forces long before the human species emerged. Thus ‘heuristics’ in the mere sense of behaviour patterns have been used long before humans engaged in any kind of conscious reflection on decision-making, let alone systematic research. However, evolution endowed humans with brains that allow them to make decisions in ways that are quite different from animal behaviour patterns. According to Gibbons ( 2007 ), the peculiar evolution of the human brain started thousands of years ago when the ancient human discovered fire and started cooking food, which reduced the amount of energy the body needed for digestion. This paved the way for a smaller intestinal tract and implied that the excess calories led to the development of larger tissues and eventually a larger brain. Through this organ, intelligence increased exponentially, resulting in advanced communication that allowed Homo sapiens to collaborate and form relationships that other primates at the time could not match. According to Dunbar ( 1998 ), it was in the time between 400,000 and 100,000 years ago that abilities to hunt more effectively took humans from the middle of the food chain right to the top.

It does not seem to be known when and how exactly the human brain developed the ability to reflect on decisions made consciously, but it is now widely recognized that in addition to the fast, automatic, and typically nonconscious type of decision-making that is similar to animal behaviour, humans also employ another, rather a different type of decision-making that can be characterized as slow, conscious, controlled, and reflective. The former type is known as ‘System 1’ or ‘the old mind’, and the latter as ‘System 2’ or ‘the new mind’ (Evans, 2010 ; Kahneman, 2011 ), and both systems have clearly evolved side by side throughout the evolution of the human brain. According to Gigerenzer ( 2021 ), humans as well as other organisms evolved to acquire what he calls ‘embodied heuristics’ that can be both innate or learnt rules of thumb, which in turn supply the agility to respond to the lack of information by fast judgement. The ‘embodied heuristics’ use the mental capacity that includes the motor and sensory abilities that start to develop from the moment of birth.

While a detailed discussion of the ‘dual-process theories’ of the mind is beyond the scope of this paper, we find it helpful to point out that one may distinguish between ‘System 1 heuristics’ and ‘System 2 heuristics’ (Kahneman 2011 , p. 98). While some ‘rules of decision-making’ may be hard-wired into the human species by its genes and physiology, others are complicated enough that their application typically requires reflection and conscious mental effort. Upon reflection, however, the two systems are not as separate as they may seem. For example, participants in the Mental Calculation World Cup perform mathematical tasks instantly, whereas ordinary people would need a pen and paper or a calculator. Today, many people cannot multiply large numbers or calculate a square root using only a pen and paper but can easily do this using the calculator app on their smartphone. Thus, what can be done by spontaneous effortless calculation by some, may for others require the application of a more or less complicated theory.

Nevertheless, one can loosely characterize the heuristics that have been explained and recommended for more or less well-specified purposes over the course of history as System 2 or deliberate heuristics.

Deliberate heuristics—the art of invention

Throughout history, scholars have investigated methods to solve complex tasks. In this section, we review those attempts to formulate ‘operant and voluntary’ heuristics to solve demanding problems—in particular, to generate new insights or do research in more or less specified fields. Most of the heuristics in this section have been suggested before the emergence of the SEU theory and the associated modern definition of rationality, and none of them deals with the kind of decision problems that are assumed as ‘given’ in the SEU model. The reader will notice that some historical heuristics were suggested for problems that, today, may seem too general to be solved. However, through the development of such attempts, later scholars were inspired to develop a more concrete understanding of the notion of heuristics.

The Greek origin

The term heuristic originates from the Greek verb heurísko , which means to discover or find out. The Greek word heúrēka , allegedly exclaimed by Archimedes when discovering how to measure the volume of a random object through water, derives from the same verb and can be translated as I found it! (Pinheiro and McNeill, 2014 ). Heuristics can thus be said to be etymologically related to the discipline of discovery, the branch of knowledge based on investigative procedures, and are naturally associated with trial techniques, including what-if scenarios and simple trial and error.

While the term heurísko does not seem to be used in this context by Aristotle, his notion of induction ( epagôgê ) can be seen as a method to find, but not prove, true general statements and thus as a heuristic. At any rate, Aristotle considered inductive reasoning as leading to insights and as distinct from logically valid syllogisms (Smith, 2020 ).

Pappus (4th century)

While a brief, somewhat cryptic, mention of analysis and synthesis appears in Book 13 of some, but not all, editions of Euclid’s Elements, a clearer explanation of the two methods was given in the 4th century by the Greek mathematician and astronomer Pappus of Alexandria (cf. Heath, 1926 ; Polya, 1945 ; Groner et al., 1983 ). While synthesis is what today would be called deduction from known truths, analysis is a method that can be used to try and find proof. Two slightly different explanations are given by Pappus. They boil down to this: in order to find proof for a statement A, one can deduce another statement B from A, continue by deducing yet another statement C from B, and so on, until one comes upon a statement T that is known to be true. If all the inferences are convertible, the converse deductions evidently constitute a proof of A from T. While Pappus did not mention the condition that the inferences must be convertible, his second explanation of analysis makes it clear that one must be looking for deductions from A which are both necessary and sufficient for A. In Polya’s paraphrase of Pappus’ text: ‘We enquire from what antecedent the desired result could be derived; then we enquire again what could be the antecedent of that antecedent, and so on, until passing from antecedent to antecedent, we come eventually upon something already known or admittedly true.’ Analysis thus described is hardly a ‘shortcut’ or ‘rule of thumb’, but quite clearly it is a heuristic: it may help to find a proof of A, but it may also fail to do so…

Al-Khawarizmi (9th century)

In the 9th century, the Persian thinker Mohamad Al-Khawarizmi, who resided in Baghdad’s centre of knowledge or the House of Wisdom , used stepwise methods for problem-solving. Thus, after his name and findings, the algorithm concept was derived (Boyer, 1991 ). Although a heuristic orientation has sometimes been contrasted with an algorithmic one (Groner and Groner, 1991 ), it is worth noting that an algorithm may well serve as a heuristic—certainly in the sense of a shortcut, and also in the sense of a fallible method. After all, an algorithm may fail to produce a satisfactory result. We will return to this issue in the section “Heuristics in computer science” below.

Zairja (10th century)

Heuristic methods were created by medieval polymaths in their attempts to find solutions for the complex problems they faced—science not yet being divorced from what today would appear as theology or astrology. Perhaps the first tangible example of a heuristic based on a mechanical device was using an ancient tool called a zairja , which Arab astrologers employed before the 11th century (Ritchey, 2022 ). It was designed to reconfigure notions into ideas through randomization and resonance and thus to produce answers to questions mechanically (Link, 2010 ). The word zairja may have originated from the Persian combination zaicha-daira , which means horoscope-circle. According to Ibn Khaldoun, ‘zairja is the technique of finding out answers from questions by means of connections existing between the letters of the expressions used in the question; they imagine that these connections can form the basis for knowing the future happenings they want to know’ (Khaldun, 1967 ).

Ramon Llull (1305)

The Majorcan philosopher Ramon Llull (or Raimundus Lullus), who was exposed to the Arabic culture, used the zairja as a starting point for his ars inveniendi veritatem that was meant to complement the ars demonstrandi of medieval Scholastic logic and on which he worked from around 1270–1305 (Link, 2010 ; Llull, 1308 ; Ritchey, 2022 ) when he finished his Ars Generalis Ultima (or Ars Magna ). Llull transformed the astrological and combinatorial components of the zairja into a religious system that took the fundamental ideas of the three Abrahamic faiths of Islam, Christianity, and Judaism and analysed them through symbolic and numeric reasoning. Llull tried to broaden his theory across all fields of knowledge and combine all sciences into a single science that would address all human problems. His thoughts impacted great thinkers, such as Leibniz, and even the modern theory of computation (Fidora and Sierra, 2011 ). Llull’s approach may be considered a clear example of heuristic methods applied to complicated and even theological questions (Hertwig and Pachur, 2015 ).

Joachim Jungius (1622)

Arguably, the German mathematician and philosopher Joachim Jungius was the first to use the terminology heuretica in a call to establish a research society in 1622. Jungius distinguished between three degrees or levels of learning and cognition: empirical, epistemic, and heuristic. Those who have reached the empirical level believe that what they have learned is true because it corresponds to experience. Those who have reached the epistemic level know how to derive their knowledge from principles with rigorous evidence. But those who have reached the highest level, the heuristic level, have a method of solving unsolved problems, finding new theorems, and introducing new methods into science (Ritter et al., 2017 ).

René Descartes (1637)

In 1637, the French philosopher René Descartes published his Discourse on Method (one of the first major works not written in Latin). Descartes argued that humans could utilize mathematical reasoning as a vehicle for progress in knowledge. He proposed four simple steps to follow in problem-solving. First, to accept as true only what is indubitable. Next, divide the problem into as many smaller subproblems as possible and helpful. After that, to conduct one’s thoughts in an orderly fashion, beginning with the simplest and gradually ascending to the most complex. And finally, to make enumerations so complete that one is assured of having omitted nothing (Descartes, 1998 ). In reference to his other methods, Descartes ( 1908 ) started working on the proper heuristic rules to transform every problem, when possible, into algebraic equations, thus creating a mathesis universalis or universal science. In his unfinished ‘Rules for the Direction of the Mind’ or Regulae ad directionem ingenii , Descartes suggested 21 heuristic rules (of planned 36) for scientific research like simplifying the problem, rewriting the problem in geometrical shape, and identifying the knowns and the unknowns. Although Leibniz criticized the rules of Descartes for being too general (Leibniz, 1880 ), this treatise outlined the basis for later work on complex problems in several disciplines.

Gottfried Wilhelm Leibniz (1666)

Influenced by the ideas of Llull, Jungius, and Descartes, the Prussian–German polymath Gottfried Wilhelm Leibniz suggested an original approach to problem-solving in his Dissertatio de Arte Combinatoria , published in Leipzig in 1666. His aim was to create a new universal language into which all problems could be translated and a standard solving procedure that could be applied regardless of the type of the problem. Leibniz also defined an ars inveniendi as a method for finding new truths, distinguishing it from an ars iudicandi , a method to evaluate the validity of alleged truths. Later, in 1673, he invented the calculating machine that could execute all four arithmetic operations and thus find ‘new’ arithmetic truths (Pombo, 2002 ).

Bernard Bolzano ( 1837 )

In 1837, the Czech mathematician and philosopher Bernard Bolzano published his four-volume Wissenschaftslehre (Theory of Science). The fourth part of his theory he called ‘Erfindungskunst’ or the art of invention, mentions in the introductory section 322 that ‘heuristic’ is just the Greek translation. Bolzano explains that the rules he is going to state are not at all entirely new, but instead have always been used ‘by the talented’—although mostly not consciously. He then explains 13 general and 33 special rules one should follow when trying to find new truths. Among the general rules are, e.g., that one should first decide on the question one wants to answer, and the kind of answer one is looking for (section 325), or that one should choose suitable symbols to represent one’s ideas (section 334). Unlike the general rules, the special ones are meant to be helpful for special mental tasks only. E.g., in order to solve the task of finding the reason for any given truth, Bolzano advises first to analyse or dissect the truth into its parts and then use those to form truths which are simpler than the given one (section 378). Another example is Bolzano’s special rule 28, explained in section 386, which is meant to help identify the intention behind a given action. To do so, Bolzano advises exploring the agent’s beliefs about the effects of his action at the time he decided to act, and explains that this will require investigating the agent’s knowledge, his degree of attention and deliberation, any erroneous beliefs the agent may have had, and ‘many other circumstances’. Bolzano continues to point out that any effect the agent may have expected to result from his action will not be an intended one if he considered it neither as an obligation nor as advantageous. While Bolzano’s rules can hardly be considered as ‘shortcuts’, he mentions again and again that they may fail to solve the task at hand adequately (cf. Hertwig and Pachur, 2015 ; Siitonen, 2014 ).

Frank Ramsey ( 1926 )

In Ramsey’s pathbreaking paper on ‘Truth and Probability’ which laid the foundation of subjective probability theory, a final section that has received little attention in the literature is devoted to inductive logic. While he does not use the word ‘heuristic’, he characterizes induction as a ‘habit of the mind,’ explaining that he uses ‘habit in the most general possible sense to mean simply rule or the law of behaviour, including instinct,’ but also including ‘acquired rules.’ Ramsey gives the following pragmatic justification for being convinced by induction: ‘our conviction is reasonable because the world is so constituted that inductive arguments lead on the whole to true opinions,’ and states more generally that ‘we judge mental habits by whether they work, i.e., whether the opinions they lead to are for the most part true, or more often true than those which alternative habits would lead to’ (Ramsey, 1926 ). In modern terminology, Ramsey was pointing out that mental habits—such as inductive inference—may be more or less ‘ecologically rational’.

Karl Duncker ( 1935 )

Karl Duncker was a pioneer in the experimental investigation of human problem-solving. In his 1935 book Zur Psychologie des produktiven Denkens , he discussed both heuristics that help to solve problems, but also hindrances that may block the solution of a problem—and reported on a number of experimental findings. Among the heuristics was a situational analysis with the aim of uncovering the reasons for the gap between the status quo and the problem-solvers goal, analysis of the goal itself, and of sacrifices the problem-solver is willing to make, of prerequisites for the solution, and several others. Among the hindrances to problem-solving was what Duncker called functional fixedness, illustrated by the famous candle problem, in which he asked the participants to fix a candle to the wall and light it without allowing the wax to drip. The available tools were a candle, matches, and a box filled with thumbtacks. The solution was to empty the box of thumbtacks, fix the empty box to the wall using the thumbtacks, put the candle in the box, and finally light the candle. Participants who were given the empty box as a separate item could solve this problem, while those given the box filled with thumbtacks struggled to find a solution. Through this experiment, Duncker illustrated an inability to think outside the box and the difficulty in using a device in a way that is different from the usual one (Glaveanu, 2019 ). Duncker emphasized that success in problem-solving depends on a complementary combination of both the internal mind and the external problem structure (cf. Groner et al., 1983 ).

George Polya (1945)

The Hungarian mathematician George Polya can aptly be called the father of problem-solving in modern mathematics and education. In his 1945 book How to Solve It, Polya writes that ‘heuristic…or “ars inveniendi” was the name of a certain branch of study…often outlined, seldom presented in detail, and as good as forgotten today’, and he attempts to ‘revive heuristic in a modern and modest form’. According to his four principles of mathematical problem-solving, it is first necessary to understand the problem, then to devise a plan, to carry out the plan, and finally to look back and search for opportunities for improvement. Among Polya’s more detailed suggestions for problem-solving are to ask questions such as ‘can you find the solution to a similar problem?’, to use inductive reasoning and analogy, and to choose a suitable notation. Procedures inspired by Polya’s (1945) book and several later ones (e.g., Induction and Analogy in Mathematics of 1954) also informed the field of artificial intelligence (AI) (Hertwig and Pachur, 2015).

Johannes Müller (1968)

In 1968, the German scientist Johannes Müller introduced the concept of systematic heuristics while working on his postdoctoral thesis at the Chemnitz University of Technology. Systematic heuristics is a framework for improving the efficiency of intellectual work using problem-solving processes in the fields of science and technology.

The main idea of systematic heuristics is to solve repeated problems with previously validated solutions. These methods are called programmes and are gathered in a library that can be accessed by the main programme, which receives the requirements, prepares the execution plan, determines the required procedures, executes the plan, and finally evaluates the results. Müller’s team was dismissed for ideological reasons, and his programme was terminated after a few years, but his findings went on to be successfully applied in many projects across different industries (Banse and Friedrich, 2000 ).

Imre Lakatos (1970)

In his ‘Methodology of Scientific Research Programmes’, which turned out to be a major contribution to the Popper–Kuhn controversy about the rationality of non-falsifiable paradigms in the natural sciences, Lakatos introduced the interesting distinction between a ‘negative heuristic’, given by the ‘hard core’ of a research programme, and the ‘positive heuristic’ of its ‘protective belt’. While the latter suggests ways to develop the research programme further and to predict new facts, the ‘hard core’ of the research programme is treated as irrefutable ‘by the methodological decision of its protagonists: anomalies must lead to changes only in the “protective” belt’ of auxiliary hypotheses. The Lakatosian notion of a negative heuristic seems to have received little attention outside of the philosophy of science community but may be important elsewhere: when there are too many ways to solve a complicated problem, excluding some of them from consideration may be helpful.

Gerhard Kleining (1982)

The German sociologist Gerhard Kleining suggested a qualitative heuristic as the appropriate research method for qualitative social science. It is based on four principles: (1) open-mindedness of the scientist, who should be ready to revise his preconceptions about the topic of study; (2) openness of the topic of study, which is initially defined only provisionally and is allowed to be modified in the course of the research; (3) maximal variation of the research perspective; and (4) identification of similarities within the data (Kleining, 1982, 1995).

Automatic heuristics: learnt or innate?

Unlike the deliberate, and in some cases quite elaborate, heuristics reviewed above, at least some System 1 heuristics are applied automatically, without any kind of deliberation or conscious reflection on the task that needs to be performed or the question that needs to be answered. One may view them as mere patterns of behaviour; as such, their scientific examination has been a long, cumulative process spanning different disciplines, even though explicit reference to heuristics was rarely made.

Traditionally, the study of the behaviour patterns of living creatures, and indeed any study concerning thoughts, feelings, or cognitive abilities, was regarded as the task of biologists. However, the birth of psychology as a separate discipline paved the way for an alternative outlook. Evolutionary psychology views human behaviour as having been shaped through time and experience to promote survival throughout the long history of the human struggle with nature. Among the many factors to consider, scholars have been interested in the evolution of the human brain, patterns of behaviour, and problem-solving (Buss and Kenrick, 1998).

Charles Darwin (1873)

Charles Darwin himself may well qualify for the title of first evolutionary psychologist, as his insights laid the foundations for a field that would continue to grow over a century later (Ghiselin, 1973).

In 1873, Darwin claimed that the brain’s capacities for expression and emotion have probably developed in much the same way as its physical traits (Baumeister and Vohs, 2007). He acknowledged that personal demonstrations or expressions have a high capacity for interaction with peers of the same species. For example, an aggressive look signals an eagerness to fight yet leaves the recipient the option of retreating without either party being harmed. Additionally, Darwin, like his predecessor Lamarck, repeatedly emphasized the role of environmental factors in ‘the struggle for existence’, which could shape an organism’s traits in response to changes in its environment (Sen, 2020). The famous example of giraffes growing long necks in response to trees growing taller illustrates such a major environmental effect. Similarly, cognitive skills, including heuristics, must also have been shaped by the environment in order for humans to keep surviving and reproducing.

Darwin’s ideas impacted the early advancement of brain science, psychology, and all related disciplines, including the topic of cognitive heuristics (Smulders, 2009 ).

William James (1890)

Some years later, in 1890, the father of American psychology, William James, introduced the notion of evolutionary psychology in his 1200-page text The Principles of Psychology, which later became a standard reference on the subject and helped establish psychology as a science. At its core, James reasoned that many human actions demonstrate the operation of instincts, which are evolutionarily embedded inclinations to react to specific stimuli in adaptive ways. With this idea, James added an important building block to the foundation of heuristics as a scientific topic.

A simple example of such hard-wired behaviour patterns would be a sneeze, the preprogrammed reaction of convulsive nasal expulsion of air from the lungs through the nose and mouth to remove irritants (Baumeister and Vohs, 2007 ).

Ivan Pavlov (1897)

Triggered by scientific curiosity or the instinct for research, as he called it, the first Russian Nobel laureate, Ivan Pavlov, introduced classical conditioning, which occurs when a stimulus is used that has a predictive relationship with a reinforcer, resulting in a change in response to the stimulus (Schreurs, 1989 ). This learning process was demonstrated through experiments conducted with dogs. In the experiments, a bell (a neutral stimulus) was paired with food (a potent stimulus), resulting ultimately in the dogs salivating at the ringing of the bell—a conditioned response. Pavlov’s experiments remain paradigmatic cases of the emergence of behaviour patterns through association learning.

William McDougall (1909)

At the start of the 20th century, the Anglo-American psychologist William McDougall was one of the first to write about the instinct theory of motivation. McDougall argued that instincts trigger many important social behaviours. He viewed instincts as highly sophisticated faculties through which specific provocations, such as social impediments, can drive a person’s state of mind in a particular direction, for example, towards hatred, envy, or anger, which in turn may increase the probability of specific behaviours such as hostility or violence (McDougall, 2015).

In the early 1920s, however, McDougall’s view that human behaviour is driven by instincts lost ground markedly as scientists advocating behaviourism began to attract more attention with their own ideas (Buss and Kenrick, 1998).

John B. Watson (1913)

The pioneer of the psychological school of behaviourism, John B. Watson, who conducted the controversial ‘Little Albert’ experiment by instilling a phobia in a child to demonstrate classical conditioning in humans (Harris, 1979), argued against the ideas of McDougall, even in public debates (Stephenson, 2003). Unlike McDougall, Watson considered the mind an empty page (a tabula rasa, as described by Aristotle). According to him, all personality traits and behaviours result directly from the experience accumulated from birth onwards. The story of the human mind is thus a continuous writing process shaped by surrounding events and factors. This view was supported in the following decades by anthropologists who documented very different social norms in different societies, and numerous social researchers argued that this wide cross-cultural variation should lead to the conclusion that there is no mental content built in from birth and that all knowledge therefore comes from individual experience or perception (Farr, 1996). In stark contrast to McDougall, Watson suggested that human intuitions and behaviour patterns are the product of a learning process that starts from a blank slate.

B. F. Skinner (1938)

Inspired by the work of Pavlov, the American psychologist B.F. Skinner took the classical conditioning approach to a more advanced level by modifying a key aspect of the process. According to Skinner, human behaviour depends on the outcomes of past actions: if the outcome was bad, the action will probably not be repeated; if the outcome was good, the likelihood of the action being repeated is relatively high. Skinner called this process reinforcement learning (Schacter et al., 2011). Building on it, Skinner also introduced the concept of operant conditioning, a type of associative learning in which the strength of a behaviour is adjusted by reinforcement or punishment. Considering, for example, a parent’s response to a child’s behaviour, the probability of the child repeating an action will depend strongly on the parent’s reaction (Zilio, 2013). In effect, Skinner argues that the intuitive System 1 may be edited, and that a heuristic cue may become more or less ‘hard-wired’ in the subject’s brain as a stimulus leading to an automatic response.

The DNA and its environment (1953 onwards)

Today, there seems to be wide agreement that behaviour patterns in humans and other species are to some extent ‘in the DNA’, the structure of which was discovered by Francis Crick and James Watson in 1953, but that they also depend to some extent on ‘the environment’, including the social environment in which the agent lives and has problems to solve. It therefore seems safe to say that the methods of problem-solving that humans apply are neither completely innate nor completely the result of environmental stimuli, but rather the product of a complex interaction between genes and the environment (Lerner, 1978).

Herbert Simon: rationality is bounded

Herbert Simon is well known for his contributions to several fields, including economics, psychology, computer science, and management. Simon proposed a remarkable theory, bounded rationality, which led to his being awarded the Nobel Prize for Economics in 1978.

Bounded rationality and satisficing

In the mid-1950s, Simon published A Behavioural Model of Rational Choice, which focused on bounded rationality: the idea that people must make decisions with limited time, mental resources, and information (Simon, 1955 ). He clearly states the triangle of limitations in every decision-making process—the availability of information, time, and cognitive ability (Bazerman and Moore, 1994 ). The ideas of Simon are considered an inspiring foundation for many technologies in use today.

Instead of conforming to the idea that economic behaviour can be seen as rational and dependent on all accessible data (i.e., as optimization), Simon suggested that the dynamics of decision-making are essentially ‘satisficing’, a notion blended from ‘satisfy’ and ‘suffice’ (Byron, 1998). During the 1940s, scholars had already noticed the frequent failure of two assumptions required for ‘rational’ decision-making. The first is that data is never complete and may be far from perfect, yet people routinely make decisions based on such incomplete data. The second is that people do not assess every feasible option before settling on a decision; this behaviour is closely tied to the cost of data collection, since data becomes progressively harder and costlier to accumulate. Rather than trying to find the ideal option, people choose the first acceptable or satisfactory option they find. Simon described this procedure as satisficing and concluded that in decision-making the human brain will, at best, exhibit restricted abilities (Barros, 2010).

Since people can neither obtain nor process all the data needed to make a completely rational decision, they use the limited data they possess to determine an outcome that is ‘good enough’—a procedure later refined into the take-the-best heuristic. Simon’s view that people are bounded by their cognitive limits is usually known as the theory of bounded rationality (cf. Gigerenzer and Selten, 2001 ).

Herbert Simon and AI

With the cooperation of Allen Newell of the RAND Corporation, Simon attempted to create a computer simulation of human decision-making. In 1956, they created a ‘thinking’ machine called the ‘Logic Theorist’. This early smart device was a computer programme with the ability to prove theorems in symbolic logic, and it was perhaps the first man-made programme to simulate some human reasoning abilities in order to solve actual problems (Gugerty, 2006). A few years later, Simon, Newell, and J.C. Shaw proposed the General Problem Solver (GPS), one of the earliest AI programmes. They aimed to create a single programme that could solve all problems with the same unified algorithm. However, while the GPS was efficient for sufficiently well-structured problems like the Towers of Hanoi (a puzzle with three rods and different-sized disks to be moved), it could not handle real-life scenarios with all their complexities (A. Newell et al., 1959).

By 1965, Simon was confident that ‘machines will be capable of doing any work a man can do’ (Vardi, 2012 ). Therefore, Simon dedicated most of the remainder of his career to the advancement of machine intelligence. The results of his experiments showed that, like humans, certain computer programmes make decisions using trial-and-error and shortcut methods (Frantz, 2003 ). Quite explicitly, Simon and Newell ( 1958 , p. 7) referred to heuristics being used by both humans and intelligent machines: ‘Digital computers can perform certain heuristic problem-solving tasks for which no algorithms are available… In doing so, they use processes that are closely parallel to human problem-solving processes’.

Additionally, the importance of the environment was also clearly observed in Newell and Simon’s ( 1972 ) work:

‘Just as scissors cannot cut paper without two blades, a theory of thinking and problem-solving cannot predict behaviour unless it encompasses both an analysis of the structure of task environments and an analysis of the limits of rational adaptation to task requirements’ (p. 55).

Accordingly, the term ‘task environment’ describes the formal structure of the universe of choices and results for a specific problem. At the same time, Newell and Simon do not treat the agent and the environment as two isolated entities, but rather as highly related. Consequently, they tend to believe that agents with different cognitive abilities and choice repertoires will inhabit different task environments even though their physical surroundings and intentions might be the same (Agre and Horswill, 1997 ).

Heuristics in computer science

Computer science as a discipline may have the biggest share of deliberately applied heuristics. As heuristic problem-solving has often been contrasted with algorithmic problem-solving (even by Simon and Newell, 1958), it is worth recalling that the very notion of ‘algorithm’ was clarified only in the first half of the 20th century, when Alan Turing (1937) defined what was later named the ‘Turing machine’. Essentially, he defined ‘mechanical’ computation as computation that can be carried out by a (stylized) machine. Since ‘mechanical’ is what is known today as algorithmic, one can say that any procedure that can be performed by a digital computer is algorithmic. Nevertheless, many such algorithms are also heuristics, because an algorithm may fail to produce an optimal solution to the problem it is meant to solve. This may be so either because the problem is ill-defined or because the computations required to produce the optimal solution are not feasible with the available resources. If the problem is ill-defined (as it often is, e.g., in natural language processing), the algorithm that does the processing has to rely on a well-defined model that does not capture the vagueness and ambiguities of the real-life problem, a problem typically stated in natural language. If the problem is well-defined but finding the optimal solution is not feasible, algorithms that would find it may exist ‘in principle’ but require too much time or memory to be practically implemented.

In fact, there is today a rich theory of complexity classes that distinguishes between types of (well-defined) problems according to how fast the time or memory space required to find the optimal solution increases with increasing problem size. E.g., for problem types of the complexity class P, there is a deterministic algorithm that produces the optimal solution with a running time bounded by a polynomial function of the input size, whereas for problems of the complexity class EXPTIME, the running time may only be bounded by an exponential function of the input size. In the jargon of computer science, problems of the latter kind are considered intractable, although the input size has to become sufficiently large before the computation of the optimal solution becomes practically infeasible (cf. Harel, 2000; Hopcroft et al., 2007). Research indicates that the computational complexity of problems can also reduce the quality of human decision-making (Bossaerts and Murawski, 2017).

Shortest path algorithms

A classic optimization problem that may serve to illustrate the issues of optimal solutions, complexity, and heuristics goes by the name of the travelling salesman problem (TSP), first formulated in 1930. In this problem, several cities with given pairwise distances are considered, and the goal is to find the shortest possible path that visits all cities and returns to the starting point. For a small input size, i.e., for a small number of cities, the ‘brute-force’ algorithm is easy to use: write down all the possible paths through all the cities, calculate their lengths, and choose the shortest. However, the number of steps required by this procedure increases rapidly with the number of cities. The TSP is today known to belong to the complexity class NP, which lies between P and EXPTIME (Footnote 1). To solve the TSP, Jon Bentley (1982) described the greedy (or nearest-neighbour) algorithm, which yields an acceptable result, though not necessarily the optimal one, within a relatively short time. This approach always picks the nearest unvisited city as the next one to visit, without regard to possibly suboptimal later steps; hence it is considered a good-enough solution with fast results. Bentley noted that better solutions may exist, but that the greedy result approximates the optimal one. Many other heuristic algorithms have since been explored. There is no assurance that the solution found by a heuristic algorithm will be the ideal answer to the given problem, but it is acceptable and adequate (Pearl, 1984).
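To make the greedy idea concrete, the following minimal Python sketch (our own illustration, not Bentley’s code; the function name and the coordinates are invented for the example) builds a tour by repeatedly visiting the closest not-yet-visited city and finally returning to the start:

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: always visit the closest unvisited city."""
    unvisited = set(cities)
    start = cities[0]
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        current = tour[-1]
        # Greedy step: pick the nearest unvisited city, ignoring the
        # effect this choice may have on later steps.
        nearest = min(unvisited, key=lambda city: math.dist(current, city))
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)  # return to the starting point
    return tour

# Four illustrative cities given as (x, y) coordinates.
print(nearest_neighbour_tour([(0, 0), (0, 3), (4, 0), (4, 3)]))
```

The tour is produced in time roughly proportional to the square of the number of cities, but it may be noticeably longer than the optimal tour.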

Heuristic shortest-path algorithms are used nowadays by GPS navigation systems and self-driving vehicles to choose the best route from any point of departure to any destination (for example, the A* search algorithm). More sophisticated algorithms can also take into account additional elements, including traffic, speed limits, and road quality, and may yield the shortest routes in terms of distance as well as the fastest ones in terms of driving time.

Computer chess

While the TSP consists of a whole family of problems that differ in the number of cities and the distances between them, determining the optimal strategy for chess is a single problem of a given size. The rules of chess make it a finite game, and Ernst Zermelo proved in 1913 that it is ‘determined’: if it were played between perfectly rational players, it would always end with the same outcome: either White always wins, or Black always wins, or it always ends in a draw (Zermelo, 1913). To the present day, it is not known which of the three is true, which points to the fact that a brute-force algorithm going through all possible plays of chess is practically infeasible: it would have to explore far too many potential moves and would quickly exhaust the available memory (Schaeffer et al., 2007). Inevitably, a chess-playing machine has to use algorithms that are ‘shortcuts’, which can be more or less intelligent.

While Simon and Newell had predicted in 1958 that within ten years the world chess champion would be a computer, it took until 1997 for a chess-playing machine, developed by IBM under the name Deep Blue, to defeat grandmaster Garry Kasparov. Although able to analyse millions of possibilities thanks to their computing power, today’s chess-playing machines apply a heuristic approach to eliminate unlikely moves and focus on those with a high probability of defeating their opponent (Newborn, 1997).
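The following sketch indicates, in a generic and simplified form, how such a programme can combine a depth-limited search with a heuristic evaluation function and pruning; it is not Deep Blue’s actual code, and the helper functions legal_moves, apply_move, and evaluate are assumed to be supplied by a concrete game implementation:

```python
def negamax(position, depth, alpha, beta, legal_moves, apply_move, evaluate):
    """Heuristic score of `position` for the side to move, with alpha-beta pruning."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        # A heuristic evaluation (e.g., counting material and piece activity)
        # stands in for searching the game all the way to checkmate or draw.
        return evaluate(position)
    best = float("-inf")
    for move in moves:
        child = apply_move(position, move)
        # The opponent's best score is the negative of ours (zero-sum game).
        score = -negamax(child, depth - 1, -beta, -alpha,
                         legal_moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: this line cannot affect the move finally chosen
    return best
```

Because the search is cut off at a fixed depth and the evaluation function is only an estimate, the move chosen is a good one rather than a provably optimal one, which is exactly what makes the procedure a heuristic.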

Machine learning

One of the main features of machine learning is the ability of the model to predict a future outcome based on past data points. Machine learning algorithms build a knowledge base similar to human experience from previous experiences in the dataset provided. From this knowledge base, the model can derive educated guesses.

A good demonstration of this is the card game Top Trumps in which the model can learn to play and keep improving to dominate the game. It does so by undertaking a learning path through a sequence of steps in which it picks two random cards from the deck and then analyses and compares them with random criteria. According to the winning result, the model iteratively updates its knowledge base in the same manner as a human, following the rule that ‘practice makes perfect.’ Hence the model will play, collect statistics, update, and iterate while becoming more accurate with each increment (Volz et al., 2016 ).

Natural language processing

In the world of language understanding, current technologies are far from perfect. However, models are becoming more reliable by the minute. When analysing and dissecting a search phrase entered into the Google search engine, a background model tries to make sense of the search criteria. Stemming words, context analysis, the affiliation of phrases, previous searches, and autocorrect/autocomplete can be applied in a heuristic algorithm to display the most relevant result in less than a second. Heuristic methods can be utilized when creating certain algorithms to understand what the user is trying to express when searching for a phrase. For example, using word affiliation, an algorithm tries to narrow down the meaning of words as much as possible toward the user’s intention, particularly when a word has more than one meaning but changes with the context. Therefore, a search for apple pie allows the algorithm to deduce that the user is highly interested in recipes and not in the technology company (Sullivan, 2002 ).

Search and big data

Search is a good example to appreciate the value of time, as one of the most important criteria is retrieving acceptable results in an acceptable timeframe. In a full search algorithm, especially in large datasets, retrieving optimal results can take a massive amount of time, making it necessary to apply heuristic search.

Heuristic search is a type of search algorithm that is used to find solutions to problems in a faster way than an exhaustive search. It uses specific criteria to guide the search process and focuses on more favourable areas of the search space. This can greatly reduce the number of nodes required to find a solution, especially for large or complex search trees.

Heuristic search algorithms work by evaluating the possible paths or states in a search tree and selecting the more promising ones to explore further. They use a heuristic function, a measure of how close a given state is to the goal state, to guide the search. This allows the algorithm to prioritize certain paths or states over others and to avoid exploring areas of the search space that are unlikely to lead to a solution. The solution reached is not necessarily the best one; however, a ‘good enough’ solution is found within a ‘fast enough’ time. This technique is an example of a trade-off between optimality and speed (Russell et al., 2010).
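A minimal greedy best-first search sketch may illustrate the idea; it assumes that the caller supplies an is_goal test, a neighbours function, and a heuristic estimate of the distance to the goal (all names here are illustrative):

```python
import heapq
from itertools import count

def greedy_best_first(start, is_goal, neighbours, heuristic):
    """Expand states in order of the heuristic estimate of their distance to the goal."""
    tie = count()  # tie-breaker so the heap never has to compare states directly
    frontier = [(heuristic(start), next(tie), start)]
    came_from = {start: None}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            # Reconstruct the path found; it is 'good enough', not necessarily optimal.
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return list(reversed(path))
        for nxt in neighbours(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt), next(tie), nxt))
    return None  # the search space was exhausted without reaching the goal
```

The A* algorithm mentioned above works in the same way but orders the frontier by the sum of the cost already incurred and the heuristic estimate, which guarantees an optimal route whenever the heuristic never overestimates the remaining cost.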

Today, there is a rich literature on heuristic methods in computer science (Martí et al., 2018 ). As the problem to be solved may be the choice of a suitable heuristic algorithm, there are also meta-heuristics that have been explored (Glover and Kochenberger, 2003 ), and even hyper-heuristics which may serve to find or generate a suitable meta-heuristic (Burke et al., 2003 ). As Sörensen et al. ( 2018 ) point out, the term ‘metaheuristic’ may refer either to an ‘algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms’—or to a specific algorithm that is based on such a framework. E.g., a metaheuristic to find a suitable search algorithm may be inspired by the framework of biological evolution and use its ideas of mutation, reproduction and selection to produce a particular search algorithm. While this algorithm will still be a heuristic one, the fact that it has been generated by an evolutionary process indicates its superiority over alternatives that have been eliminated in the course of that process (cf. Vikhar, 2016 ).
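As a toy illustration of the evolutionary idea, assuming that candidate heuristics can be reduced to a single tunable parameter and scored by a fitness function (both invented purely for this sketch), a metaheuristic loop of selection, reproduction, and mutation might look as follows:

```python
import random

def evolve(fitness, population_size=20, generations=50, sigma=0.1):
    """Keep the fitter half of the population and mutate it to form the next generation."""
    population = [random.uniform(0, 1) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)      # selection
        parents = population[: population_size // 2]
        children = [p + random.gauss(0, sigma) for p in parents]  # reproduction + mutation
        population = parents + children
    return max(population, key=fitness)

# Illustrative fitness: the 'best' parameter value is the one closest to 0.7.
print(round(evolve(lambda x: -abs(x - 0.7)), 2))
```

The candidate that survives this process is still only a heuristic for the underlying problem, but it has outperformed the alternatives that were eliminated along the way.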

Daniel Kahneman and Amos Tversky: heuristics and biases

Inspired by the concepts of Herbert Simon, psychologists Daniel Kahneman and Amos Tversky initiated the heuristics and biases research programme in the early 1970s, which emphasized how individuals make judgements and the conditions under which those judgements may be inaccurate (Kahneman and Klein, 2009 ).

In addition, Kahneman and Tversky emphasized information processing to elaborate on how real people with limitations can decide, choose, or estimate (Kahneman, 2011 ).

The remarkable article Judgement under Uncertainty: Heuristics and Biases, published in 1974, is considered the key that opened the door wide to research on this topic, although it was and still is considered controversial (Kahneman, 2011). In their research, Kahneman and Tversky identified three types of heuristics by which probabilities are often assessed: availability, representativeness, and anchoring and adjustment. In passing, Kahneman and Tversky mention that other heuristics are used to form non-probabilistic judgements; for example, the distance of an object may be assessed according to the clarity with which it is seen. Other researchers subsequently introduced different types of heuristics. However, availability, representativeness, and anchoring are still considered the fundamental heuristics for judgements under uncertainty.

Availability

According to the psychological definition, availability or accessibility is the ease with which a specific thought comes to mind or can be inferred. Many people use this type of heuristic when judging the probability of an event that may have happened or may happen in the future. Hence, people tend to overestimate the likelihood of a rare event if it easily comes to mind because it is frequently mentioned in daily discussions (Kahneman, 2011). For instance, individuals overestimate their probability of becoming victims of a terrorist attack even though the real probability is negligible. However, since terrorist attacks are highly available in the media, the feeling of a personal threat from such an attack will also be highly available in our daily life (Kahneman, 2011).

This concept is also present in business: we remember the successful start-ups whose founders quit college for their dreams, such as Steve Jobs and Mark Zuckerberg, and ignore the thousands of ideas, start-ups, and founders that failed. This is because successful companies are considered a hot topic and receive broad media coverage, while failures do not. Similarly, broad media coverage is known to create top-of-mind awareness (TOMA) (Farris et al., 2010). Moreover, the availability heuristic has been offered as an explanation for illusory correlations, in which individuals wrongly judge two events to be related to each other when they are not. Tversky and Kahneman explained that individuals judge such relationships based on the ease of imagining the two events together (Tversky and Kahneman, 1973).

Representativeness

The representativeness heuristic is applied when individuals assess the probability that an object belongs to a particular class or category based on how much it resembles the typical case or prototype representing this category (Tversky and Kahneman, 1974 ). Conceptually, this heuristic can be decomposed into three parts. The first one is that the ideal case or prototype of the category is considered representative of the group. The second part judges the similarity between the object and the representative prototype. The third part is that a high degree of similarity indicates a high probability that the object belongs to the category, and a low degree of similarity indicates a low probability.

While the heuristic is often applied automatically within an instant and may be compelling in many cases, Tversky and Kahneman point out that the third part of the heuristic will often lead to serious errors or, at any rate, biases.

In particular, the representativeness heuristic can give rise to what is known as the base rate fallacy. As an example, Tversky and Kahneman consider an individual named Steve, who is described as shy, withdrawn, and somewhat pedantic, and report that people who have to assess, based on this description, whether Steve is more likely to be a librarian or a farmer invariably consider it more likely that he is a librarian, ignoring the fact that there are many more farmers than librarians, a fact that any estimate of the probability of Steve being a librarian or a farmer must take into account.

Another example involves a taxicab that was engaged in an accident. The data indicate that 85% of the city’s taxicabs are green and 15% are blue. An eyewitness claims that the cab involved was blue. The court evaluates the witness’s reliability and finds that he identifies the correct colour 80% of the time and errs 20% of the time. What, then, is the probability that the cab involved was blue, given that the witness identified it as blue?

To evaluate this case correctly, people should consider the base rate, 15% of the cabs being blue, and the witness accuracy rate, 80%. Of course, if the number of cabs is equally split between colours, then the only factor in deciding is the reliability of the witness, which is an 80% probability.

However, regardless of the colour distribution, most participants would answer 80% to this question. Even participants who wanted to take the base rate into account estimated a probability of more than 50%, while the right answer, obtained by Bayesian inference, is 41% (Kahneman, 2011).
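The 41% figure follows directly from Bayes’ rule; a quick check with the numbers above:

```python
p_blue, p_green = 0.15, 0.85   # base rates of the two cab colours
p_correct = 0.80               # probability that the witness reports the true colour

# P(witness says "blue") = blue cabs correctly reported + green cabs misreported as blue
p_says_blue = p_blue * p_correct + p_green * (1 - p_correct)

# P(cab is blue | witness says "blue"), by Bayes' rule
posterior = p_blue * p_correct / p_says_blue
print(round(posterior, 2))  # 0.41
```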

In relation to the representativeness heuristic, Kahneman (2011) illustrated the ‘conjunction fallacy’ with the following example: based only on a detailed description of a character named Linda, doctoral students in the decision science programme of the Stanford Graduate School of Business, all of whom had taken several advanced courses in probability, statistics, and decision theory, were asked to rank various other descriptions of Linda according to their probability. Even Kahneman and Tversky were surprised to find that 85% of the students ranked ‘Linda is a bank teller and is active in the feminist movement’ as more likely than ‘Linda is a bank teller’.

From these and many other examples, one must conclude that even sophisticated humans use the representativeness heuristic to make probability judgements without referring to what they know about probability.

Representativeness is used to make probability judgements and judgements about causality. The similarity of A and B neither indicates that A causes B nor that B causes A. Nevertheless, if A precedes B and is similar to B, it is often judged to be B’s cause.

Adjustment and anchoring

In Tversky and Kahneman’s interpretation, the anchor is the first number made available in a question; it forms the centre of a circle whose radius (up or down) marks the acceptable range within which the best answer is assumed to lie (Baron, 2000). Anchoring has been used and tested in several academic and real-world scenarios, and in business negotiations, where parties anchor their prices to set the range of acceptance within which they can close the deal, deriving the ceiling and floor from the anchor. The impact is more dominant when parties lack the time to analyse actions thoroughly.

Significantly, even if the anchor lies far outside any reasonable range, it can still bias the estimates of all parties without them even realizing that it does (Englich et al., 2006).

In one of their experiments, Tversky and Kahneman (1974) asked some participants to quickly estimate the product of the numbers from 1 up to 8 and others to do so for the numbers from 8 down to 1. Since the time allowed was only about 5 seconds, the participants had to make a guess. The group that started from 1 gave a median estimate of 512, while the group that started from 8 gave a median estimate of 2,250. The right answer is 40,320.

Anchoring and adjustment is perhaps the least clearly delineated of the cognitive heuristics introduced by Kahneman and Tversky, as it can just as well be regarded as a bias rather than a heuristic. The problem is that the mind tends to fixate on the anchor and adjust relative to it, whether the anchor was introduced implicitly or explicitly. Some scholars even believe that this bias/heuristic is unavoidable. For instance, in one study, participants were asked whether they believed that Mahatma Gandhi died before or after the age of nine versus before or after the age of 140. Unquestionably, these anchors were considered unrealistic by the audience. However, when the participants were later asked to give their estimate of Gandhi’s age at death, the group anchored to nine years gave an average estimate of 50, while the group anchored to the higher value estimated the age of death to be as high as 67 (Strack and Mussweiler, 1997).

Gerd Gigerenzer: fast-and-frugal heuristics

The German psychologist Gerd Gigerenzer is one of the most influential figures in the field of decision-making, with a particular emphasis on the use of heuristics. He has built much of his research on the theories of Herbert Simon and considers that Simon’s theory of bounded rationality was unfinished (Gigerenzer, 2015 ). As for Kahneman and Tversky’s work, Gigerenzer has a different approach and challenges their ideas with various arguments, facts, and numbers.

Gigerenzer explores how people make sense of their reality with constrained time and data. Since the world around us is highly uncertain, complex, and volatile, he suggests that probability theory cannot stand as the ultimate standard and is incapable of handling everything, particularly when probabilities are unknown. Instead, people tend to use the effortless approach of heuristics. Gigerenzer introduced the concept of the adaptive toolbox, a collection of mental shortcuts that a person or group can choose from to solve the problem at hand (Gigerenzer, 2000). A heuristic is considered ecologically rational if it is well adapted to the structure of its environment (Gigerenzer, 2015).

A daring argument of Gigerenzer, which very much opposes the heuristics and biases approach of Kahneman and Tversky, is that heuristics cannot be considered irrational or inferior to a solution by optimization or probability calculation. He explicitly argues that heuristics are not gambling shortcuts that are faster but riskier (Gigerenzer, 2008 ), but points to several situations where less is more, meaning that results from frugal heuristics, which neglect some data, were nevertheless more accurate than results achieved by seemingly more elaborate multiple regression or Bayesian methods that try to incorporate all relevant data. While researchers consider this counterintuitive since a basic rule in research seems to be that more data is always better than less, Gigerenzer points out that the less-is-more effect (abbreviated as LIME) could be confirmed by computer simulations. Without denying that in some situations, the effect of using heuristics may be biased (Gigerenzer and Todd, 1999 ), Gigerenzer emphasizes that fast-and-frugal heuristics are basic, task-oriented choice systems that are a part of the decision-maker’s toolbox, the available collection of cognitive techniques for decision-making (Goldstein and Gigerenzer, 2002 ).

Heuristics are considered economical because they are easy to execute, seek limited data, and do not include many calculations. Contrary to most traditional decision-making models followed in the social and behavioural sciences, models of fast-and-frugal heuristics portray not just the result of the process but also the process itself. They comprise three simple building blocks: the search rule that specifies how information is searched for, the stopping rule that specifies when the information search will be stopped, and finally, the decision rule that specifies how the processed information is integrated into a decision (Goldstein and Gigerenzer, 2002 ).

Rather than characterizing heuristics as rules of thumb or mental shortcuts that can cause biases and must therefore be regarded as irrational, Gigerenzer and his co-workers emphasize that fast-and-frugal heuristics are often ecologically rational, even if the conjunction of them may not even be logically consistent (Gigerenzer and Todd, 1999 ).

According to Goldstein and Gigerenzer ( 2002 ), a decision maker’s pool of mental techniques may contain logic and probability theory, but it also embraces a set of simple heuristics. It is compared to a toolbox because just as a wood saw is perfect for cutting wood but useless for cutting glass or hammering a nail into a wall, the ingredients of the adaptive toolbox are intended to tackle specific scenarios.

For instance, there are specific heuristics for choice tasks, estimation tasks, and categorization tasks. In what follows, we will discuss two well-known examples of fast-and-frugal heuristics: the recognition heuristic (RH), which utilizes the absence of data, and the take-the-best heuristic (TTB), which purposely disregards the data.

Both heuristics apply to choice tasks, that is, to circumstances in which a decision-maker needs to decide which of two options has the higher value on some quantitative criterion.

Typical scenarios would be predicting which of two stocks will yield the higher return next month, which of two cars is more suitable for a family, or who is the better candidate for a particular job (Goldstein and Gigerenzer, 2002).

The recognition heuristic

The recognition heuristic has been examined broadly with the famous experiment asking which of two cities has the larger population. This experiment was conducted in 2002, and the participants were undergraduate students: one group in the USA and one in Germany. The question was: which has more inhabitants, San Diego or San Antonio? Given the cultural difference between the student groups and their level of knowledge about American cities, one could expect the American students to have a higher accuracy rate than their German peers; indeed, most German students did not even know that San Antonio is an American city (Goldstein and Gigerenzer, 2002). Surprisingly, the examiners, Goldstein and Gigerenzer, found the opposite of what was expected: 100% of the German students gave the correct answer, while the American students achieved an accuracy rate of only around 66%. Remarkably, the German students who had never heard of San Antonio had more correct answers. Their lack of knowledge enabled them to use the recognition heuristic, which states that if one of two objects is recognized and the other is not, then one should infer that the recognized object has the higher value on the relevant criterion. The American students could not use the recognition heuristic because they were familiar with both cities. Ironically, they knew too much.

The recognition heuristic is a powerful tool. In many cases, it supports swift decisions, since recognition is usually systematic rather than arbitrary. Useful applications include cities’ populations, players’ performance in major leagues, or writers’ productivity. However, the heuristic will be less effective for criteria that are only weakly linked to recognition, such as the age of a city’s mayor or its altitude above sea level (Gigerenzer and Todd, 1999).
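Stated as a procedure, the recognition heuristic for a two-alternative choice is very short; the following sketch assumes a recognizes predicate supplied by the decision-maker (names and data are purely illustrative):

```python
def recognition_heuristic(option_a, option_b, recognizes):
    """If exactly one option is recognized, infer it has the higher criterion value."""
    rec_a, rec_b = recognizes(option_a), recognizes(option_b)
    if rec_a and not rec_b:
        return option_a
    if rec_b and not rec_a:
        return option_b
    return None  # both or neither recognized: the heuristic does not apply

# Example: a student who recognizes San Diego but has never heard of San Antonio.
print(recognition_heuristic("San Diego", "San Antonio",
                            recognizes=lambda city: city == "San Diego"))
```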

Take-the-best heuristic

When the recognition heuristic is not applicable because the decision-maker has enough information about both options, another important heuristic can be used that relies on hints or cues to arrive at a decision. The take-the-best (TTB) heuristic relies only on specific cues or signals and does not require any complex calculations. In practice, it often boils down to a one-reason decision rule, a type of heuristic in which judgements are based on a single good reason only, ignoring other cues (Gigerenzer and Gaissmaier, 2011). According to the TTB heuristic, a decision-maker evaluates the case by selecting the attributes that are important to him and sorting these cues by importance to create a hierarchy for the decision to be taken. The alternatives are then compared according to the first, i.e., the most important, cue; if one alternative is best according to this cue, the decision is taken. Otherwise, the decision-maker moves down to the next cue in the hierarchy. In other words, the decision is based on the most important attribute that allows one to discriminate between the alternatives (Gigerenzer and Goldstein, 1996). Although this lexicographic preference ordering is well known from traditional economic theory, it appears there mainly as a counterexample to the existence of a real-valued utility function (Debreu, 1959). Surprisingly, however, it seems to be used in many critical situations. For example, in many airports, customs officials may decide whether a traveller is selected for a further check by looking only at the most important attributes, such as the city of departure, nationality, or luggage weight (Pachur and Marinello, 2013). Moreover, in 2012, a study explored voters’ views of how US presidential candidates would deal with the single issue that voters viewed as most significant, for example, the state of the economy or foreign policy. A model based on this attribute alone picked the winner in most cases (Graefe and Armstrong, 2012).

More precisely, the TTB heuristic has a stopping rule that is applied when the search reaches a discriminating cue: if the most important cue discriminates, there is no need to continue searching, and only that one cue is considered. Otherwise, the next most important cue is considered. If no discriminating cue is found, the heuristic makes a random guess (Gigerenzer and Gaissmaier, 2011).
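The three building blocks (search rule, stopping rule, and decision rule) can be seen in a minimal sketch of take-the-best for two alternatives; the cue functions and the traveller data below are invented purely for illustration:

```python
import random

def take_the_best(alt_a, alt_b, cues):
    """Decide between two alternatives using cues ordered by importance."""
    for cue in cues:                      # search rule: inspect cues in order of validity
        value_a, value_b = cue(alt_a), cue(alt_b)
        if value_a != value_b:            # stopping rule: stop at the first discriminating cue
            return alt_a if value_a > value_b else alt_b  # decision rule: one reason decides
    return random.choice([alt_a, alt_b])  # no cue discriminates: guess

# Invented example: screening travellers by two binary cues, most important first.
traveller_1 = {"risky_departure_city": 1, "heavy_luggage": 0}
traveller_2 = {"risky_departure_city": 0, "heavy_luggage": 1}
cues = [lambda t: t["risky_departure_city"], lambda t: t["heavy_luggage"]]
print(take_the_best(traveller_1, traveller_2, cues))  # the first cue already decides
```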

Empirical evidence on fast-and-frugal heuristics

Many studies have been conducted on fast-and-frugal heuristics, using analytical methods and simulations to investigate when and why heuristics yield accurate results on the one hand, and, on the other hand, using experiments and observational methods to find out whether and when people actually use fast-and-frugal heuristics (Luan et al., 2019). Structured examinations and benchmarking against standard models, for example, regression or Bayesian models, have shown that the accuracy of fast-and-frugal heuristics depends on the structure of the information environment (e.g., the distribution of cue validities, the intercorrelations between cues, etc.). In numerous situations, fast-and-frugal heuristics can perform well, particularly in generalization contexts, when making predictions for new cases that have not been previously experienced. Empirical examinations show that people use fast-and-frugal heuristics under time constraints, when data is hard to obtain, or when it must be retrieved from memory. Remarkably, some studies have examined how individuals adjust to various situations by learning: Rieskamp and Otto (2006) found that individuals apparently learn to select the heuristic that performs best in a specific domain. Moreover, Reimer and Katsikopoulos (2004) found that individuals also apply fast-and-frugal heuristics when making inferences in groups.

While interest in heuristics has been increasing, a considerable part of the literature has been critical. In particular, the heuristics and biases programme introduced by Kahneman and Tversky has been the target of more than one critique (Reisberg, 2013).

The arguments run mainly in two directions. The first is that the main focus is on coherence standards such as rationality, and that the detection of biases ignores the contextual and environmental factors within which the judgements occur (B.R. Newell, 2013). The second is that notions such as availability or representativeness are vague and ill-defined, and say little about the processes underlying the judgements (Gigerenzer, 1996). For example, it has been argued that the replies in the acclaimed Linda-the-bank-teller experiment could be considered sensible rather than biased if one applies conversational or colloquial standards instead of formal probability theory (Hilton, 1995).

The argument that certain phenomena receive only a vague explanation can be illustrated by considering the following two scenarios. People tend to believe that an opposite outcome will follow a streak of identical outcomes (e.g., that ‘heads’ should be the next outcome in a coin-flipping game after many consecutive ‘tails’). This is called the gambler’s fallacy (Barron and Leider, 2010). By contrast, the hot-hand fallacy (Gilovich et al., 1985) holds that people tend to believe that a streak of identical outcomes will continue when someone is having a lucky day (e.g., a basketball player taking a shot after a series of successful attempts). Ayton and Fischer (2004) argued that, although these two beliefs are quite opposite, both have been classified under the heuristic of representativeness. In both cases, a flawed idea of random events leads observers to expect that a short run of results will be representative of the whole process. In the coin-flipping scenario, people tend to believe that a long streak of tails should not occur; hence heads is predicted. In the case of the sports player, the streak of identical outcomes is expected to continue (Gilovich et al., 1985). Therefore, representativeness cannot be diagnosed without considering the expected results in advance. Moreover, the heuristic does not clarify why people feel the urge to believe that a run of random events should be representative of the underlying process, when in reality it need not be (Ayton and Fischer, 2004).

Nevertheless, the most common critique of Kahneman and Tversky is the idea that ‘we cannot be that dumb’: the heuristics and biases programme is said to be overly pessimistic in its assessment of average human decision-making. Moreover, humans collectively have accumulated many achievements and discoveries throughout history that would not have been possible if their capacity for adequate decision-making had been so limited (Gilovich and Griffin, 2002).

Similarly, the probabilistic mental models (PMM) theory of human inference, inspired by Simon and pioneered by Gigerenzer, has also been exposed to criticism (B.R. Newell et al., 2003). Indeed, the enticing character of heuristics, namely that they are both easy to apply and efficient, has made them popular within different domains. However, it has also made them vulnerable to replications or variations of the experiments that challenge the original results. For example, Daniel Oppenheimer (2003) argues that the recognition heuristic (RH) could not yield satisfactory results after he replicated the city-population experiment. He claims that the participants’ judgements failed to obey the RH not only when there were cues other than, and stronger than, mere recognition, but also in circumstances where recognition would have been the best cue available. In any case, one could reply that there are numerous methods in the adaptive toolbox and that, under certain conditions, people may prefer to use heuristics other than the RH. However, this reply is also questionable, since many heuristics thought to exist in the adaptive toolbox take the RH as an initial step (Gigerenzer and Todd, 1999). Hence, if individuals are not using the RH, they cannot use many of the other heuristics in the adaptive toolbox (Oppenheimer, 2003). Likewise, Newell et al. (2003) question whether fast-and-frugal heuristics accurately explain actual human behaviour. In two experiments, they challenged the take-the-best (TTB) heuristic, which is considered a building block of the PMM framework. The outcomes of these experiments, together with others, such as those of Jones et al. (2000) and Bröder (2000), suggest that the TTB heuristic is not a reliable approach even in circumstances favouring its use. In a somewhat heated debate published in the Psychological Review in 1996, Gigerenzer’s criticism of Kahneman and Tversky, namely that many of the so-called biases ‘disappear’ if frequencies rather than probabilities are assumed, was countered by Kahneman and Tversky (1996) by means of a detailed re-examination of the conjunction fallacy (or Linda problem). Gigerenzer (1996) remained unconvinced and was, in turn, blamed by Kahneman and Tversky (1996, p. 591) for merely reiterating ‘his objections … without answering our main arguments’.

Our historical review has revealed a number of issues that have received little attention in the literature.

Deliberate vs. automatic heuristics

We have differentiated between deliberate and automatic heuristics, which often seem to be confused in the literature. It is a widely shared view today that the human brain often relies heavily on the fast and effortless ‘System 1’ in decision-making but can also use the more demanding tools of ‘System 2’, and it has been acknowledged, e.g., by Kahneman (2011, p. 98), that some heuristics belong to System 1 and others to System 2; nevertheless, the two systems are not as clearly distinct as it may seem. In fact, the very wide range of what one may call ‘heuristics’ shows that there is a whole spectrum of fallible decision-making procedures, ranging from the probably innate problem-solving strategy of the baby that cries whenever it is hungry or has some other problem, to the most elaborate and sophisticated procedures of, e.g., Polya, Bolzano, or contemporary chess engines. One may be tempted to characterize instinctive procedures as subconscious and sophisticated ones as conscious, but a deliberate heuristic can very well become a subconsciously applied ‘habit of the mind’ or learnt routine with experience and repetition. Vice versa, automatic, subconscious heuristics can well be raised to consciousness and applied deliberately. E.g., the ‘inductive inference’ from tasty strawberries to the assumption that all red berries are sweet and edible may be quite automatic and subconscious in little children, but the philosophical literature on induction shows that it can be elaborated into something quite conscious. However, while the notion of consciousness may be crucial for an adequate understanding of heuristics in human cognition, for the time being it seems to remain a philosophical mystery (Harley, 2021; Searle, 1997); moreover, once programmed, even sophisticated heuristic algorithms can be executed by automata.

The deliberate heuristics that we reviewed also illustrate that some of them can hardly be called ‘simple’, ‘shortcuts’, or ‘rules of thumb’. For example, the heuristics of Descartes, Bolzano, and Polya each consist of a structured set of suggestions, and ‘to devise a plan’ for a mathematical proof is certainly not a shortcut. Llull (1308, p. 329), to take another example, wrote of his ‘ars magna’ that ‘the best kind of intellect can learn it in two months: one month for theory and another month for practice’.

Heuristics vs. algorithms

Our review of heuristics also allowed us to clarify the distinction between heuristics and algorithms. As evidenced by our glimpse at computer science, there are procedures that are quite obviously both an algorithm and a heuristic; within computer science, they are in fact quite common. Algorithms of the heuristic type may be required for certain problems even though an algorithm that finds the optimal solution exists ‘in principle’, as in the case of determining the optimal strategy in chess, where the brute-force method of enumerating all possible plays of chess is simply not practically feasible. In other cases, heuristic algorithms are used because an exhaustive search, while practically feasible, would be too costly or time-consuming. Clearly, for many problems there are also problem-solving algorithms which always do produce the optimal solution in a reasonable time frame. Given our definition of a heuristic as a fallible method, algorithms of this kind are counterexamples to the complaint that the notion has become so wide that ‘any procedure can be called a heuristic’. However, as we have seen, there are also heuristic procedures that are non-algorithmic. These may be necessary either because the problem to be solved is not sufficiently well-defined to allow for an algorithm, or because an algorithm that would solve the problem at hand is not known or does not exist. Kleining’s qualitative heuristics is an example of non-algorithmic heuristics necessitated by the ill-defined problems of research in the social sciences, while Polya’s heuristic for solving mathematical problems is an example of the latter case: an algorithm that would allow one to decide whether a given mathematical conjecture is a theorem does not exist (cf. Davis, 1965).

Pre-SEU vs. post-SEU heuristics

As we noted in the introduction, the emergence of the SEU theory can be regarded as a kind of watershed for the research on heuristics, as it came to be regarded as the standard definition of rational choice. Post-SEU, fallible methods of decision-making would have to face comparison with this standard. Gigerenzer’s almost belligerent criticism of SEU shows that even today it seems difficult to discuss the pros and cons of heuristics unless one relates them to the backdrop of SEU. However, his criticism of SEU is mostly en passant and seems to assume that the SEU model requires ‘known probabilities’ (e.g., Gigerenzer, 2021 ), ignoring the fact that it is, in general, subjective probabilities, as derived from the agent’s preferences among lotteries, that the model relies on (cf. e.g., Jeffrey, 1967 or Gilboa, 2011 ). In fact, when applied to an ill-defined decision problem in, e.g., management, the SEU theory may well be regarded as a heuristic—it asks you to consider the possible consequences of the relevant set of actions, your preferences among those consequences, and the likelihood of those consequences. To the extent that one may get all of these elements wrong, SEU is a fallible method of decision-making. To be sure, it is not a fast and effortless heuristic, but our historical review of pre-SEU heuristics has illustrated that heuristics may be quite elaborate and require considerable effort and attention.

It is quite true, of course, that the SEU heuristic will hardly be helpful in problem-solving that is not ‘just’ decision-making. If, e.g., the problem to be solved is to find a proof for a mathematical conjecture, the set of possible actions will in general be too vast to be practically contemplated, let alone evaluated according to preferences and probabilities.

Positive vs. negative heuristics

To the extent that the study of heuristics aims at understanding how decisions are actually made, it is not only positive heuristics that need to be considered. It will also be necessary to investigate the conditions that may prevent the agent from adopting certain courses of action. As we saw, Lakatos used the notion of negative heuristics quite explicitly to characterize research programmes, but we also briefly reviewed Duncker's notion of 'functional fixedness' as an example of a hindrance to adequate problem-solving. A systematic study of such negative heuristics seems to be missing in the literature, and we believe it would be a helpful complement to the study of positive heuristics that has dominated the literature we reviewed.

To the extent that heuristics are studied with the normative aim of identifying effective heuristics, it may also be useful to consider approaches that should not be taken. ‘Do not try to optimize!’ might be a negative heuristic favoured by the fast-and-frugal school of thought.

Heuristics as the product of evolution

Clearly, heuristics have always existed throughout the development of human knowledge, owing to the 'old mind's' evolutionary roots and the frequent necessity to apply fast and sufficiently reliable behaviour patterns. However, unlike the behaviour patterns of other animals, the methods used by humans in problem-solving are sufficiently diverse that the dual-process theory was suggested to provide some structure to the rich 'toolbox' humans can and do apply. As all our human DNA is the product of evolution, it is not only the intuitive inclination to react to certain stimuli in a particular way that must be seen as a product of evolution, but also our ability to abstain from following our gut feelings when there is reason to do so, and to reflect on and analyse the situation before we embark on a particular course of action. Quite frequently, we experience a tension between our intuitive inclinations and our analytic mind's judgement, but both are products of evolution, our biography, and the environment. Thus, pointing out that gut feelings are an evolved capacity of the brain in no way provides an argument for their superiority over the reflective mind.

Moreover, compared to the speed of problem change in our human lifetimes, biological evolution is very slow. The evolved capacities of the human brain may have been well-adapted to the survival needs of our ancestors some 300,000 years ago, but there is little reason to believe that they are uniformly well-adapted to human problem-solving in the 21st century.

Resource-bounded and ecological rationality

Throughout our review, the reader will have noticed that many heuristics have been suggested for specific problem areas. The methods of the ancient Greeks were mainly centred on solving geometrical problems; Llull was primarily concerned with theological questions; Descartes and Leibniz pursued 'mechanical' solutions to philosophical issues; Polya suggested heuristics for mathematics, Müller for engineering, and Kleining for social science research. This already suggests that heuristics suitable for one type of problem need not be suitable for a different type. Likewise, the automatic heuristics that both the Kahneman-Tversky and the Gigerenzer schools focused on are triggered by particular tasks. Simon's observation that the success of a given heuristic will depend on the environment in which it is employed is undoubtedly an important one; it has motivated Gigerenzer's notion of ecological rationality and is strikingly absent from the SEU model. If 'environment' is taken in a broad sense that includes the available resources and the cost of time and effort, the notion seems to cover what has been called resource-rational behaviour (e.g., Bhui et al., 2021).
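To illustrate Simon's point in miniature, the following sketch (our own construction with invented cue weights, not an example from the reviewed literature) applies one and the same one-reason heuristic, 'choose the option with the higher value on cue 1 and ignore everything else', in two simulated environments: one in which that cue dominates the criterion and one in which two cues matter equally.

```python
import random

random.seed(1)

def make_pair(weights):
    """Two options; the criterion is a weighted sum of two cues plus a little noise."""
    options = []
    for _ in range(2):
        cues = [random.random(), random.random()]
        criterion = weights[0] * cues[0] + weights[1] * cues[1] + random.gauss(0, 0.05)
        options.append((cues, criterion))
    return options

def one_reason_choice(options):
    """The heuristic: pick whichever option scores higher on cue 1 alone."""
    return max(options, key=lambda o: o[0][0])

def accuracy(weights, trials=10_000):
    """How often the heuristic picks the option with the truly higher criterion."""
    hits = 0
    for _ in range(trials):
        options = make_pair(weights)
        truly_best = max(options, key=lambda o: o[1])
        hits += one_reason_choice(options) is truly_best
    return hits / trials

print("cue 1 dominates the environment:", accuracy(weights=(0.9, 0.1)))
print("both cues matter equally:       ", accuracy(weights=(0.5, 0.5)))
```

Run as it stands, the same rule scores markedly higher in the first environment than in the second, which is the sense in which a heuristic is ecologically rather than universally rational.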

Avenues of further research

A comprehensive study describing the current status of research on heuristics and their relation to SEU seems to be missing and is beyond the scope of our brief historical review. Insights into their interrelationship can be expected from recent attempts at formal modelling of human cognition that take the issues of limited computational resources and the context-dependence of decision-making seriously. Lieder and Griffiths (2020), for example, do this from a Bayesian perspective, while Busemeyer et al. (2011) and Pothos and Busemeyer (2022) use a generalization of standard Kolmogorov probability theory that is also the basis of quantum mechanics and quantum computation. While it may seem at first glance that such modelling assumes even more computational power than the standard SEU model of decision-making, the computational power is not assumed on the part of the human decision-maker. Rather, the claim is that the decision-maker behaves as if they were solving an optimization problem under additional constraints, e.g., on computational resources. The 'as if' methodology employed here is well-known to economists (Friedman, 1953; Mäki, 1998) and also to mathematical biologists who have used Bayesian models to explain animal behaviour (McNamara et al., 2006; Oaten, 1977; Pérez-Escudero and de Polavieja, 2011). Evolutionary arguments might be invoked to support this methodology if a survival disadvantage can be shown to result from behaviour patterns that are not Bayesian optimal, but we are not aware of research that would substantiate such arguments. However, attempting to do so by embedding formal models of cognition in models of evolutionary game theory may be a promising avenue for further research.

NP stands for 'nondeterministic polynomial time', which indicates that the optimal solution can be found by a nondeterministic Turing machine in a running time that is bounded by a polynomial function of the input size. In fact, the TSP is 'NP-hard', which means that it is 'at least as hard as the hardest problems in the category of NP problems'.

Agre P, Horswill I (1997) Lifeworld analysis. J Artif Intell Res 6:111–145

Ayton P, Fischer I (2004) The hot hand fallacy and the gambler’s fallacy. Two faces of subjective randomness. Memory Cogn 32:8

Banse G, Friedrich K (2000) Konstruieren zwischen Kunst und Wissenschaft. Edition Sigma, Idee‐Entwurf‐Gestaltung, Berlin

Baron J (2000) Thinking and deciding. Cambridge University Press

Barron G, Leider S (2010) The role of experience in the Gambler’s Fallacy. J Behav Decision Mak 23:1

Barros G (2010) Herbert A Simon and the concept of rationality: boundaries and procedures. Brazilian. J Political Econ 30:3

Baumeister RF, Vohs KD (2007) Encyclopedia of social psychology, vol 1. SAGE

Bazerman MH, Moore DA (1994) Judgment in managerial decision making. Wiley, New York

Bentley JL (1982) Writing efficient programs Prentice-Hall software series. Prentice-Hall

Bhui R, Lai L, Gershman S (2021) Resource-rational decision making. Curr Opin Behav Sci 41:15–21. https://doi.org/10.1016/j.cobeha.2021.02.015

Bolzano B (1837) Wissenschaftslehre. Seidelsche Buchhandlung, Sulzbach

Bossaerts P, Murawski C (2017) Computational complexity and human decision-making. Trends Cogn Sci 21(12):917–929

Boyer CB (1991) The Arabic Hegemony. A History of Mathematics. Wiley, New York

Bröder A (2000) Assessing the empirical validity of the “Take-the-best” heuristic as a model of human probabilistic inference. J Exp Psychol Learn Mem Cogn 26:5

Burke E, Kendall G, Newall J, Hart E, Ross P, Schulenburg S (2003) Hyper-heuristics: an emerging direction in modern search technology. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Busemeyer JR, Pothos EM, Franco R, Trueblood JS (2011) A quantum theoretical explanation for probability judgment errors. Psychol Rev 118(2):193

Buss DM, Kenrick DT (1998) Evolutionary social psychology. In: D T Gilbert, S T Fiske, G Lindzey (eds.), The handbook of social psychology. McGraw-Hill, p. 982–1026

Byron M (1998) Satisficing and optimality. Ethics 109:1

Davis M (ed) (1965) The undecidable. Basic papers on undecidable propositions, unsolvable problems and computable functions. Raven Press, New York

Debreu G (1959) Theory of value: an axiomatic analysis of economic equilibrium. Yale University Press

Descartes R (1908) Rules for the direction of the mind. In: Adam C, Tannery P (eds) Oeuvres de Descartes, vol 10. J Vrin, Paris

Descartes R (1998) Discourse on the method for conducting one’s reason well and for seeking the truth in the sciences (1637) (trans and ed: Cress D). Hackett, Indianapolis

Dunbar RIM (1998) Grooming, gossip, and the evolution of language. Harvard University Press

Duncker K (1935) Zur Psychologie des produktiven Denkens. Springer

Englich B, Mussweiler T, Strack F (2006) Playing dice with criminal sentences: the influence of irrelevant anchors on experts’ judicial decision making. Personal Soc Psychol Bull 32:2

Evans JSB (2010) Thinking twice: two minds in one brain. Oxford University Press

Farr RM (1996) The roots of modern social psychology, 1872–1954. Blackwell Publishing

Farris PW, Bendle N, Pfeifer P, Reibstein D (2010) Marketing metrics: the definitive guide to measuring marketing performance. Pearson Education

Fidora A, Sierra C (2011) Ramon Llull, from the Ars Magna to artificial intelligence. Artificial Intelligence Research Institute, Barcelona

Frantz R (2003) Herbert Simon. Artificial intelligence as a framework for understanding intuition. J Econ Psychol 24:2. https://doi.org/10.1016/S0167-4870(02)00207-6

Friedman M (1953) The methodology of positive economics. In: Friedman M (ed) Essays in positive economics. University of Chicago Press

Ghiselin MT (1973) Darwin and evolutionary psychology. Science (New York, NY) 179:4077

Gibbons A (2007) Paleoanthropology. Food for thought. Science (New York, NY) 316:5831

Gigerenzer G (1996) On narrow norms and vague heuristics: a reply to Kahneman and Tversky. Psychol Rev 103(3):592–596

Gigerenzer G (2000) Adaptive thinking: rationality in the real world. Oxford University Press, USA

Gigerenzer G (2008) Why heuristics work. Perspect Psychol Sci 3:1

Gigerenzer G (2015) Simply rational: decision making in the real world. Evol Cogn

Gigerenzer G (2021) Embodied heuristics. Front Psychol https://doi.org/10.3389/fpsyg.2021.711289

Gigerenzer G, Gaissmaier W (2011) Heuristic decision making. Annual Review of Psychology 62, p 451–482

Gigerenzer G, Goldstein DG (1996) Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev 103:4

Gigerenzer G, Selten R (eds) (2001) Bounded rationality: the adaptive toolbox. MIT Press

Gigerenzer G, Todd PM (1999) Simple heuristics that make us smart. Oxford University Press, USA

Gilboa I (2011) Making better decisions. Decision theory in practice. Wiley-Blackwell

Gilovich T, Griffin D (2002) Introduction—heuristics and biases: then and now in heuristics and biases: the psychology of intuitive judgment (8). Cambridge University Press

Gilovich T, Vallone R, Tversky A (1985) The hot hand in basketball: on the misperception of random sequences. Cogn Psychol 17:3

Glaveanu VP (2019) The creativity reader. Oxford University Press

Glover F, Kochenberger GA (eds) (2003) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Goldstein DG, Gigerenzer G (2002) Models of ecological rationality: the recognition heuristic. Psychol Rev 109:1

Graefe A, Armstrong JS (2012) Predicting elections from the most important issue: a test of the take-the-best heuristic. J Behav Decision Mak 25:1

Groner M, Groner R, Bischof WF (1983) Approaches to heuristics: a historical review. In: Groner R, Groner M, Bischof WF (eds) Methods of heuristics. Erlbaum

Groner R, Groner M (1991) Heuristische versus algorithmische Orientierung als Dimension des individuellen kognitiven Stils. In: Grawe K, Semmer N, Hänni R (Hrsg) Üher die richtige Art, Psychologie zu betreiben. Hogrefe, Göttingen

Gugerty L (2006) Newell and Simon’s logic theorist: historical background and impact on cognitive modelling. In: Proceedings of the human factors and ergonomics society annual meeting. Symposium conducted at the meeting of SAGE Publications. Sage, Los Angeles, CA

Harel D (2000) Computers Ltd: what they really can’t do. Oxford University Press

Harley TA (2021) The science of consciousness: waking, sleeping and dreaming. Cambridge University Press

Harris B (1979) Whatever happened to little Albert? Am Psychol 34:2

Heath TL (1926) The thirteen books of Euclid’s elements. Introduction to vol I, 2nd edn. Cambridge University Press

Hertwig R, Pachur T (2015) Heuristics, history of. In: International encyclopedia of the social behavioural sciences. Elsevier, pp. 829–835

Hilton DJ (1995) The social context of reasoning: conversational inference and rational judgment. Psychol Bull 118:2

Hopcroft JE, Motwani R, Ullman JD (2007) Introduction to Automata Theory, languages, and computation. Addison Wesley, Boston/San Francisco/New York

Jeffrey R (1967) The logic of decision, 2nd edn. McGraw-Hill

Jones S, Juslin P, Olsson H, Winman A (2000) Algorithm, heuristic or exemplar: Process and representation in multiple-cue judgment. In: Proceedings of the 22nd annual conference of the Cognitive Science Society. Symposium conducted at the meeting of Erlbaum, Hillsdale, NJ

Kahneman D (2011) Thinking, fast and slow. Farar, Straus and Giroux

Kahneman D, Klein G (2009) Conditions for intuitive expertise: a failure to disagree. Am Psychol 64:6

Kahneman D, Tversky A (1996) On the reality of cognitive illusions. Psychol Rev 103(3):582–591

Khaldun I (1967) The Muqaddimah. An introduction to history (trans: Arabic by Rosenthal F). Abridged and edited by Dawood NJ. Princeton University Press

Klein G (2001) The fiction of optimization. In: Gigerenzer G, Selten R (eds) Bounded Rationality: The Adaptive Toolbox. MIT Press Editors

Kleining G (1982) Umriss zu einer Methodologie qualitativer Sozialforschung. Kölner Z Soziol Sozialpsychol 34:2

Kleining G (1995) Von der Hermeneutik zur qualitativen Heuristik. Beltz

Lakatos I (1970) Falsification and the methodology of scientific research programmes. In: Lakatos I, Musgrave A (eds) Criticism and the growth of knowledge. Cambridge University Press

Leibniz GW (1880) Die Philosophischen Schriften von GW Leibniz IV, hrsg von CI Gerhardt

Lerner RM (1978) Nature Nurture and Dynamic Interactionism. Human Development 21(1):1–20. https://doi.org/10.1159/000271572

Lieder F, Griffiths TL (2020) Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences. Vol 43, e1. Cambridge University Press

Link D (2010) Scrambling TRUTH: rotating letters as a material form of thought. Variantology 4, p. 215–266

Llull R (1308) Ars Generalis Ultima (trans: Dambergs Y). https://lullianarts.narpan.net/

Luan S, Reb J, Gigerenzer G (2019) Ecological rationality: fast-and-frugal heuristics for managerial decision-making under uncertainty. Acad Manag J 62:6

Mäki U (1998) As if. In: Davis J, Hands DW, Mäki U (ed) The handbook of economic methodology. Edward Elgar Publishing

Martí R, Pardalos P, Resende M (eds) (2018) Handbook of heuristics. Springer, Cham

McDougall W (2015) An introduction to social psychology. Psychology Press

McNamara JM, Green RF, Olsson O (2006) Bayes’ theorem and its applications in animal behaviour. Oikos 112(2):243–251. http://www.jstor.org/stable/3548663

Newborn M (1997) Kasparov versus Deep Blue: computer chess comes of age. Springer

Newell A, Shaw JC, Simon HA (1959) Report on a general problem-solving program. In: R. Oldenbourg (ed) IFIP congress. UNESCO, Paris

Newell A, Simon HA (1972) Human problem solving. Prentice-Hall, Englewood Cliffs, NJ

Newell BR (2013) Judgment under uncertainty. In: Reisberg D (ed) The Oxford handbook of cognitive psychology. Oxford University Press

Newell BR, Weston NJ, Shanks DR (2003) Empirical tests of a fast-and-frugal heuristic: not everyone “takes the best”. Organ Behav Hum Decision Processes 91:1

Oaten A (1977) Optimal foraging in patches: a case for stochasticity. Theor Popul Biol 12(3):263–285

Oppenheimer DM (2003) Not so fast! (and not so frugal!): rethinking the recognition heuristic. Cognition 90:1

Pachur T, Marinello G (2013) Expert intuitions: how to model the decision strategies of airport customs officers? Acta Psychol 144:1

Pearl J (1984) Heuristics: intelligent search strategies for computer problem solving. Addison-Wesley Longman Publishing Co, Inc

Pérez-Escudero A, de Polavieja G (2011) Collective animal behaviour from Bayesian estimation and probability matching. Nature Precedings

Pinheiro CAR, McNeill F (2014) Heuristics in analytics: a practical perspective of what influences our analytical world. Wiley Online Library

Polya G (1945) How to solve it. Princeton University Press

Polya G (1954) Induction and analogy in mathematics. Princeton University Press

Pombo O (2002) Leibniz and the encyclopaedic project. In: Actas do Congresso Internacional Ciência, Tecnologia Y Bien Comun: La atualidad de Leibniz

Pothos EM, Busemeyer JR (2022) Quantum cognition. Annu Rev Psychol 73:749–778

Priest G (2008) An introduction to non-classical logic: from if to is. Cambridge University Press

Ramsey FP (1926) Truth and probability. In: Braithwaite RB (ed) The foundations of mathematics and other logical essays. McMaster University Archive for the History of Economic Thought. https://EconPapers.repec.org/RePEc:hay:hetcha:ramsey1926

Reimer T, Katsikopoulos K (2004) The use of recognition in group decision-making. Cogn Sci 28:6

Reisberg D (ed) (2013) The Oxford handbook of cognitive psychology. Oxford University Press

Rieskamp J, Otto PE (2006) SSL: a theory of how people learn to select strategies. J Exp Psychol Gen 135:2

Ritchey T (2022) Ramon Llull and the combinatorial art. https://www.swemorph.com/amg/pdf/ars-morph-1-draft-ch-4.pdf

Ritter J, Gründer K, Gabriel G, Schepers H (2017) Historisches Wörterbuch der Philosophie online. Schwabe Verlag

Russell SJ, Norvig P, Davis E (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice-Hall series in artificial intelligence. Prentice-Hall

Savage LJ (ed) (1954) The foundations of statistics. Courier Corporation

Schacter D, Gilbert D, Wegner D (2011) Psychology, 2nd edn. Worth

Schaeffer J, Burch N, Bjornsson Y, Kishimoto A, Muller M, Lake R, Lu P, Sutphen S (2007) Checkers is solved. Science 317(5844):1518–1522

Schreurs BG (1989) Classical conditioning of model systems: a behavioural review. Psychobiology 17:2

Scopus (2022) Search “heuristics”. https://www.scopus.com/standard/marketing.uri (TITLE-ABS-KEY(heuristic) AND (LIMIT-TO (SUBJAREA,"DECI") OR LIMIT-TO (SUBJAREA,"SOCI") OR LIMIT-TO (SUBJAREA,"BUSI"))) Accessed on 16 Apr 2022

Searle JR (1997) The mystery of consciousness. Granta Books

Semaan G, Coelho J, Silva E, Fadel A, Ochi L, Maculan N (2020) A brief history of heuristics: from Bounded Rationality to Intractability. IEEE Latin Am Trans 18(11):1975–1986. https://latamt.ieeer9.org/index.php/transactions/article/view/3970/682

Sen S (2020) The environment in evolution: Darwinism and Lamarckism revisited. Harvest Volume 1(2):84–88. https://doi.org/10.2139/ssrn.3537393

Shah AK, Oppenheimer DM (2008) Heuristics made easy: an effort-reduction framework. Psychol Bull 134:2

Siitonen A (2014) Bolzano on finding out intentions behind actions. In: From the ALWS archives: a selection of papers from the International Wittgenstein Symposia in Kirchberg am Wechsel

Simon HA (1955) A behavioural model of rational choice. Q J Econ 69:1

Simon HA, Newell A (1958) Heuristic problem solving: the next advance in operations research. Oper Res 6(1):1–10. http://www.jstor.org/stable/167397

Smith R (2020) Aristotle’s logic. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, 2020th edn. Metaphysics Research Lab, Stanford University

Smulders TV (2009) Darwin 200: special feature on brain evolution. Biology Letters 5(1), p. 105–107

Sörensen K, Sevaux M, Glover F (2018) A history of metaheuristics. In: Martí R, Pardalos P, Resende M (eds) Handbook of heuristics. Springer, Cham

Stephenson N (2003) Theoretical psychology: critical contributions. Captus Press

Strack F, Mussweiler T (1997) Explaining the enigmatic anchoring effect: mechanisms of selective accessibility. J Person Soc Psychol 73:3

Sullivan D (2002) How search engines work. SEARCH ENGINE WATCH, at http://www.searchenginewatch.com/webmasters/work.Html (Last Updated June 26, 2001) (on File with the New York University Journal of Legislation and Public Policy). http://www.searchenginewatch.com

Suppes P (1983) Heuristics and the axiomatic method. In: Groner R et al (ed) Methods of Heuristics. Routledge

Turing A (1937) On computable numbers, with an application to the entscheidungsproblem. Proc Lond Math Soc s2-42(1):230–265

Tversky A, Kahneman D (1973) Availability: a heuristic for judging frequency and probability. Cogn Psychol 5:2

Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science (New York, NY) 185:4157

Vardi MY (2012) Artificial intelligence: past and future. Commun ACM 55:1

Vikhar PA (2016) Evolutionary algorithms: a critical review and its future prospects. Paper presented at the international conference on global trends in signal processing, information computing and communication (ICGTSPICC). IEEE, pp. 261–265

Volz V, Rudolph G, Naujoks B (2016) Demonstrating the feasibility of automatic game balancing. Paper presented at the proceedings of the Genetic and Evolutionary Computation Conference, pp. 269–276

von Neumann J, Morgenstern O (1944) Theory of games and economic behaviour. Princeton University Press, Princeton (2nd edn 1947)

Zermelo E (1913) Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. In: Proceedings of the fifth international congress of mathematicians. Symposium conducted at the meeting of Cambridge University Press, Cambridge. Cambridge University Press, Cambridge

Zilio D (2013) Filling the gaps: skinner on the role of neuroscience in the explanation of behavior. Behavior and Philosophy, 41, p. 33–59

Acknowledgements

We would like to extend our sincere thanks to the reviewers for their valuable time and effort in reviewing our work. Their insightful comments and suggestions have greatly improved the quality of our manuscript.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

HHL Leipzig Graduate School of Management, Leipzig, Germany

Mohamad Hjeij & Arnis Vilks

Corresponding author

Correspondence to Mohamad Hjeij.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Hjeij, M., Vilks, A. A brief history of heuristics: how did research on heuristics evolve? Humanit Soc Sci Commun 10, 64 (2023). https://doi.org/10.1057/s41599-023-01542-z

Received: 25 July 2022

Accepted: 30 January 2023

Published: 17 February 2023

DOI: https://doi.org/10.1057/s41599-023-01542-z

Thinking and Intelligence

Problem Solving

OpenStaxCollege

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

PROBLEM-SOLVING STRATEGIES

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( [link] ). For example, a well-known strategy is trial and error . The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
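As a small concrete illustration (ours, not part of the original text, and with invented pizza-dough quantities), the recipe-doubling task mentioned at the start of this section is itself an algorithm: a fixed sequence of steps that produces the same result every time it is followed.

```python
# Doubling a recipe as an algorithm: the same fixed steps, the same result.
# The ingredient quantities are made up for the example.

recipe = {"flour (g)": 500, "water (ml)": 325, "yeast (g)": 7, "salt (g)": 10}

def scale_recipe(ingredients, factor=2):
    """Multiply every quantity by the same factor, step by step, no judgement calls."""
    return {name: quantity * factor for name, quantity in ingredients.items()}

print(scale_recipe(recipe))
# {'flour (g)': 1000, 'water (ml)': 650, 'yeast (g)': 14, 'salt (g)': 20}
```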

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
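The same working-backwards plan can be written out as a short calculation; the 30-minute margin for I-95 traffic and parking is our own added assumption, while the 3:30 PM target and the 2.5-hour drive come from the example above.

```python
from datetime import datetime, timedelta

# Work backwards from the goal state (seated by 3:30 PM) to the departure time.
arrival_target = datetime(2024, 6, 1, 15, 30)    # the wedding Saturday (any date works)
drive_time     = timedelta(hours=2, minutes=30)  # D.C. to Philadelphia without traffic
safety_margin  = timedelta(minutes=30)           # assumed buffer for traffic and parking

leave_by = arrival_target - drive_time - safety_margin
print("Leave the house by", leave_by.strftime("%I:%M %p"))  # 12:30 PM
```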

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( [link] ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

[Figure: A 4×4 sudoku grid. Given digits: row 1: 3, _, _, 2; row 2: _, 4, 1, _; row 3: _, 3, 2, _; row 4: 4, _, _, 1.]
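If you would rather hand the puzzle to an algorithm, the following sketch (ours, not part of the original text) solves the 4×4 grid by backtracking: it exhaustively tries digits and undoes any choice that breaks the row, column, or bolded-box rule, so unlike a human solver's heuristics it is guaranteed to find the answer.

```python
# Backtracking solver for the 4x4 sudoku shown above; 0 marks an empty cell.
grid = [
    [3, 0, 0, 2],
    [0, 4, 1, 0],
    [0, 3, 2, 0],
    [4, 0, 0, 1],
]

def valid(g, r, c, d):
    """May digit d go at row r, column c without repeating in its row, column or 2x2 box?"""
    if d in g[r] or d in (g[i][c] for i in range(4)):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)            # top-left corner of the 2x2 box
    return all(g[br + i][bc + j] != d for i in range(2) for j in range(2))

def solve(g):
    """Fill the first empty cell with a legal digit; backtrack on dead ends."""
    for r in range(4):
        for c in range(4):
            if g[r][c] == 0:
                for d in range(1, 5):
                    if valid(g, r, c, d):
                        g[r][c] = d
                        if solve(g):
                            return True
                        g[r][c] = 0                # undo and try the next digit
                return False                       # no digit fits here: backtrack
    return True                                    # no empty cells left: solved

solve(grid)
print(grid)  # [[3, 1, 4, 2], [2, 4, 1, 3], [1, 3, 2, 4], [4, 2, 3, 1]]
```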

Here is another popular type of puzzle ( [link] ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

[Figure: Nine dots arranged in three rows of three, forming a square.]

Take a look at the “Puzzling Scales” logic puzzle below ( [link] ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

[Figure: “Sam Loyd’s Puzzling Scales.” A first balanced scale shows 3 blocks and a top on one side and 12 marbles on the other (“Since the scales now balance”); a second balanced scale shows the top alone on one side and 1 block and 8 marbles on the other (“And balance when arranged this way”); the question asks, “Then how many marbles will it require to balance with that top?”]

PITFALLS TO PROBLEM SOLVING

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck, but she just needs to go to another doorway instead of trying to get out through the locked doorway. A mental set occurs when you persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Check out this Apollo 13 scene where the group of NASA engineers is given the task of overcoming functional fixedness.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in [link].

Please visit this site to see a clever music video that a high school teacher made to explain these and other cognitive biases to his AP psychology students.

Were you able to determine how many marbles are needed to balance the scales in [link] ? You need nine. Were you able to solve the problems in [link] and [link] ? Here are the answers ( [link] ).
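For readers who want to check the marble count, one way to work it out algebraically (writing B for one block, T for the top, and M for one marble) is:

```latex
\begin{aligned}
3B + T &= 12M \\
T &= B + 8M \quad\Rightarrow\quad B = T - 8M \\
3(T - 8M) + T &= 12M \quad\Rightarrow\quad 4T = 36M \quad\Rightarrow\quad T = 9M
\end{aligned}
```

so the top alone balances nine marbles.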

[Figure: Solutions. The completed sudoku (given digits in blue, solved digits in red): row 1: 3, 1, 4, 2; row 2: 2, 4, 1, 3; row 3: 1, 3, 2, 4; row 4: 4, 2, 3, 1. The nine-dot puzzle is solved with four connected straight lines that extend beyond the borders of the square.]

Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.

Review Questions

A specific formula for solving a problem is called ________.

  • an algorithm
  • a heuristic
  • a mental set
  • trial and error

A mental shortcut in the form of a general problem-solving framework is called ________.

Which type of bias involves becoming fixated on a single trait of a problem?

  • anchoring bias
  • confirmation bias
  • representative bias
  • availability bias

Which type of bias involves relying on a false stereotype to make a decision?

Critical Thinking Questions

What is functional fixedness and how can overcoming it help you solve problems?

Functional fixedness occurs when you cannot see a use for an object other than the use for which it was intended. For example, if you need something to hold up a tarp in the rain, but only have a pitchfork, you must overcome your expectation that a pitchfork can only be used for garden chores before you realize that you could stick it in the ground and drape the tarp on top of it to hold it up.

How does an algorithm save you time and energy when solving a problem?

An algorithm is a proven formula for achieving a desired outcome. It saves time because if you follow it exactly, you will solve the problem without having to figure out how to solve the problem. It is a bit like not reinventing the wheel.

Personal Application Question

Which type of bias do you recognize in your own decision making processes? How has this bias affected how you’ve made decisions in the past and how can you use your awareness of it to improve your decisions making skills in the future?

Problem Solving Copyright © 2014 by OpenStaxCollege is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Problem-Solving Strategies and Obstacles

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

From deciding what to eat for dinner to considering whether it's the right time to buy a house, problem-solving is a large part of our daily lives. Learn some of the problem-solving strategies that exist and how to use them in real life, along with ways to overcome obstacles that are making it harder to resolve the issues you face.

What Is Problem-Solving?

In cognitive psychology , the term 'problem-solving' refers to the mental process that people go through to discover, analyze, and solve problems.

A problem exists when there is a goal that we want to achieve but the process by which we will achieve it is not obvious to us. Put another way, there is something that we want to occur in our life, yet we are not immediately certain how to make it happen.

Maybe you want a better relationship with your spouse or another family member but you're not sure how to improve it. Or you want to start a business but are unsure what steps to take. Problem-solving helps you figure out how to achieve these desires.

The problem-solving process involves:

  • Discovery of the problem
  • Deciding to tackle the issue
  • Seeking to understand the problem more fully
  • Researching available options or solutions
  • Taking action to resolve the issue

Before problem-solving can occur, it is important to first understand the exact nature of the problem itself. If your understanding of the issue is faulty, your attempts to resolve it will also be incorrect or flawed.

Problem-Solving Mental Processes

Several mental processes are at work during problem-solving. Among them are:

  • Perceptually recognizing the problem
  • Representing the problem in memory
  • Considering relevant information that applies to the problem
  • Identifying different aspects of the problem
  • Labeling and describing the problem

Problem-Solving Strategies

There are many ways to go about solving a problem. Some of these strategies might be used on their own, or you may decide to employ multiple approaches when working to figure out and fix a problem.

Algorithms

An algorithm is a step-by-step procedure that, by following certain "rules," produces a solution. Algorithms are commonly used in mathematics to solve division or multiplication problems. But they can be used in other fields as well.
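As a small illustration (ours, not from the article), the kind of step-by-step rule-following the term describes can be as simple as the grade-school idea of dividing by repeated subtraction:

```python
def divide(dividend, divisor):
    """Integer division by repeated subtraction: follow the rules, get the answer."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("this sketch handles non-negative dividends and positive divisors")
    quotient, remainder = 0, dividend
    while remainder >= divisor:     # keep subtracting until less than the divisor remains
        remainder -= divisor
        quotient += 1
    return quotient, remainder

print(divide(17, 5))  # (3, 2), because 17 = 3 * 5 + 2
```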

In psychology, algorithms can be used to help identify individuals with a greater risk of mental health issues. For instance, research suggests that certain algorithms might help us recognize children with an elevated risk of suicide or self-harm.

One benefit of algorithms is that they guarantee an accurate answer. However, they aren't always the best approach to problem-solving, in part because detecting patterns can be incredibly time-consuming.

There are also concerns when machine learning is involved—also known as artificial intelligence (AI)—such as whether they can accurately predict human behaviors.

Heuristics

Heuristics are shortcut strategies that people can use to solve a problem at hand. These "rule of thumb" approaches allow you to simplify complex problems, reducing the total number of possible solutions to a more manageable set.

If you find yourself sitting in a traffic jam, for example, you may quickly consider other routes, taking one to get moving once again. When shopping for a new car, you might think back to a prior experience when negotiating got you a lower price, then employ the same tactics.

While heuristics may be helpful when facing smaller issues, major decisions shouldn't necessarily be made using a shortcut approach. Heuristics also don't guarantee an effective solution, such as when trying to drive around a traffic jam only to find yourself on an equally crowded route.

Trial and Error

A trial-and-error approach to problem-solving involves trying a number of potential solutions to a particular issue, then ruling out those that do not work. If you're not sure whether to buy a shirt in blue or green, for instance, you may try on each before deciding which one to purchase.

This can be a good strategy to use if you have a limited number of solutions available. But if there are many different choices available, narrowing down the possible options using another problem-solving technique can be helpful before attempting trial and error.

Insight

In some cases, the solution to a problem can appear as a sudden insight. You are facing an issue in a relationship or your career when, out of nowhere, the solution appears in your mind and you know exactly what to do.

Insight can occur when the problem in front of you is similar to an issue that you've dealt with in the past, although you may not recognize what is occurring, since the underlying mental processes that lead to insight often happen outside of conscious awareness.

Research indicates that insight is most likely to occur during times when you are alone—such as when going on a walk by yourself, when you're in the shower, or when lying in bed after waking up.

How to Apply Problem-Solving Strategies in Real Life

If you're facing a problem, you can implement one or more of these strategies to find a potential solution. Here's how to use them in real life:

  • Create a flow chart . If you have time, you can take advantage of the algorithm approach to problem-solving by sitting down and making a flow chart of each potential solution, its consequences, and what happens next.
  • Recall your past experiences . When a problem needs to be solved fairly quickly, heuristics may be a better approach. Think back to when you faced a similar issue, then use your knowledge and experience to choose the best option possible.
  • Start trying potential solutions . If your options are limited, start trying them one by one to see which solution is best for achieving your desired goal. If a particular solution doesn't work, move on to the next.
  • Take some time alone . Since insight is often achieved when you're alone, carve out time to be by yourself for a while. The answer to your problem may come to you, seemingly out of the blue, if you spend some time away from others.

Obstacles to Problem-Solving

Problem-solving is not a flawless process as there are a number of obstacles that can interfere with our ability to solve a problem quickly and efficiently. These obstacles include:

  • Assumptions: When dealing with a problem, people can make assumptions about the constraints and obstacles that prevent certain solutions. Thus, they may not even try some potential options.
  • Functional fixedness : This term refers to the tendency to view problems only in their customary manner. Functional fixedness prevents people from fully seeing all of the different options that might be available to find a solution.
  • Irrelevant or misleading information: When trying to solve a problem, it's important to distinguish between information that is relevant to the issue and irrelevant data that can lead to faulty solutions. The more complex the problem, the easier it is to focus on misleading or irrelevant information.
  • Mental set: A mental set is a tendency to only use solutions that have worked in the past rather than looking for alternative ideas. A mental set can work as a heuristic, making it a useful problem-solving tool. However, mental sets can also lead to inflexibility, making it more difficult to find effective solutions.

How to Improve Your Problem-Solving Skills

In the end, if your goal is to become a better problem-solver, it's helpful to remember that this is a process. Thus, if you want to improve your problem-solving skills, following these steps can help lead you to your solution:

  • Recognize that a problem exists . If you are facing a problem, there are generally signs. For instance, if you have a mental illness , you may experience excessive fear or sadness, mood changes, and changes in sleeping or eating habits. Recognizing these signs can help you realize that an issue exists.
  • Decide to solve the problem . Make a conscious decision to solve the issue at hand. Commit to yourself that you will go through the steps necessary to find a solution.
  • Seek to fully understand the issue . Analyze the problem you face, looking at it from all sides. If your problem is relationship-related, for instance, ask yourself how the other person may be interpreting the issue. You might also consider how your actions might be contributing to the situation.
  • Research potential options . Using the problem-solving strategies mentioned, research potential solutions. Make a list of options, then consider each one individually. What are some pros and cons of taking the available routes? What would you need to do to make them happen?
  • Take action . Select the best solution possible and take action. Action is one of the steps required for change . So, go through the motions needed to resolve the issue.
  • Try another option, if needed . If the solution you chose didn't work, don't give up. Either go through the problem-solving process again or simply try another option.

You can find a way to solve your problems as long as you keep working toward this goal—even if the best solution is simply to let go because no other good solution exists.

Sarathy V. Real world problem-solving .  Front Hum Neurosci . 2018;12:261. doi:10.3389/fnhum.2018.00261

Dunbar K. Problem solving . A Companion to Cognitive Science . 2017. doi:10.1002/9781405164535.ch20

Stewart SL, Celebre A, Hirdes JP, Poss JW. Risk of suicide and self-harm in kids: The development of an algorithm to identify high-risk individuals within the children's mental health system . Child Psychiat Human Develop . 2020;51:913-924. doi:10.1007/s10578-020-00968-9

Rosenbusch H, Soldner F, Evans AM, Zeelenberg M. Supervised machine learning methods in psychology: A practical introduction with annotated R code . Soc Personal Psychol Compass . 2021;15(2):e12579. doi:10.1111/spc3.12579

Mishra S. Decision-making under risk: Integrating perspectives from biology, economics, and psychology . Personal Soc Psychol Rev . 2014;18(3):280-307. doi:10.1177/1088868314530517

Csikszentmihalyi M, Sawyer K. Creative insight: The social dimension of a solitary moment . In: The Systems Model of Creativity . 2015:73-98. doi:10.1007/978-94-017-9085-7_7

Chrysikou EG, Motyka K, Nigro C, Yang SI, Thompson-Schill SL. Functional fixedness in creative thinking tasks depends on stimulus modality .  Psychol Aesthet Creat Arts . 2016;10(4):425‐435. doi:10.1037/aca0000050

Huang F, Tang S, Hu Z. Unconditional perseveration of the short-term mental set in chunk decomposition .  Front Psychol . 2018;9:2568. doi:10.3389/fpsyg.2018.02568

National Alliance on Mental Illness. Warning signs and symptoms .

Mayer RE. Thinking, problem solving, cognition, 2nd ed .

Schooler JW, Ohlsson S, Brooks K. Thoughts beyond words: When language overshadows insight. J Experiment Psychol: General . 1993;122:166-183. doi:10.1037/0096-3445.2.166

By Kendra Cherry, MSEd

Compound improved Harris hawks optimization for global and engineering optimization

  • Published: 24 April 2024

Chengtian Ouyang, Chang Liao, Donglin Zhu, Yangyang Zheng, Changjun Zhou & Chengye Zou

Meta-heuristic algorithms are widely used in practice because of their high search speed and strong generalization ability: given a set of defined rules, they can discover a good strategy for complex, unstructured and diverse optimization problems and improve efficiency. To address the low convergence accuracy of the traditional Harris hawks optimization algorithm and its tendency to fall into local optima, a compound improved Harris hawks optimization algorithm (CIHHO) is proposed. First, an environmental factor is introduced to dynamically adjust the early circling (exploration) and later attacking (exploitation) phases and so regulate the energy of the Harris hawks. Second, the Versoria function is used to modify the random jump strength and raise the algorithm's ability to exploit the local search space. Third, an adjustment factor for the Levy flight function reduces the disturbance caused by Levy flights, which helps the search escape local optima after entering the exploitation phase, and random white noise is introduced to reduce the step size and improve accuracy. The performance of CIHHO is analysed on the CEC 2017 test function suite: it is first compared with HHO, HHO_JOS, LHHO, LMHHO and NCHHO; its performance on unimodal, multimodal, hybrid and composition functions is then compared with seven further improved algorithms; and finally ablation experiments are carried out. The convergence values of the resulting iteration curves improve on those of the compared algorithms, verifying the generality of CIHHO for optimization problems of different dimensions. Applying CIHHO to three different engineering problems, the minimum-cost results show that the algorithm has certain advantages in handling such search-space problems.
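For orientation, the sketch below shows two standard ingredients of the baseline Harris hawks optimizer that the abstract refers to: the escaping-energy schedule that switches the search between exploration and exploitation, and a Levy-flight step generated with Mantegna's method. It illustrates the baseline components only and is not an implementation of the CIHHO modifications (the environmental factor, the Versoria-based jump strength, and the white-noise step are not reproduced here).

```python
import math
import random

def escaping_energy(t, T):
    """Baseline HHO schedule E = 2*E0*(1 - t/T), with E0 drawn uniformly from (-1, 1).
    |E| >= 1 is treated as exploration, |E| < 1 as exploitation."""
    E0 = 2 * random.random() - 1
    return 2 * E0 * (1 - t / T)

def levy_step(beta=1.5):
    """One Levy-flight step via Mantegna's method: occasional large, heavy-tailed jumps."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

T = 100
print([round(escaping_energy(t, T), 3) for t in (0, 50, 99)])  # magnitude shrinks over iterations
print(levy_step())
```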



Data availability

All data for this study are available from the corresponding author.

https://github.com/AndreasGuo/CIHHO.git


Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 62272418, 62102058) and the Basic Public Welfare Research Program of Zhejiang Province (No. LGG18E050011).

Author information

Authors and affiliations

School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, 341000, China

Chengtian Ouyang & Chang Liao

School of Computer Science and Technology, Zhejiang Normal University, Jinhua, 321004, China

Donglin Zhu, Yangyang Zheng & Changjun Zhou

College of Information Science and Engineering, Yanshan University, Qinhuangdao, 066004, China

Chengye Zou


Contributions

C.O.: Conceptualization, Methodology, Data curation, Writing - original draft preparation, Funding acquisition. C.L.: Conceptualization, Methodology, Software, Data curation, Writing - original draft preparation. D.Z.: Visualization, Investigation. Y.Z.: Software, Methodology. C.Z.: Conceptualization, Supervision, Funding acquisition. C.Z.: Supervision, Visualization. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Changjun Zhou or Chengye Zou.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.


See Tables  7 , 8 , 9 and 10 .

See Figs. 10 , 11 and 12 .

Subject to: P = 6000 lb; L_s = 14; E = 30 × 10^6 psi.

With bounds:

0.1 ≤ x_{0,1}, x_{0,4} ≤ 2; 0.1 ≤ x_{0,2}, x_{0,3} ≤ 10.

0 ≤ x_{1,1}, x_{1,2} ≤ 100; 10 ≤ x_{1,3}, x_{1,4} ≤ 200.

0 ≤ x_{2,1}, x_{2,2} ≤ 100.
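
As a minimal illustration of how box constraints of this kind are typically enforced inside a population-based optimizer, the sketch below clips a candidate solution back into the first group of bounds listed above. The variable ordering, the repair function name, and the clip-to-bounds rule are assumptions made for illustration; none of it is taken from the paper's code.

```python
import numpy as np

# Bounds transcribed from the first group above: 0.1 <= x1, x4 <= 2 and 0.1 <= x2, x3 <= 10.
# The variable ordering (x1, x2, x3, x4) is an assumption made for illustration.
LOWER = np.array([0.1, 0.1, 0.1, 0.1])
UPPER = np.array([2.0, 10.0, 10.0, 2.0])


def repair(candidate, lower=LOWER, upper=UPPER):
    """Project a candidate solution back into its box constraints.
    Population-based optimizers commonly apply a clip like this after each
    position update; random reset or reflection are equally common choices."""
    return np.clip(candidate, lower, upper)


print(repair(np.array([0.05, 12.0, 3.3, 1.7])))  # clipped to [0.1, 10.0, 3.3, 1.7]
```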


About this article

Ouyang, C., Liao, C., Zhu, D. et al. Compound improved Harris hawks optimization for global and engineering optimization. Cluster Comput (2024). https://doi.org/10.1007/s10586-024-04348-z


Received: 08 December 2023

Revised: 03 February 2024

Accepted: 07 February 2024

Published: 24 April 2024

DOI: https://doi.org/10.1007/s10586-024-04348-z


Keywords:

  • Compound improved Harris hawks optimization
  • Environmental factor
  • Versoria function
  • Levy flight


COMMENTS

  1. 8.2 Problem-Solving: Heuristics and Algorithms

    Algorithms. In contrast to heuristics, which can be thought of as problem-solving strategies based on educated guesses, algorithms are problem-solving strategies that use rules. Algorithms are generally a logical set of steps that, if applied correctly, should be accurate. For example, you could make a cake using heuristics — relying on your ... (See the code sketch after this list for a concrete algorithm-versus-heuristic comparison.)

  2. Thought

    A problem-solving heuristic is an informal, intuitive, speculative procedure that leads to a solution in some cases but not in others. The fact that the outcome of applying a heuristic is unpredictable means that the strategy can be either more or less effective than using an algorithm.

  3. 7.3 Problem-Solving

    A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A "rule of thumb" is an example of a heuristic.

  4. The Algorithm Problem Solving Approach in Psychology

    In psychology, one of these problem-solving approaches is known as an algorithm. While often thought of purely as a mathematical term, the same type of process can be followed in psychology to find the correct answer when solving a problem or making a decision. An algorithm is a defined set of step-by-step procedures that provides the correct ...

  5. Heuristics & Algorithms. Strategies for Modern Problem-Solving

    The applications will be diverse and far-reaching, from environmental conservation strategies using predictive algorithms to social sciences employing heuristic models to understand human behavior. Autonomous Systems and Robotics : The advancements in heuristics and algorithms will significantly enhance the capabilities of autonomous systems ...

  6. 7.3 Problem Solving

    Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used? A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman ...

  7. Heuristics & approximate solutions

    Lesson 3: Solving hard problems. Using heuristics. Undecidable problems. Solving hard problems. ... One way to come up with approximate answers to a problem is to use a heuristic, a technique that guides an algorithm to find good choices. When an algorithm uses a heuristic, it no longer needs to exhaustively search every possible solution, so ...

  8. Heuristics and Problem Solving

    In contrast to heuristics, algorithms are step-by-step procedures to solve well-defined classes of tasks (e.g., long division algorithm, ... there is a strong need to explore how the comprehensive approach to problem solving in which heuristic strategies are integrated can be appropriately implemented and upscaled in educational practice.

  9. 4 Main problem-solving strategies

    Problem-solving strategies. These are operators that a problem solver tries to move from A to B. There are several problem-solving strategies but the main ones are: Algorithms; Heuristics; Trial and error; Insight; 1. Algorithms. When you follow a step-by-step procedure to solve a problem or reach a goal, you're using an algorithm.

  10. Problem-Solving Strategies: Definition and 5 Techniques to Try

    In insight problem-solving, the cognitive processes that help you solve a problem happen outside your conscious awareness. 4. Working backward. Working backward is a problem-solving approach often ...

  11. How to use algorithms to solve everyday problems

    My approach to making algorithms compelling was focusing on comparisons. I take algorithms and put them in a scene from everyday life, such as matching socks from a pile, putting books on a shelf, remembering things, driving from one point to another, or cutting an onion. These activities can be mapped to one or more fundamental algorithms ...

  12. (PDF) Heuristics and Problem Solving

    Heuristics and Problem Solving: Definitions, Benefits, and Limitations. The term heuristic, from the Greek, means, "serving to find out or discover". (Todd and Gigerenzer, 2000, p. 738). In ...

  13. The Difference Between a Heuristic and an Algorithm

    The Difference Between a Heuristic and an Algorithm. 1. Introduction. In this tutorial, we'll discuss heuristics and algorithms, which are computer science concepts used in problem-solving, learning, and decision making. First, we'll give a detailed definition of each of the terms. Then we'll look at some examples.

  14. Heuristics In Psychology: Definition & Examples

    Psychologists refer to these efficient problem-solving techniques as heuristics. A heuristic in psychology is a mental shortcut or rule of thumb that simplifies decision-making and problem-solving. Heuristics often speed up the process of finding a satisfactory solution, but they can also lead to cognitive biases.

  15. Algorithms and Heuristics as Strategies of Problem Solving

    In this article, we will Explain Algorithms and Heuristics as Strategies of Problem Solving. Explain Algorithms and Heuristics as Strategies of Problem Solving. Even with all the necessary knowledge and skills, success in problem-solving is not guaranteed. In addition to knowledge and skills, problem-solving requires a general strategy.

  16. Heuristic Problem Solving: A comprehensive guide with 5 Examples

    The four stages of heuristics in problem solving are as follows: 1. Understanding the problem: Identifying and defining the problem is the first step in the problem-solving process. 2. Generating solutions: The second step is to generate as many solutions as possible.

  17. A brief history of heuristics: how did research on heuristics evolve

    As heuristic problem-solving has often been contrasted with algorithmic problem-solving—even by Simon and Newell —it is worth recalling that the very notion of 'algorithm' was clarified ...

  18. Psych Ch 8: Intelligence Flashcards

    Study with Quizlet and memorize flashcards containing terms like Define cognition, and compare algorithms, heuristics, and insight as problem-solving strategies., Explain how confirmation bias, heuristics, fixation, and overconfidence can interfere with problem solving. Describe the effects of framing, belief perseverance, and intuition on our judgments and decision making., Intelligence and more.

  19. Heuristics: Intelligent Search Strategies for Computer Problem Solving

    In particular, he defines these techniques as "strategies using readily accessible though loosely applicable information to control problem-solving processes in human beings and machine(s)1 (p. vii). As such, Pearl's heuristics that can be used in some cases but not in every case, where their utility in any particular case could be diminished ...

  20. Problem-Solving Strategies and Obstacles

    Problem-solving is a vital skill for coping with various challenges in life. This webpage explains the different strategies and obstacles that can affect how you solve problems, and offers tips on how to improve your problem-solving skills. Learn how to identify, analyze, and overcome problems with Verywell Mind.
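
To make the algorithm-versus-heuristic contrast in the entries above concrete, here is a small, self-contained Python sketch; the delivery stops and their coordinates are invented for illustration. It solves the same tiny routing problem twice: once with an exhaustive algorithm that checks every ordering and is guaranteed to find the shortest route, and once with a nearest-stop heuristic that is much faster but offers no such guarantee.

```python
from itertools import permutations
from math import dist

# Hypothetical delivery stops (coordinates invented for illustration).
STOPS = {"A": (0, 0), "B": (1, 0.5), "C": (3, 0), "D": (6, 0.5), "E": (-2, 0)}


def route_length(order):
    """Total straight-line length of a route visiting the stops in the given order."""
    pts = [STOPS[s] for s in order]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))


def exhaustive_algorithm(start="A"):
    """Algorithm: check every ordering of the remaining stops.
    Guaranteed to return the shortest route, but the work grows factorially."""
    rest = [s for s in STOPS if s != start]
    return min(((start,) + p for p in permutations(rest)), key=route_length)


def greedy_heuristic(start="A"):
    """Heuristic: always drive to the nearest unvisited stop.
    Fast and usually reasonable, but it can miss the best route."""
    route, remaining = [start], set(STOPS) - {start}
    while remaining:
        nearest = min(remaining, key=lambda s: dist(STOPS[route[-1]], STOPS[s]))
        route.append(nearest)
        remaining.remove(nearest)
    return tuple(route)


best = exhaustive_algorithm()
quick = greedy_heuristic()
print("algorithm (exhaustive):", best, round(route_length(best), 2))
print("heuristic (greedy)    :", quick, round(route_length(quick), 2))
```

On these five stops the greedy route comes out to roughly 14.2 units against roughly 10.1 for the exhaustive optimum, which is the trade-off the entries above describe: the heuristic trades guaranteed accuracy for speed and low effort.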
