
Ideas Made to Matter

## How to use algorithms to solve everyday problems

Kara Baskin

May 8, 2017

How can I navigate the grocery store quickly? Why doesn’t anyone like my Facebook status? How can I alphabetize my bookshelves in a hurry? Apple data visualizer and MIT System Design and Management graduate Ali Almossawi solves these common dilemmas and more in his new book, “Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier,” a quirky, illustrated guide to algorithmic thinking.

## For the uninitiated: What is an algorithm? And how can algorithms help us to think smarter?

An algorithm is a process with unambiguous steps that has a beginning and an end, and does something useful.

Algorithmic thinking is taking a step back and asking, “If it’s the case that algorithms are so useful in computing to achieve predictability, might they also be useful in everyday life, when it comes to, say, deciding between alternative ways of solving a problem or completing a task?” In all cases, we optimize for efficiency: We care about time or space.

Note the mention of “deciding between.” Computer scientists do that all the time, and I was convinced that the tools they use to evaluate competing algorithms would be of interest to a broad audience.

## Why did you write this book, and who can benefit from it?

All the books I came across that tried to introduce computer science involved coding. My approach to making algorithms compelling was focusing on comparisons. I take algorithms and put them in a scene from everyday life, such as matching socks from a pile, putting books on a shelf, remembering things, driving from one point to another, or cutting an onion. These activities can be mapped to one or more fundamental algorithms, which form the basis for the field of computing and have far-reaching applications and uses.

I wrote the book with two audiences in mind. One, anyone, be it a learner or an educator, who is interested in computer science and wants an engaging and lighthearted, but not a dumbed-down, introduction to the field. Two, anyone who is already familiar with the field and wants to experience a way of explaining some of the fundamental concepts in computer science differently than how they’re taught.

## I’m going to the grocery store and only have 15 minutes. What do I do?

Do you know what the grocery store looks like ahead of time? If you know what it looks like, it determines your list. How do you prioritize things on your list? Order the items in a way that allows you to avoid walking down the same aisles twice.

For me, the intriguing thing is that the grocery store is a scene from everyday life that I can use as a launch pad to talk about various related topics, like priority queues and graphs and hashing. For instance, what is the most efficient way for a machine to store a prioritized list, and what happens when the equivalent of you scratching an item from a list happens in the machine’s list? How is a store analogous to a graph (an abstraction in computer science and mathematics that defines how things are connected), and how is navigating the aisles in a store analogous to traversing a graph?
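The prioritized-list question has a direct analogue in Python’s standard library. A minimal sketch using `heapq` (the items and priorities are invented for illustration; popping the heap is the machine’s version of scratching the top item off your list):

```python
import heapq

# A shopping list where each item has a priority (lower number = grab first).
shopping = [(2, "bread"), (1, "milk"), (3, "coffee")]
heapq.heapify(shopping)                 # O(n): arrange the list as a binary heap

heapq.heappush(shopping, (0, "eggs"))   # add an urgent item in O(log n)

# Each pop removes the highest-priority (smallest-number) item in O(log n).
order = [heapq.heappop(shopping)[1] for _ in range(len(shopping))]
print(order)  # ['eggs', 'milk', 'bread', 'coffee']
```

The heap never keeps the whole list sorted; it maintains just enough structure to find the next item quickly, which is why removing an item is cheap.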

## Nobody follows me on Instagram. How do I get more followers?

The concept of links and networks, which I cover in Chapter 6, is relevant here. It’s much easier to get to people whom you might be interested in and who might be interested in you if you can start within the ball of links that connects those people, rather than starting at a random spot.

You mention Instagram: There, the hashtag is one way to enter that ball of links. Tag your photos, engage with users who tag their photos with the same hashtags, and you should be on your way to stardom.

## What are the secret ingredients of a successful Facebook post?

I’ve posted things on social media that have died a sad death and then posted the same thing at a later date that somehow did great. Again, if we think of it in terms that are relevant to algorithms, we’d say that the challenge with making something go viral is really getting that first spark. And to get that first spark, a person who is connected to the largest number of people who are likely to engage with that post needs to share it.

With [my first book], “Bad Arguments,” I spent a month pouring close to $5,000 into advertising for that project with moderate results. And then one science journalist with a large audience wrote about it, and the project took off and hasn’t stopped since.

## What problems do you wish you could solve via algorithm but can’t?

When we care about efficiency, thinking in terms of algorithms is useful. There are cases when that’s not the quality we want to optimize for — for instance, learning or love. I walk for several miles every day, all throughout the city, as I find it relaxing. I’ve never asked myself, “What’s the most efficient way I can traverse the streets of San Francisco?” It’s not relevant to my objective.

Algorithms are a great way of thinking about efficiency, but the question has to be, “What approach can you optimize for that objective?” That’s what worries me about self-help: Books give you a silver bullet for doing everything “right” but leave out all the nuances that make us different. What works for you might not work for me.

## Which companies use algorithms well?

When you read that the overwhelming majority of the shows that users of, say, Netflix, watch are due to Netflix’s recommendation engine, you know they’re doing something right.


## DEV Community

Posted on Nov 8, 2023 • Updated on Nov 9, 2023

## Introduction to Algorithms: What Every Beginner Should Know

Algorithms are the beating heart of computer science and programming. They are the step-by-step instructions that computers follow to solve problems and perform tasks. Whether you're a beginner or an aspiring programmer, understanding the fundamentals of algorithms is essential. In this blog post, we will introduce you to the world of algorithms, what they are, why they matter, and the key concepts every beginner should know.

## What is an Algorithm?

At its core, an algorithm is a finite set of well-defined instructions for solving a specific problem or task. These instructions are executed in a predetermined sequence, often designed to transform input data into a desired output.

Think of algorithms as recipes: just as a recipe tells you how to prepare a meal, an algorithm tells a computer how to perform a particular task. But instead of cooking a delicious dish, you're instructing a computer to perform a specific computation or solve a problem.
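Following the recipe analogy, a complete algorithm can be tiny. This hypothetical sketch takes a list of numbers as input and produces one output, the largest value:

```python
def find_largest(numbers):
    """A tiny algorithm: finite, unambiguous steps from input to output."""
    largest = numbers[0]      # start with the first item
    for n in numbers[1:]:     # examine each remaining item exactly once
        if n > largest:
            largest = n
    return largest            # the desired output

print(find_largest([3, 41, 7, 19]))  # 41
```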

## Why Do Algorithms Matter?

Algorithms play a pivotal role in the world of computing for several reasons:

- Efficiency: Well-designed algorithms can execute tasks quickly and use minimal resources, which is crucial in applications like search engines and real-time systems.
- Problem Solving: Algorithms provide structured approaches to problem-solving and help us tackle complex issues more effectively.
- Reusability: Once developed, algorithms can be used in various applications, promoting code reuse and reducing redundancy.
- Scalability: Algorithms are essential for handling large datasets and growing computational needs in modern software.

## Key Concepts in Algorithm Design

## 1. Input and Output

Input: Algorithms take input data as their starting point. This could be a list of numbers, a text document, or any other data structure.

Output: Algorithms produce a result or output based on the input. This could be a sorted list, a specific calculation, or a solution to a problem.

## 2. Correctness

An algorithm is considered correct if it produces the desired output for all valid inputs. Ensuring correctness is a fundamental aspect of algorithm design and testing.

## 3. Efficiency

Efficiency is a critical consideration in algorithm design. It's about finding an optimal way to solve a problem, which often involves minimizing the consumption of time and resources.

## 4. Scalability

Algorithms should be designed to handle input data of varying sizes efficiently. Scalable algorithms can adapt to growing data requirements without a significant decrease in performance.

## 5. Time Complexity and Space Complexity

Time complexity refers to the amount of time an algorithm takes to complete based on the size of the input. Space complexity concerns the amount of memory an algorithm requires. Understanding these complexities helps assess an algorithm's efficiency.
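To make the distinction concrete, here is a sketch that counts the comparisons two search algorithms make on the same input: linear search grows with n, binary search with log n. The data and the step-counting wrappers are illustrative, not from the post:

```python
def linear_search_steps(items, target):
    """Count comparisons a linear scan makes: O(n) time."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    """Count comparisons binary search makes on sorted input: O(log n) time."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1000))
print(linear_search_steps(data, 999))  # 1000 steps
print(binary_search_steps(data, 999))  # 10 steps
```

Both functions use only a constant amount of extra memory, so their space complexity is O(1) even though their time complexities differ sharply.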

## Examples of Common Algorithms

Sorting Algorithms: Algorithms like "Bubble Sort," "Quick Sort," and "Merge Sort" rearrange a list of items into a specific order.

Search Algorithms: Algorithms like "Binary Search" efficiently find a specific item in a sorted list.

Graph Algorithms: Algorithms like "Breadth-First Search" and "Dijkstra's Algorithm" navigate through networks and find optimal routes.

Dynamic Programming: Algorithms like the "Fibonacci Sequence" solver use a technique called dynamic programming to optimize problem-solving.
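As an illustration of the last item, here is one way to sketch a Fibonacci solver with memoization, a common dynamic-programming technique (using Python's `functools.lru_cache` as the memo table; without it, the same recursion takes exponential time):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Dynamic programming via memoization: each subproblem is solved once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025
```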

Algorithms are the building blocks of computer science and programming. While they might sound complex, they are at the heart of everything from internet search engines to your favorite social media platforms. As a beginner, understanding the basics of algorithms and their importance is a significant step in your coding journey.

As you dive into the world of algorithms, you'll discover that they can be both fascinating and challenging. Don't be discouraged by complex problems; instead, view them as opportunities to develop your problem-solving skills. Whether you're aiming to become a software developer, data scientist, or just a more proficient programmer, a solid grasp of algorithms is an invaluable asset. Happy coding! 🚀💡

Also, I have to tell you, this article was partially generated with the help of ChatGPT!!!



## 1: Algorithmic Problem Solving


- Harrison Njoroge
- African Virtual University

## Unit Objectives

Upon completion of this unit the learner should be able to:

- describe an algorithm
- explain the relationship between data and algorithm
- outline the characteristics of algorithms
- apply pseudo codes and flowcharts to represent algorithms

## Unit Introduction

This unit introduces learners to the data structures and algorithms course. It covers the different data structures and the algorithms that implement them in the computer. Applications of the different data structures are presented through example algorithms, which are not confined to a particular programming language.

- Data structure: the structural representation of logical relationships between elements of data
- Algorithm: a finite sequence of steps for accomplishing some computational task
- Pseudo code: an informal, high-level description of the operating principle of a computer program or other algorithm
- Flow chart: a diagrammatic representation that illustrates a solution model to a given problem

## Learning Activities

- 1.1: Activity 1 - Introduction to Algorithms and Problem Solving. In this learning activity, the learner is introduced to algorithms and to writing algorithms that solve everyday tasks. Examples of algorithms are provided, with specific application to everyday problems the learner is familiar with. The learner will see what an algorithm is, the process of developing a solution for a given task, and, finally, examples of applying algorithms.
- 1.2: Activity 2 - The Characteristics of an Algorithm. This section introduces the learner to the characteristics of algorithms. These characteristics make the learner aware of what is basic, present, and mandatory for any procedure to qualify as an algorithm, and of what to expect an algorithm to achieve. Key expectations are that an algorithm must be exact, terminate, be effective, and be general, among others.
- 1.3: Activity 3 - Using Pseudo-codes and Flowcharts to Represent Algorithms. The student will learn how to design an algorithm using either pseudo code or a flowchart. Pseudo code is a mixture of English-like statements, some mathematical notation, and selected keywords from a programming language. It is one of the tools used to design and develop the solution to a task or problem. Pseudo codes can represent the same thing in different ways; the emphasis is on clarity, not style.
- 1.4: Unit Summary. In this unit, you have seen what an algorithm is. Based on this knowledge, you should now be able to characterize an algorithm by stating its properties. We have explored different ways of representing an algorithm, such as human language, pseudo codes, and flowcharts. You should now be able to present solutions to problems in the form of an algorithm.
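As a taste of Activity 1.3, here is hypothetical pseudo code for summing the first n natural numbers, followed by a direct Python translation; the steps and the function name are invented for illustration:

```python
# Pseudo code:
#   Step 1: Start
#   Step 2: Read n
#   Step 3: Set total to 0
#   Step 4: For each i from 1 to n, add i to total
#   Step 5: Print total
#   Step 6: End

def sum_first_n(n):
    total = 0                      # Step 3
    for i in range(1, n + 1):      # Step 4: accumulate 1, 2, ..., n
        total += i
    return total

print(sum_first_n(100))  # 5050
```

Note how each pseudo code step maps to one line of the program; that one-to-one clarity, not any particular notation, is the point of pseudo code.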


## Unit 1: Algorithms

About this unit.

We've partnered with Dartmouth College professors Tom Cormen and Devin Balkcom to teach introductory computer science algorithms, including searching, sorting, recursion, and graph theory. Learn with a combination of articles, visualizations, quizzes, and coding challenges.

## Intro to algorithms

- What is an algorithm and why should you care?
- A guessing game
- Route-finding
- Discuss: Algorithms in your life

## Binary search

- Binary search
- Implementing binary search of an array
- Challenge: Binary search
- Running time of binary search
- Practice: Running time of binary search (5 questions)

## Asymptotic notation

- Asymptotic notation
- Big-θ (Big-Theta) notation
- Functions in asymptotic notation
- Big-O notation
- Big-Ω (Big-Omega) notation
- Practice: Comparing function growth (4 questions)
- Practice: Asymptotic notation (5 questions)

## Selection sort

- Sorting
- Challenge: Implement swap
- Selection sort pseudocode
- Challenge: Find minimum in subarray
- Challenge: Implement selection sort
- Analysis of selection sort
- Project: Selection sort visualizer

## Insertion sort

- Insertion sort
- Challenge: Implement insert
- Insertion sort pseudocode
- Challenge: Implement insertion sort
- Analysis of insertion sort

## Recursive algorithms

- Recursion
- The factorial function
- Challenge: Iterative factorial
- Recursive factorial
- Challenge: Recursive factorial
- Properties of recursive algorithms
- Using recursion to determine whether a word is a palindrome
- Challenge: Is a string a palindrome?
- Computing powers of a number
- Challenge: Recursive powers
- Multiple recursion with the Sierpinski gasket
- Improving efficiency of recursive functions
- Project: Recursive art

## Towers of Hanoi

- Towers of Hanoi
- Towers of Hanoi, continued
- Challenge: Solve Hanoi recursively
- Practice: Move three disks in Towers of Hanoi (3 questions)

## Merge sort

- Divide and conquer algorithms
- Overview of merge sort
- Challenge: Implement merge sort
- Linear-time merging
- Challenge: Implement merge
- Analysis of merge sort

## Quick sort

- Overview of quicksort
- Challenge: Implement quicksort
- Linear-time partitioning
- Challenge: Implement partition
- Analysis of quicksort

## Graph representation

- Describing graphs
- Representing graphs
- Challenge: Store a graph
- Practice: Describing graphs (6 questions)
- Practice: Representing graphs (5 questions)

## Breadth-first search

- Breadth-first search and its uses
- The breadth-first search algorithm
- Challenge: Implement breadth-first search
- Analysis of breadth-first search

## Further learning

- Where to go from here


## How to Use Algorithms to Solve Problems?


An algorithm is a process or set of rules that must be followed to complete a particular task; it is basically a step-by-step procedure. Every task follows some algorithm, from making a cup of tea to building highly scalable software. An algorithm divides a task into several parts, and once we draw up an algorithm for a task, the task becomes easier to complete.

Algorithms are used to:

- develop a framework for instructing computers;
- introduce notation for the basic functions that perform basic tasks;
- define and describe a big problem in small parts, so that it is easier to execute.

## Characteristics of an Algorithm

- An algorithm should be clearly defined.
- An algorithm should produce at least one output.
- An algorithm should have zero or more inputs.
- An algorithm should execute and finish in a finite number of steps.
- An algorithm should be basic and easy to perform.
- Each step should start with a specific label, such as “Step 1”.
- There must be “Start” as the first step and “End” as the last step of the algorithm.

Let’s take an example: making a cup of tea.

Step 1: Start

Step 2: Take some water in a bowl.

Step 3: Put the water on a gas burner.

Step 4: Turn on the gas burner.

Step 5: Wait until the water boils.

Step 6: Add tea leaves to the water according to the requirement.

Step 7: Wait again until the water takes on the color of tea.

Step 8: Add sugar according to taste.

Step 9: Wait again until the sugar dissolves.

Step 10: Turn off the gas burner and serve the tea in cups with biscuits.

Step 11: End

That is an algorithm for making a cup of tea; the same approach applies to computer science problems.

There are some basic steps to making an algorithm:

- Start – Start the algorithm.
- Input – Take the input values on which the algorithm will operate.
- Conditions – Apply conditions to the inputs to get the desired output.
- Output – Print the outputs.
- End – End the execution.

Let’s take some examples of algorithms for computer science problems.

Example 1. Swap two numbers with a third variable

Step 1: Start

Step 2: Take 2 numbers as input.

Step 3: Declare another variable, “temp”.

Step 4: Store the first variable in “temp”.

Step 5: Store the second variable in the first variable.

Step 6: Store “temp” in the second variable.

Step 7: Print the first and second variables.

Step 8: End
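The same steps can be sketched in Python (the input values are chosen for illustration):

```python
a, b = 10, 20   # Step 2: take two numbers as input
temp = a        # Step 4: store the first variable in "temp"
a = b           # Step 5: store the second variable in the first
b = temp        # Step 6: store "temp" in the second variable
print(a, b)     # Step 7: prints 20 10
```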

Example 2. Find the area of a rectangle

Step 1: Start

Step 2: Take the height and width of the rectangle as input.

Step 3: Declare a variable, “area”.

Step 4: Multiply the height and width.

Step 5: Store the product in “area” (area = height × width).

Step 6: Print “area”.

Step 7: End
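A sketch of the same steps in Python, with sample inputs chosen for illustration:

```python
height, width = 4, 7    # Step 2: take height and width as input
area = height * width   # Steps 4-5: area = height x width
print(area)             # Step 6: prints 28
```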

Example 3. Find the greatest between 3 numbers.

Step 1: Start

Step 2: Take 3 numbers as input, say A, B, and C.

Step 3: Check if (A > B and A > C).

Step 4: Then A is the greatest.

Step 5: Print A.

Step 6: Else check if (B > A and B > C).

Step 7: Then B is the greatest.

Step 8: Print B.

Step 9: Else C is the greatest.

Step 10: Print C.

Step 11: End
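The same branching can be sketched in Python; as in the steps above, ties fall through to C:

```python
def greatest(a, b, c):
    # Mirrors the checks in the steps above.
    if a > b and a > c:
        return a
    if b > a and b > c:
        return b
    return c

print(greatest(12, 45, 7))  # 45
```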

## Advantages of Algorithms

- An algorithm uses a definite procedure.
- It is easy to understand because it is a step-by-step definition.
- An algorithm is easy to debug if any error happens.
- It is not dependent on any programming language.
- It is easier for a programmer to convert into an actual program because the algorithm divides the problem into smaller parts.

## Disadvantages of Algorithms

- Writing an algorithm is time-consuming, and different algorithms have different time complexities.
- Large tasks are difficult to express as algorithms because their time complexity may be high, so programmers have to find an efficient way to solve them.
- Looping and branching are difficult to define in algorithms.



## Problem Solving and Algorithms

Learn a basic process for developing a solution to a problem. Nothing in this chapter is unique to using a computer to solve a problem. This process can be used to solve a wide variety of problems, including ones that have nothing to do with computers.

## Problems, Solutions, and Tools

I have a problem! I need to thank Aunt Kay for the birthday present she sent me. I could send a thank you note through the mail. I could call her on the telephone. I could send her an email message. I could drive to her house and thank her in person. In fact, there are many ways I could thank her, but that's not the point. The point is that I must decide how I want to solve the problem, and use the appropriate tool to implement (carry out) my plan. The postal service, the telephone, the internet, and my automobile are tools that I can use, but none of these actually solves my problem. In a similar way, a computer does not solve problems, it's just a tool that I can use to implement my plan for solving the problem.

Knowing that Aunt Kay appreciates creative and unusual things, I have decided to hire a singing messenger to deliver my thanks. In this context, the messenger is a tool, but one that needs instructions from me. I have to tell the messenger where Aunt Kay lives, what time I would like the message to be delivered, and what lyrics I want sung. A computer program is similar to my instructions to the messenger.

The story of Aunt Kay uses a familiar context to set the stage for a useful point of view concerning computers and computer programs. The following list summarizes the key aspects of this point of view.

A computer is a tool that can be used to implement a plan for solving a problem.

A computer program is a set of instructions for a computer. These instructions describe the steps that the computer must follow to implement a plan.

An algorithm is a plan for solving a problem.

A person must design an algorithm.

A person must translate an algorithm into a computer program.

This point of view sets the stage for a process that we will use to develop solutions to Jeroo problems. The basic process is important because it can be used to solve a wide variety of problems, including ones where the solution will be written in some other programming language.

## An Algorithm Development Process

Every problem solution starts with a plan. That plan is called an algorithm.

There are many ways to write an algorithm. Some are very informal, some are quite formal and mathematical in nature, and some are quite graphical. The instructions for connecting a DVD player to a television are an algorithm. A mathematical formula such as πr² is a special case of an algorithm. The form is not particularly important as long as it provides a good way to describe and check the logic of the plan.

The development of an algorithm (a plan) is a key step in solving a problem. Once we have an algorithm, we can translate it into a computer program in some programming language. Our algorithm development process consists of five major steps.

1. Obtain a description of the problem.
2. Analyze the problem.
3. Develop a high-level algorithm.
4. Refine the algorithm by adding more detail.
5. Review the algorithm.

## Step 1: Obtain a description of the problem.

This step is much more difficult than it appears. In the following discussion, the word client refers to someone who wants to find a solution to a problem, and the word developer refers to someone who finds a way to solve the problem. The developer must create an algorithm that will solve the client's problem.

The client is responsible for creating a description of the problem, but this is often the weakest part of the process. It's quite common for a problem description to suffer from one or more of the following types of defects: (1) the description relies on unstated assumptions, (2) the description is ambiguous, (3) the description is incomplete, or (4) the description has internal contradictions. These defects are seldom due to carelessness by the client. Instead, they are due to the fact that natural languages (English, French, Korean, etc.) are rather imprecise. Part of the developer's responsibility is to identify defects in the description of a problem, and to work with the client to remedy those defects.

## Step 2: Analyze the problem.

The purpose of this step is to determine both the starting and ending points for solving the problem. This process is analogous to a mathematician determining what is given and what must be proven. A good problem description makes it easier to perform this step.

When determining the starting point, we should start by seeking answers to the following questions:

What data are available?

Where is that data?

What formulas pertain to the problem?

What rules exist for working with the data?

What relationships exist among the data values?

When determining the ending point, we need to describe the characteristics of a solution. In other words, how will we know when we're done? Asking the following questions often helps to determine the ending point.

What new facts will we have?

What items will have changed?

What changes will have been made to those items?

What things will no longer exist?

## Step 3: Develop a high-level algorithm.

An algorithm is a plan for solving a problem, but plans come in several levels of detail. It's usually better to start with a high-level algorithm that includes the major part of a solution, but leaves the details until later. We can use an everyday example to demonstrate a high-level algorithm.

Problem: I need to send a birthday card to my brother, Mark.

Analysis: I don't have a card. I prefer to buy a card rather than make one myself.

High-level algorithm:

- Go to a store that sells greeting cards
- Select a card
- Purchase the card
- Mail the card

This algorithm is satisfactory for daily use, but it lacks details that would have to be added were a computer to carry out the solution. These details include answers to questions such as the following.

"Which store will I visit?"

"How will I get there: walk, drive, ride my bicycle, take the bus?"

"What kind of card does Mark like: humorous, sentimental, risqué?"

These kinds of details are considered in the next step of our process.

A high-level algorithm shows the major steps that need to be followed to solve a problem. Now we need to add details to these steps, but how much detail should we add? Unfortunately, the answer to this question depends on the situation. We have to consider who (or what) is going to implement the algorithm and how much that person (or thing) already knows how to do. If someone is going to purchase Mark's birthday card on my behalf, my instructions have to be adapted to whether or not that person is familiar with the stores in the community and how well the purchaser knows my brother's taste in greeting cards.

When our goal is to develop algorithms that will lead to computer programs, we need to consider the capabilities of the computer and provide enough detail so that someone else could use our algorithm to write a computer program that follows the steps in our algorithm. As with the birthday card problem, we need to adjust the level of detail to match the ability of the programmer. When in doubt, or when you are learning, it is better to have too much detail than to have too little.

Most of our examples will move from a high-level to a detailed algorithm in a single step, but this is not always reasonable. For larger, more complex problems, it is common to go through this process several times, developing intermediate level algorithms as we go. Each time, we add more detail to the previous algorithm, stopping when we see no benefit to further refinement. This technique of gradually working from a high-level to a detailed algorithm is often called stepwise refinement .
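In code, stepwise refinement amounts to writing the high-level algorithm as a function whose steps are calls to lower-level functions, each of which is refined in turn. The Python sketch below is hypothetical; every function name and detail is invented purely to illustrate one level of refinement of the birthday-card algorithm:

```python
def send_birthday_card():
    # High-level algorithm: the four major steps, in order.
    go_to_store()
    card = select_card()
    purchase(card)
    mail(card)

# One level of refinement: each step becomes its own function, where the
# details ("which store?", "what kind of card?") are decided.
def go_to_store():
    print("Walk to the card shop")

def select_card():
    return "humorous card"

def purchase(card):
    print("Pay for the " + card)

def mail(card):
    print("Address, stamp, and mail the " + card)

send_birthday_card()
```

Each of these functions could be refined again, stopping when the steps are simple enough for the implementer to carry out directly.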

The final step is to review the algorithm. What are we looking for? First, we need to work through the algorithm step by step to determine whether or not it will solve the original problem. Once we are satisfied that the algorithm does provide a solution to the problem, we start to look for other things. The following questions are typical of ones that should be asked whenever we review an algorithm. Asking these questions and seeking their answers is a good way to develop skills that can be applied to the next problem.

Does this algorithm solve a very specific problem or does it solve a more general problem ? If it solves a very specific problem, should it be generalized?

For example, an algorithm that computes the area of a circle having radius 5.2 meters (formula π × 5.2²) solves a very specific problem, but an algorithm that computes the area of any circle (formula π × R²) solves a more general problem.
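In code, generalizing usually means replacing a constant with a parameter. The following Python sketch (an illustration, not part of the original text) shows both versions side by side:

```python
import math

def area_of_my_circle():
    # Specific algorithm: only handles the one circle of radius 5.2 meters.
    return math.pi * 5.2 ** 2

def circle_area(radius):
    # General algorithm: the constant 5.2 becomes the parameter `radius`.
    return math.pi * radius ** 2

print(round(area_of_my_circle(), 2))  # → 84.95
print(round(circle_area(5.2), 2))     # → 84.95 (same answer from the general version)
```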

Can this algorithm be simplified ?

One formula for computing the perimeter of a rectangle is:

length + width + length + width

A simpler formula would be:

2.0 * ( length + width )
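Both formulas give the same result for every rectangle, so the simplification is safe; a quick Python check (an illustrative sketch, not part of the original text) confirms it:

```python
def perimeter_long(length, width):
    # Original formula: add each of the four sides individually.
    return length + width + length + width

def perimeter_simplified(length, width):
    # Simplified formula: the same sum, factored.
    return 2.0 * (length + width)

print(perimeter_long(3, 4), perimeter_simplified(3, 4))  # → 14 14.0
```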

Is this solution similar to the solution to another problem? How are they alike? How are they different?

For example, consider the following two formulae:

Rectangle area = length * width

Triangle area = 0.5 * base * height

Similarities: Each computes an area. Each multiplies two measurements.

Differences: Different measurements are used. The triangle formula contains 0.5.

Hypothesis: Perhaps every area formula involves multiplying two measurements.

## Example 4.1: Pick and Plant

This section contains an extended example that demonstrates the algorithm development process. To complete the algorithm, we need to know that every Jeroo can hop forward, turn left and right, pick a flower from its current location, and plant a flower at its current location.

## Problem Statement (Step 1)

A Jeroo starts at (0, 0) facing East with no flowers in its pouch. There is a flower at location (3, 0). Write a program that directs the Jeroo to pick the flower and plant it at location (3, 2). After planting the flower, the Jeroo should hop one space East and stop. There are no other nets, flowers, or Jeroos on the island.

## Analysis of the Problem (Step 2)

The flower is exactly three spaces ahead of the Jeroo.

The flower is to be planted exactly two spaces South of its current location.

The Jeroo is to finish facing East one space East of the planted flower.

There are no nets to worry about.

## High-level Algorithm (Step 3)

Let's name the Jeroo Bobby. Bobby should do the following:

1. Get the flower
2. Put the flower
3. Hop East

## Detailed Algorithm (Step 4)

1. Get the flower
   - Hop 3 times
   - Pick the flower
2. Put the flower
   - Turn right
   - Hop 2 times
   - Plant a flower
3. Hop East
   - Turn left
   - Hop once
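The Java builds for this example are not reproduced here. As a rough stand-in, the detailed algorithm can be traced in a tiny Python simulation of the island; the `Jeroo` class below, its method names, and its (x East, y South) coordinate convention are invented simplifications for illustration, not the real Jeroo API:

```python
DIRS = {"E": (1, 0), "S": (0, 1), "W": (-1, 0), "N": (0, -1)}
ORDER = ["N", "E", "S", "W"]  # clockwise, so turning right steps forward in this list

class Jeroo:
    def __init__(self, x, y, facing, flowers, island):
        self.x, self.y, self.facing, self.flowers = x, y, facing, flowers
        self.island = island  # set of (x, y) flower locations

    def hop(self, n=1):
        dx, dy = DIRS[self.facing]
        self.x, self.y = self.x + n * dx, self.y + n * dy

    def turn_right(self):
        self.facing = ORDER[(ORDER.index(self.facing) + 1) % 4]

    def turn_left(self):
        self.facing = ORDER[(ORDER.index(self.facing) - 1) % 4]

    def pick(self):
        self.island.discard((self.x, self.y))
        self.flowers += 1

    def plant(self):
        self.island.add((self.x, self.y))
        self.flowers -= 1

# The detailed algorithm, step for step:
bobby = Jeroo(0, 0, "E", flowers=0, island={(3, 0)})
bobby.hop(3)        # Get the flower: hop 3 times
bobby.pick()        #                 pick the flower
bobby.turn_right()  # Put the flower: turn right
bobby.hop(2)        #                 hop 2 times
bobby.plant()       #                 plant a flower
bobby.turn_left()   # Hop East:       turn left
bobby.hop()         #                 hop once

print((bobby.x, bobby.y), bobby.facing, sorted(bobby.island))
# → (4, 2) E [(3, 2)]
```

Tracing the simulation confirms the analysis in Step 5: Bobby ends one space East of the planted flower, facing East, with an empty pouch.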

## Review the Algorithm (Step 5)

The high-level algorithm partitioned the problem into three rather easy subproblems. This seems like a good technique.

This algorithm solves a very specific problem because the Jeroo and the flower are in very specific locations.

This algorithm is actually a solution to a slightly more general problem in which the Jeroo starts anywhere, and the flower is 3 spaces directly ahead of the Jeroo.

## Java Code for "Pick and Plant"

A good programmer doesn't write a program all at once. Instead, the programmer will write and test the program in a series of builds. Each build adds to the previous one. The high-level algorithm will guide us in this process.

## FIRST BUILD

To see this solution in action, create a new Greenfoot4Sofia scenario and use the Edit Palettes Jeroo menu command to make the Jeroo classes visible. Right-click on the Island class and create a new subclass with the name of your choice. This subclass will hold your new code.

The recommended first build contains three things:

The main method (here myProgram() in your island subclass).

Declaration and instantiation of every Jeroo that will be used.

The high-level algorithm in the form of comments.

The instantiation at the beginning of myProgram() places bobby at (0, 0), facing East, with no flowers.

Once the first build is working correctly, we can proceed to the others. In this case, each build will correspond to one step in the high-level algorithm. It may seem like a lot of work to use four builds for such a simple program, but doing so helps establish habits that will become invaluable as the programs become more complex.

## SECOND BUILD

This build adds the logic to "get the flower", which in the detailed algorithm (step 4 above) consists of hopping 3 times and then picking the flower. The new code is indicated by comments that wouldn't appear in the original (they are just here to call attention to the additions). The blank lines help show the organization of the logic.

By taking a moment to run the work so far, you can confirm whether or not this step in the planned algorithm works as expected.

## THIRD BUILD

This build adds the logic to "put the flower". New code is indicated by the comments that are provided here to mark the additions.

## FOURTH BUILD (final)

## Example 4.2: Replace Net with Flower

This section contains a second example that demonstrates the algorithm development process.

There are two Jeroos. One Jeroo starts at (0, 0) facing North with one flower in its pouch. The second starts at (0, 2) facing East with one flower in its pouch. There is a net at location (3, 2). Write a program that directs the first Jeroo to give its flower to the second one. After receiving the flower, the second Jeroo must disable the net, and plant a flower in its place. After planting the flower, the Jeroo must turn and face South. There are no other nets, flowers, or Jeroos on the island.

Jeroo_2 is exactly two spaces behind Jeroo_1.

The only net is exactly three spaces ahead of Jeroo_2.

Each Jeroo has exactly one flower.

Jeroo_2 will have two flowers after receiving one from Jeroo_1. One flower must be used to disable the net. The other flower must be planted at the location of the net, i.e. (3, 2).

Jeroo_1 will finish at (0, 1) facing South.

Jeroo_2 is to finish at (3, 2) facing South.

Each Jeroo will finish with 0 flowers in its pouch. One flower was used to disable the net, and the other was planted.

Let's name the first Jeroo Ann and the second one Andy.

Ann should do the following:

1. Find Andy (but don't collide with him)
2. Give a flower to Andy (he will be straight ahead)

After receiving the flower, Andy should do the following:

1. Find the net (but don't hop onto it)
2. Disable the net
3. Plant a flower at the location of the net
4. Face South

Ann should do the following:

1. Find Andy
   - Turn around (either left or right twice)
   - Hop (to location (0, 1))
2. Give a flower to Andy
   - Give ahead

Now Andy should do the following:

1. Find the net
   - Hop twice (to location (2, 2))
2. Disable the net
   - Toss
3. Plant a flower at the location of the net
   - Hop (to location (3, 2))
   - Plant a flower
4. Face South
   - Turn right
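As with Example 4.1, the detailed algorithm can be traced in a small Python simulation; the two-Jeroo world below, its method names (`give`, `toss`), and its (x East, y South) coordinate convention are invented simplifications of the real Jeroo API, used only to check the plan:

```python
DIRS = {"E": (1, 0), "S": (0, 1), "W": (-1, 0), "N": (0, -1)}
ORDER = ["N", "E", "S", "W"]  # clockwise

class Jeroo:
    def __init__(self, x, y, facing, flowers, world):
        self.x, self.y, self.facing, self.flowers = x, y, facing, flowers
        self.world = world  # shared state: 'flowers' set, 'nets' set, 'jeroos' list

    def ahead(self):
        dx, dy = DIRS[self.facing]
        return self.x + dx, self.y + dy

    def hop(self, n=1):
        for _ in range(n):
            self.x, self.y = self.ahead()

    def turn(self, way):
        step = 1 if way == "RIGHT" else -1
        self.facing = ORDER[(ORDER.index(self.facing) + step) % 4]

    def give(self):  # hand one flower to the jeroo directly ahead
        target = next(j for j in self.world["jeroos"] if (j.x, j.y) == self.ahead())
        self.flowers -= 1
        target.flowers += 1

    def toss(self):  # throw a flower one space ahead, disabling any net there
        self.flowers -= 1
        self.world["nets"].discard(self.ahead())

    def plant(self):
        self.flowers -= 1
        self.world["flowers"].add((self.x, self.y))

world = {"flowers": set(), "nets": {(3, 2)}, "jeroos": []}
ann = Jeroo(0, 0, "N", 1, world)
andy = Jeroo(0, 2, "E", 1, world)
world["jeroos"] += [ann, andy]

# Ann: find Andy and give him her flower.
ann.turn("RIGHT"); ann.turn("RIGHT")  # turn around, now facing South
ann.hop()                             # to (0, 1); Andy is straight ahead
ann.give()

# Andy: disable the net, plant a flower in its place, face South.
andy.hop(2)                           # to (2, 2); the net is one space ahead
andy.toss()                           # disable the net at (3, 2)
andy.hop()                            # onto (3, 2)
andy.plant()
andy.turn("RIGHT")                    # East -> South
```

Running the trace matches the analysis in Step 2: Ann finishes at (0, 1) facing South, Andy finishes at (3, 2) facing South, and both pouches are empty.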

The high-level algorithm helps manage the details.

This algorithm solves a very specific problem, but the specific locations are not important. The only thing that is important is the starting location of the Jeroos relative to one another and the location of the net relative to the second Jeroo's location and direction.

## Java Code for "Replace Net with Flower"

As before, the code should be written incrementally as a series of builds. Four builds will be suitable for this problem. As usual, the first build will contain the main method, the declaration and instantiation of the Jeroo objects, and the high-level algorithm in the form of comments. The second build will have Ann give her flower to Andy. The third build will have Andy locate and disable the net. In the final build, Andy will plant the flower and turn to face South.

This build creates the main method, instantiates the Jeroos, and outlines the high-level algorithm. In this example, the main method would be myProgram() contained within a subclass of Island .

This build adds the logic for Ann to locate Andy and give him a flower.

This build adds the logic for Andy to locate and disable the net.

This build adds the logic for Andy to place a flower at (3, 2) and turn South.


## What is Problem Solving Algorithm?, Steps, Representation

- Post author: Disha Singh
- Post published: 6 June 2021
- Post category: Computer Science

Table of Contents

- 1 What is Problem Solving Algorithm?
- 2 Definition of Problem Solving Algorithm
- 3 Steps for Problem Solving
- 3.1 Analysing the Problem
- 3.2 Developing an Algorithm
- 3.3 Coding
- 3.4 Testing and Debugging
- 4 Representation of Algorithms
- 4.1 Flowchart
- 4.2 Pseudo code

## What is Problem Solving Algorithm?

Computers are used for solving various day-to-day problems, and thus problem solving is an essential skill that a computer science student should have. It is pertinent to mention that computers themselves cannot solve a problem; we must give them precise, step-by-step instructions for solving it.

Thus, the success of a computer in solving a problem depends on how correctly and precisely we define the problem, design a solution (algorithm) and implement the solution (program) using a programming language.

Thus, problem solving is the process of identifying a problem, developing an algorithm for the identified problem and finally implementing the algorithm to develop a computer program.

## Definition of Problem Solving Algorithm

Simply put, a problem solving algorithm is a finite sequence of precise, step-by-step instructions which, when carried out, solves the given problem.

## Steps for Problem Solving

When problems are straightforward, we can easily find the solution, but a complex problem requires a methodical approach. In other words, we have to apply problem solving techniques.

Problem solving begins with the precise identification of the problem and ends with a complete working solution in terms of a program or software. The key steps required for solving a problem using a computer are described below.

For example: Suppose while driving, a vehicle starts making a strange noise. We might not know how to solve the problem right away. First, we need to identify where the noise is coming from. In case the problem cannot be solved by us, then we need to take the vehicle to a mechanic.

The mechanic will analyse the problem to identify the source of the noise, make a plan about the work to be done and finally repair the vehicle in order to remove the noise. From the example, it is explicit that, finding the solution to a problem might consist of multiple steps.

The steps for problem solving are: analysing the problem, developing an algorithm, coding, and testing and debugging. Each is described below.

## Analysing the Problem

It is important to clearly understand a problem before we begin to find the solution for it. If we are not clear as to what is to be solved, we may end up developing a program which may not solve our purpose.

Thus, we need to read and analyse the problem statement carefully in order to list the principal components of the problem and decide the core functionalities that our solution should have. By analysing a problem, we would be able to figure out what are the inputs that our program should accept and the outputs that it should produce.

## Developing an Algorithm

It is essential to devise a solution before writing the program code for a given problem. The solution is represented in natural language and is called an algorithm. We can imagine an algorithm like a very well-written recipe for a dish, with clearly defined steps that, if followed, result in the finished dish.

We start with a tentative solution plan and keep on refining the algorithm until the algorithm is able to capture all the aspects of the desired solution. For a given problem, more than one algorithm is possible and we have to select the most suitable solution.

## Coding

After finalising the algorithm, we need to convert the algorithm into a format which can be understood by the computer to generate the desired solution. Different high-level programming languages can be used for writing a program. It is equally important to record the details of the coding procedures followed and document the solution. This is helpful when revisiting the programs at a later stage.

## Testing and Debugging

The program created should be tested on various parameters. The program should meet the requirements of the user. It must respond within the expected time. It should generate correct output for all possible inputs. In the presence of syntactical errors, no output will be obtained. In case the output generated is incorrect, then the program should be checked for logical errors.

The software industry follows standardised testing methods like unit or component testing, integration testing, system testing, and acceptance testing while developing complex applications. This is to ensure that the software meets all the business and technical requirements and works as expected.
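The idea behind unit testing can be sketched in a few lines of Python (the `time_to_seconds` function is an invented example, not from the text): each assertion pairs a known input with its expected output, and a failing assertion exposes a logical error before the program is trusted on all inputs.

```python
def time_to_seconds(hours, minutes, seconds):
    """Convert an h:m:s duration to a total number of seconds."""
    return hours * 3600 + minutes * 60 + seconds

# Each assertion is one unit-test case; a failing case signals a logical error.
assert time_to_seconds(0, 0, 0) == 0
assert time_to_seconds(1, 0, 0) == 3600
assert time_to_seconds(2, 30, 15) == 9015
print("all tests passed")
```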

The errors or defects found in the testing phases are debugged or rectified and the program is again tested. This continues till all the errors are removed from the program. Once the software application has been developed, tested and delivered to the user, still problems in terms of functioning can come up and need to be resolved from time to time.

The maintenance of the solution, thus, involves fixing the problems faced by the user, answering the queries of the user and even serving the request for addition or modification of features.

## Representation of Algorithms

Using their algorithmic thinking skills, the software designers or programmers analyse the problem and identify the logical steps that need to be followed to reach a solution. Once the steps are identified, the need is to write down these steps along with the required input and desired output.

There are two common methods of representing an algorithm: flowchart and pseudocode. Either method can be used while keeping in mind the following:

- It showcases the logic of the problem solution, excluding any implementational details.
- It clearly reveals the flow of control during execution of the program.

A flowchart is a visual or graphical representation of an algorithm: a diagram made up of boxes, diamonds and other shapes, connected by arrows. Each shape represents a step of the solution process and each arrow represents the order of, or link among, the steps.

In other words, a flowchart is a step-by-step diagrammatic representation of the logic paths used to solve a given problem. Flowcharts are pictorial representations of the method to be used to solve a problem, and they help a great deal in analysing the problem and planning its solution in a systematic and orderly manner. A flowchart, when translated into a proper computer language, results in a complete program.

## Advantages of Flowcharts:

- The flowchart shows the logic of a problem in pictorial fashion, which facilitates easier checking of an algorithm.
- The flowchart is a good means of communication with other users. It is also a compact means of recording an algorithmic solution to a problem.
- The flowchart allows the problem solver to break the problem into parts. These parts can be connected to make a master chart.
- The flowchart is a permanent record of the solution which can be consulted at a later time.


## Pseudo code

Pseudocode is neither an algorithm nor a program. It is an abstract form of a program, consisting of English-like statements that describe the specific operations to be performed. It is written for an algorithm and does not use any graphical representation.

In pseudocode, the program is represented in terms of words and phrases, but the syntax of a programming language is not strictly followed.
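For example, pseudocode for averaging a list of marks, followed by a direct Python translation (an invented illustration; the names are arbitrary):

```python
# Pseudocode (English-like, no strict syntax):
#
#   SET total TO 0
#   FOR EACH mark IN marks
#       ADD mark TO total
#   SET average TO total DIVIDED BY the number of marks
#   DISPLAY average
#
# The same steps in Python:
def average(marks):
    total = 0
    for mark in marks:
        total += mark
    return total / len(marks)

print(average([70, 80, 90]))  # → 80.0
```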

## Advantages of Pseudocode

- Before writing code in a high-level language, a pseudocode version of a program helps in representing its basic functionality.
- By writing the code first in a human-readable language, the programmer safeguards against leaving out any important step.
- For non-programmers, actual programs are difficult to read and understand, but pseudocode helps them review the steps and confirm that the proposed implementation will achieve the desired output.


## Overview of the Problem-Solving Mental Process

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Rachel Goldman, PhD FTOS, is a licensed psychologist, clinical assistant professor, speaker, wellness expert specializing in eating behaviors, stress management, and health behavior change.

- Identify the Problem
- Define the Problem
- Form a Strategy
- Organize Information
- Allocate Resources
- Monitor Progress
- Evaluate the Results

## Frequently Asked Questions

Problem-solving is a mental process that involves discovering, analyzing, and solving problems. The ultimate goal of problem-solving is to overcome obstacles and find a solution that best resolves the issue.

The best strategy for solving a problem depends largely on the unique situation. In some cases, people are better off learning everything they can about the issue and then using factual knowledge to come up with a solution. In other instances, creativity and insight are the best options.

It is not necessary to follow problem-solving steps sequentially; it is common to skip steps or even go back through steps multiple times until the desired solution is reached.

In order to correctly solve a problem, it is often important to follow a series of steps. Researchers sometimes refer to this as the problem-solving cycle. While this cycle is portrayed sequentially, people rarely follow a rigid series of steps to find a solution.

The following steps include developing strategies and organizing knowledge.

## 1. Identifying the Problem

While it may seem like an obvious step, identifying the problem is not always as simple as it sounds. In some cases, people might mistakenly identify the wrong source of a problem, which will make attempts to solve it inefficient or even useless.

Some strategies that you might use to figure out the source of a problem include:

- Asking questions about the problem
- Breaking the problem down into smaller pieces
- Looking at the problem from different perspectives
- Conducting research to figure out what relationships exist between different variables

## 2. Defining the Problem

After the problem has been identified, it is important to fully define the problem so that it can be solved. You can define a problem by operationally defining each aspect of it and setting goals for which aspects of the problem you will address.

At this point, you should focus on figuring out which aspects of the problem are facts and which are opinions. State the problem clearly and identify the scope of the solution.

## 3. Forming a Strategy

Once the problem has been defined, it is time to start brainstorming potential solutions. This step usually involves generating as many ideas as possible without judging their quality. Once several possibilities have been generated, they can be evaluated and narrowed down.

The next step is to develop a strategy to solve the problem. The approach used will vary depending upon the situation and the individual's unique preferences. Common problem-solving strategies include heuristics and algorithms.

- Heuristics are mental shortcuts that are often based on solutions that have worked in the past. They can work well if the problem is similar to something you have encountered before and are often the best choice if you need a fast solution.
- Algorithms are step-by-step strategies that are guaranteed to produce a correct result. While this approach is great for accuracy, it can also consume time and resources.

Heuristics are often best used when time is of the essence, while algorithms are a better choice when a decision needs to be as accurate as possible.
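The trade-off can be made concrete with the classic change-making problem (a hypothetical illustration, not from the article): a greedy heuristic that always takes the largest coin is fast but can be wrong, while an exhaustive dynamic-programming algorithm is guaranteed to find the fewest coins.

```python
def greedy_change(amount, coins):
    """Heuristic: repeatedly take the largest coin that fits.
    Fast, but not guaranteed optimal for every coin system."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def fewest_coins(amount, coins):
    """Algorithm: dynamic programming over every sub-amount.
    Slower, but guaranteed to find a minimal set of coins."""
    best = {0: []}
    for a in range(1, amount + 1):
        candidates = [best[a - c] + [c] for c in coins if c <= a and (a - c) in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

# With coins {1, 3, 4}, the heuristic answers 6 = 4 + 1 + 1 (three coins),
# while the algorithm finds 6 = 3 + 3 (two coins).
print(greedy_change(6, [1, 3, 4]))  # → [4, 1, 1]
print(fewest_coins(6, [1, 3, 4]))   # → [3, 3]
```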

## 4. Organizing Information

Before coming up with a solution, you need to first organize the available information. What do you know about the problem? What do you not know? The more information that is available, the better prepared you will be to come up with an accurate solution.

When approaching a problem, it is important to make sure that you have all the data you need. Making a decision without adequate information can lead to biased or inaccurate results.

## 5. Allocating Resources

Of course, we don't always have unlimited money, time, and other resources to solve a problem. Before you begin to solve a problem, you need to determine how high priority it is.

If it is an important problem, it is probably worth allocating more resources to solving it. If, however, it is a fairly unimportant problem, then you do not want to spend too much of your available resources on coming up with a solution.

At this stage, it is important to consider all of the factors that might affect the problem at hand. This includes looking at the available resources, deadlines that need to be met, and any possible risks involved in each solution. After careful evaluation, a decision can be made about which solution to pursue.

## 6. Monitoring Progress

After selecting a problem-solving strategy, it is time to put the plan into action and see if it works. This step might involve trying out different solutions to see which one is the most effective.

It is also important to monitor the situation after implementing a solution to ensure that the problem has been solved and that no new problems have arisen as a result of the proposed solution.

Effective problem-solvers tend to monitor their progress as they work towards a solution. If they are not making good progress toward reaching their goal, they will reevaluate their approach or look for new strategies.

## 7. Evaluating the Results

After a solution has been reached, it is important to evaluate the results to determine if it is the best possible solution to the problem. This evaluation might be immediate, such as checking the results of a math problem to ensure the answer is correct, or it can be delayed, such as evaluating the success of a therapy program after several months of treatment.

Once a problem has been solved, it is important to take some time to reflect on the process that was used and evaluate the results. This will help you to improve your problem-solving skills and become more efficient at solving future problems.

## A Word From Verywell

It is important to remember that there are many different problem-solving processes with different steps, and this is just one example. Problem-solving in real-world situations requires a great deal of resourcefulness, flexibility, resilience, and continuous interaction with the environment.


You can become a better problem solver by:

- Practicing brainstorming and coming up with multiple potential solutions to problems
- Being open-minded and considering all possible options before making a decision
- Breaking down problems into smaller, more manageable pieces
- Asking for help when needed
- Researching different problem-solving techniques and trying out new ones
- Learning from mistakes and using them as opportunities to grow

It's important to communicate openly and honestly with your partner about what's going on. Try to see things from their perspective as well as your own. Work together to find a resolution that works for both of you. Be willing to compromise and accept that there may not be a perfect solution.

Take breaks if things are getting too heated, and come back to the problem when you feel calm and collected. Don't try to fix every problem on your own—consider asking a therapist or counselor for help and insight.

If you've tried everything and there doesn't seem to be a way to fix the problem, you may have to learn to accept it. This can be difficult, but try to focus on the positive aspects of your life and remember that every situation is temporary. Don't dwell on what's going wrong—instead, think about what's going right. Find support by talking to friends or family. Seek professional help if you're having trouble coping.




## Problem Solving with Algorithms and Data Structures using Python ¶

By Brad Miller and David Ranum, Luther College

There is a wonderful collection of YouTube videos recorded by Gerry Jenkins to support all of the chapters in this text.

- 1.1. Objectives
- 1.2. Getting Started
- 1.3. What Is Computer Science?
- 1.4. What Is Programming?
- 1.5. Why Study Data Structures and Abstract Data Types?
- 1.6. Why Study Algorithms?
- 1.7. Review of Basic Python
- 1.8.1. Built-in Atomic Data Types
- 1.8.2. Built-in Collection Data Types
- 1.9.1. String Formatting
- 1.10. Control Structures
- 1.11. Exception Handling
- 1.12. Defining Functions
- 1.13.1. A Fraction Class
- 1.13.2. Inheritance: Logic Gates and Circuits
- 1.14. Summary
- 1.15. Key Terms
- 1.16. Discussion Questions
- 1.17. Programming Exercises
- 2.1.1. A Basic implementation of the MSDie class
- 2.2. Making your Class Comparable
- 3.1. Objectives
- 3.2. What Is Algorithm Analysis?
- 3.3. Big-O Notation
- 3.4.1. Solution 1: Checking Off
- 3.4.2. Solution 2: Sort and Compare
- 3.4.3. Solution 3: Brute Force
- 3.4.4. Solution 4: Count and Compare
- 3.5. Performance of Python Data Structures
- 3.7. Dictionaries
- 3.8. Summary
- 3.9. Key Terms
- 3.10. Discussion Questions
- 3.11. Programming Exercises
- 4.1. Objectives
- 4.2. What Are Linear Structures?
- 4.3. What is a Stack?
- 4.4. The Stack Abstract Data Type
- 4.5. Implementing a Stack in Python
- 4.6. Simple Balanced Parentheses
- 4.7. Balanced Symbols (A General Case)
- 4.8. Converting Decimal Numbers to Binary Numbers
- 4.9.1. Conversion of Infix Expressions to Prefix and Postfix
- 4.9.2. General Infix-to-Postfix Conversion
- 4.9.3. Postfix Evaluation
- 4.10. What Is a Queue?
- 4.11. The Queue Abstract Data Type
- 4.12. Implementing a Queue in Python
- 4.13. Simulation: Hot Potato
- 4.14.1. Main Simulation Steps
- 4.14.2. Python Implementation
- 4.14.3. Discussion
- 4.15. What Is a Deque?
- 4.16. The Deque Abstract Data Type
- 4.17. Implementing a Deque in Python
- 4.18. Palindrome-Checker
- 4.19. Lists
- 4.20. The Unordered List Abstract Data Type
- 4.21.1. The Node Class
- 4.21.2. The Unordered List Class
- 4.22. The Ordered List Abstract Data Type
- 4.23.1. Analysis of Linked Lists
- 4.24. Summary
- 4.25. Key Terms
- 4.26. Discussion Questions
- 4.27. Programming Exercises
- 5.1. Objectives
- 5.2. What Is Recursion?
- 5.3. Calculating the Sum of a List of Numbers
- 5.4. The Three Laws of Recursion
- 5.5. Converting an Integer to a String in Any Base
- 5.6. Stack Frames: Implementing Recursion
- 5.7. Introduction: Visualizing Recursion
- 5.8. Sierpinski Triangle
- 5.9. Complex Recursive Problems
- 5.10. Tower of Hanoi
- 5.11. Exploring a Maze
- 5.12. Dynamic Programming
- 5.13. Summary
- 5.14. Key Terms
- 5.15. Discussion Questions
- 5.16. Glossary
- 5.17. Programming Exercises
- 6.1. Objectives
- 6.2. Searching
- 6.3.1. Analysis of Sequential Search
- 6.4.1. Analysis of Binary Search
- 6.5.1. Hash Functions
- 6.5.2. Collision Resolution
- 6.5.3. Implementing the Map Abstract Data Type
- 6.5.4. Analysis of Hashing
- 6.6. Sorting
- 6.7. The Bubble Sort
- 6.8. The Selection Sort
- 6.9. The Insertion Sort
- 6.10. The Shell Sort
- 6.11. The Merge Sort
- 6.12. The Quick Sort
- 6.13. Summary
- 6.14. Key Terms
- 6.15. Discussion Questions
- 6.16. Programming Exercises
- 7.1. Objectives
- 7.2. Examples of Trees
- 7.3. Vocabulary and Definitions
- 7.4. List of Lists Representation
- 7.5. Nodes and References
- 7.6. Parse Tree
- 7.7. Tree Traversals
- 7.8. Priority Queues with Binary Heaps
- 7.9. Binary Heap Operations
- 7.10.1. The Structure Property
- 7.10.2. The Heap Order Property
- 7.10.3. Heap Operations
- 7.11. Binary Search Trees
- 7.12. Search Tree Operations
- 7.13. Search Tree Implementation
- 7.14. Search Tree Analysis
- 7.15. Balanced Binary Search Trees
- 7.16. AVL Tree Performance
- 7.17. AVL Tree Implementation
- 7.18. Summary of Map ADT Implementations
- 7.19. Summary
- 7.20. Key Terms
- 7.21. Discussion Questions
- 7.22. Programming Exercises
- 8.1. Objectives
- 8.2. Vocabulary and Definitions
- 8.3. The Graph Abstract Data Type
- 8.4. An Adjacency Matrix
- 8.5. An Adjacency List
- 8.6. Implementation
- 8.7. The Word Ladder Problem
- 8.8. Building the Word Ladder Graph
- 8.9. Implementing Breadth First Search
- 8.10. Breadth First Search Analysis
- 8.11. The Knight’s Tour Problem
- 8.12. Building the Knight’s Tour Graph
- 8.13. Implementing Knight’s Tour
- 8.14. Knight’s Tour Analysis
- 8.15. General Depth First Search
- 8.16. Depth First Search Analysis
- 8.17. Topological Sorting
- 8.18. Strongly Connected Components
- 8.19. Shortest Path Problems
- 8.20. Dijkstra’s Algorithm
- 8.21. Analysis of Dijkstra’s Algorithm
- 8.22. Prim’s Spanning Tree Algorithm
- 8.23. Summary
- 8.24. Key Terms
- 8.25. Discussion Questions
- 8.26. Programming Exercises

## Acknowledgements

We are very grateful to Franklin Beedle Publishers for allowing us to make this interactive textbook freely available. This online version is dedicated to the memory of our first editor, Jim Leisy, who wanted us to “change the world.”


## Why are algorithms called algorithms? A brief history of the Persian polymath you’ve likely never heard of

Debbie Passey, Digital Health Research Fellow, The University of Melbourne

## Disclosure statement

Debbie Passey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Algorithms have become integral to our lives. From social media apps to Netflix, algorithms learn your preferences and prioritise the content you are shown. Google Maps and artificial intelligence are nothing without algorithms.

So, we’ve all heard of them, but where does the word “algorithm” even come from?

Over 1,000 years before the internet and smartphone apps, Persian scientist and polymath Muhammad ibn Mūsā al-Khwārizmī invented the concept of algorithms.

In fact, the word itself comes from the Latinised version of his name, “algorithmi”. And, as you might suspect, it’s also related to algebra.

## Largely lost to time

Al-Khwārizmī lived from 780 to 850 CE, during the Islamic Golden Age. He is considered the “father of algebra” and, for some, the “grandfather of computer science”.

Yet, few details are known about his life. Many of his original works in Arabic have been lost to time.

It is believed al-Khwārizmī was born in the Khwarazm region south of the Aral Sea in present-day Uzbekistan. He lived during the Abbasid Caliphate, which was a time of remarkable scientific progress in the Islamic Empire.

Al-Khwārizmī made important contributions to mathematics, geography, astronomy and trigonometry. To help provide a more accurate world map, he corrected Alexandrian polymath Ptolemy’s classic cartography book, Geographia.

He produced calculations for tracking the movement of the Sun, Moon and planets. He also wrote about trigonometric functions and produced the first table of tangents.

Al-Khwārizmī was a scholar in the House of Wisdom (Bayt al-Hikmah) in Baghdad. At this intellectual hub, scholars were translating knowledge from around the world into Arabic, synthesising it to make meaningful progress in a range of disciplines. This included mathematics, a field deeply connected to Islam.

## The ‘father of algebra’

Al-Khwārizmī was a polymath and a religious man. His scientific writings started with dedications to Allah and the Prophet Muhammad. And one of the major projects Islamic mathematicians undertook at the House of Wisdom was to develop algebra.

Around 830 CE, Caliph al-Ma’mun encouraged al-Khwārizmī to write a treatise on algebra, Al-Jabr (or The Compendious Book on Calculation by Completion and Balancing). This became his most important work.

At this point, “algebra” had been around for hundreds of years, but al-Khwārizmī was the first to write a definitive book on it. His work was meant to be a practical teaching tool. Its Latin translation was the basis for algebra textbooks in European universities until the 16th century.

In the first part, he introduced the concepts and rules of algebra, and methods for calculating the volumes and areas of shapes. In the second part he provided real-life problems and worked out solutions, such as inheritance cases, the partition of land and calculations for trade.

Al-Khwārizmī didn’t use modern-day mathematical notation with numbers and symbols. Instead, he wrote in simple prose and employed geometric diagrams:

Four roots are equal to twenty, then one root is equal to five, and the square to be formed of it is twenty-five.

In modern-day notation we’d write that like so:

4x = 20, x = 5, x² = 25

## Grandfather of computer science

Al-Khwārizmī’s mathematical writings introduced the Hindu-Arabic numerals to Western mathematicians. These are the ten symbols we all use today: 1, 2, 3, 4, 5, 6, 7, 8, 9, 0.

The Hindu-Arabic numerals are important to the history of computing because they use the number zero and a base-ten decimal system. Importantly, this is the numeral system that underpins modern computing technology.

Al-Khwārizmī’s art of calculating mathematical problems laid the foundation for the concept of algorithms. He provided the first detailed explanations for using decimal notation to perform the four basic operations (addition, subtraction, multiplication, division) and computing fractions.

This was a more efficient computation method than using the abacus. To solve a mathematical equation, al-Khwārizmī systematically moved through a sequence of steps to find the answer. This is the underlying concept of an algorithm.
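A modern, minimal sketch can make the point concrete: grade-school decimal addition, written out as an explicit, finite sequence of steps. The function below is purely illustrative and is not drawn from al-Khwārizmī’s own texts.

```python
# A minimal illustration of "an algorithm": decimal addition carried out
# as an explicit sequence of steps, in the spirit of al-Khwarizmi's
# digit-by-digit procedures (the example itself is modern and hypothetical).

def add_decimal(a: str, b: str) -> str:
    """Add two non-negative decimal numerals digit by digit, right to left."""
    a, b = a.zfill(len(b)), b.zfill(len(a))    # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry      # one elementary step
        digits.append(str(total % 10))         # write the units digit
        carry = total // 10                    # carry the tens digit
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal("478", "364"))  # 842
```

Each pass through the loop is one fixed, mechanical step, and the procedure always terminates — exactly the two properties that make a recipe an algorithm.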

Algorism, a Medieval Latin term named after al-Khwārizmī, refers to the rules for performing arithmetic using the Hindu-Arabic numeral system. Translated to Latin, al-Khwārizmī’s book on Hindu numerals was titled Algorithmi de Numero Indorum.

In the early 20th century, the word algorithm came into its current definition and usage: “a procedure for solving a mathematical problem in a finite number of steps; a step-by-step procedure for solving a problem”.

Muhammad ibn Mūsā al-Khwārizmī played a central role in the development of mathematics and computer science as we know them today.

The next time you use any digital technology – from your social media feed to your online bank account to your Spotify app – remember that none of it would be possible without the pioneering work of an ancient Persian polymath.

Correction: This article was amended to correct a quote from al-Khwārizmī’s work.


## Learning search algorithm: framework and comprehensive performance for solving optimization problems

- Open access
- Published: 09 May 2024
- Volume 57, article number 139 (2024)


- Chiwen Qu
- Xiaoning Peng
- Qilan Zeng

In this study, the Learning Search Algorithm (LSA) is introduced as an innovative optimization algorithm that draws inspiration from swarm intelligence principles and mimics the social learning behavior observed in humans. The LSA algorithm optimizes the search process by integrating historical experience and real-time social information, enabling it to effectively navigate complex problem spaces. By doing so, it enhances its global development capability and provides efficient solutions to challenging optimization tasks. Additionally, the algorithm improves the collective learning capacity by incorporating teaching and active learning behaviors within the population, leading to improved local development capabilities. Furthermore, a dynamic adaptive control factor is utilized to regulate the algorithm’s global exploration and local development abilities. The proposed algorithm is rigorously evaluated using 40 benchmark test functions from IEEE CEC 2014 and CEC 2020, and compared against nine established evolutionary algorithms as well as 11 recently improved algorithms. The experimental results demonstrate the superiority of the LSA algorithm, as it achieves the top rank in the Friedman rank-sum test, highlighting its power and competitiveness. Moreover, the LSA algorithm is successfully applied to solve six real-world engineering problems and 15 UCI datasets of feature selection problems, showcasing its significant advantages and potential for practical applications in engineering problems and feature selection problems.


## 1 Introduction

All production and social activities that human beings engage in are purposeful. Such activity is always guided by specific values or aesthetic orientations, and it frequently involves choosing a feasible, or even optimal, scheme from among alternatives, i.e., an optimization problem (Martello et al. 1984). In recent years, the importance of optimization in engineering design, disease identification, and other fields has been widely recognized. Specifically, by taking an objective function as a predefined measure of decision quality, the best decision or solution to a given design problem can be obtained by evaluating candidate schemes. Optimization problems arise constantly in scientific research and social practice, and they are often characterized by unknown search spaces, non-differentiable objective functions, high dimensionality, and non-convexity (Ezugwu 2022). Generally, optimization techniques can be roughly divided into deterministic and non-deterministic ones (Parsopoulos and Vrahatis 2002). Deterministic methods are usually gradient-based and are further divided into linear and nonlinear ones. Although these methods help to solve linear and nonlinear optimization problems, they tend to become trapped in local optima when dealing with non-differentiable problems, and so cannot solve such problems in minimal time or with guaranteed accuracy. Non-deterministic methods instead use a random generation strategy to find near-optimal solutions in the problem space, with the advantages of simple implementation and no need for gradient information (Li et al. 2011; Liu et al. 2013).

With the continuous expansion of engineering application fields and the growing complexity and difficulty of optimization problems, the need for optimization techniques has become increasingly evident. As a class of non-deterministic methods (random search methods), meta-heuristic algorithms have demonstrated excellent performance when tackling multimodal, discontinuous, and non-differentiable problems. Therefore, meta-heuristic algorithms have gained significant popularity in efficiently tackling diverse practical optimization problems across numerous fields, such as function optimization (Seo et al. 2006; Pan et al. 2006), pipeline scheduling (Rejowski and Pinto 2003; Li et al. 2021), optimization of neural network parameters (Abdolrasol et al. 2021; Abd Elaziz et al. 2021), key gene identification (Mandal and Mukhopadhyay 2015), image segmentation (Chander et al. 2011; Pham et al. 2018), parameter identification of photovoltaic module models (Ibrahim et al. 2020; Liang et al. 2020), optimization of engineering design, etc. (Kundu and Garg 2022; Onay and Aydemı̇r, S. B. 2022).

In terms of formation principle, meta-heuristic methods can be categorized into four distinct groups, each offering unique approaches to solving optimization problems encountered in various disciplines: the methods based on the evolutionary mechanism, the methods based on physical principles, the methods based on swarm intelligence, and the methods based on human social activities (Sharma et al. 2022 ; Wilson et al. 2022 ; Tian et al. 2022 ; Ewees et al. 2022 ). Section 2 offers an extensive compilation of the comprehensive literature on the development and application of numerous novel metaheuristics across various domains.

Meanwhile, many scholars have optimized the original basic algorithms to solve the optimization problems of real-world engineering applications more efficiently. For example, because a randomly generated initial population carries large randomness and uncertainty, many scholars have improved population initialization using chaos mapping (Tutueva et al. 2020), opposition-based (reverse) learning (Ruan et al. 2022), Sobol sequences (Sun et al. 2021), square neighborhood topology (Rachdi et al. 2020), and other strategies to improve algorithm convergence. Some scholars have adopted strategies such as sine–cosine optimization (Chen et al. 2020a), Gaussian walks (Khalilpourazari et al. 2021), Levy flight (Kaidi et al. 2022), and quantum behavior (Zhang et al. 2021a) to optimize individual iterative updates. Also, nonlinear inertia weight (Yu et al. 2020), horizontal crossover (Chen et al. 2019), spiral update (Lin et al. 2022), and other approaches have been employed to balance the global and local development of the algorithm. Moreover, some scholars have combined the advantages of two or more algorithms and proposed improved hybrid-strategy algorithms (Shi et al. 2005; Yaghini et al. 2013; Qiao et al. 2020), such as the exploratory cuckoo search (ECS) algorithm proposed by Abed-Alguni et al. (2021) and the improved SSA algorithm proposed by Sidea (2024).
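As a hedged illustration of one such initialization strategy, the sketch below seeds a population using a logistic chaotic map. The map, its parameter r = 4, the seeding range, and the function name are illustrative assumptions, not the exact formulations of the cited works.

```python
import random

def logistic_map_init(pop_size, dim, lower, upper, r=4.0):
    """Chaos-based initialization sketch: iterate the logistic map
    x <- r * x * (1 - x) and scale each chaotic value into [lower, upper].
    The specific map and parameters here are illustrative; published
    chaos-initialization variants differ in their details."""
    population = []
    x = random.uniform(0.01, 0.99)  # seed inside the map's chaotic range
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = r * x * (1.0 - x)                     # one logistic map step
            individual.append(lower + x * (upper - lower))
        population.append(individual)
    return population
```

Compared with uniform random sampling, the chaotic sequence is deterministic yet non-repeating, which is the property these initialization schemes exploit to spread the initial population more evenly over the search space.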

In metaheuristic algorithms, the exploration phase searches the global space and provides the ability to escape local optima, while the exploitation phase sharpens the search within promising local regions. Balancing exploration and exploitation is a central challenge in every metaheuristic algorithm, as it determines whether the algorithm can find the global optimum. In general, different metaheuristic algorithms perform differently on optimization problems because of their different operators and mechanisms. According to the “No Free Lunch” (NFL) theorem (Qiao et al. 2020), all metaheuristic algorithms have the same average performance over all optimization problems. In other words, no single algorithm is best for every problem, and the performance of different algorithms varies when prior knowledge is used to tailor a metaheuristic to a specific problem. Finding the most suitable algorithm for each specific type of optimization problem therefore remains a challenge. Each metaheuristic algorithm has its own characteristics, since each draws inspiration from different natural or social behaviors. To evaluate their performance and find suitable application areas, metaheuristic algorithms require comprehensive testing on various benchmark functions and real-world applications, as well as continual improvement. These considerations motivate the design of new metaheuristic algorithms for a wide range of optimization problems.

This paper proposes a novel optimization algorithm, called the learning search algorithm (LSA), which is inspired by human learning behaviors and combines global exploration with local development. The LSA algorithm uses a dynamic adaptive control parameter to shift from the global exploration phase to the local development phase, which enhances its global search ability and helps it avoid local optima. In the local development phase, the algorithm exploits the teaching behavior of the role model in the current population, and individuals actively learn from that role model, improving the learning ability of the entire population. The proposed LSA algorithm is evaluated on 40 benchmark functions from IEEE CEC2014 and CEC2020, 6 real-world engineering optimization problems, and 15 feature selection cases from the UCI dataset. Compared with nine high-performance algorithms and eleven recently proposed algorithms, the LSA algorithm shows promising results in terms of convergence speed, search accuracy, and scalability. The experimental results suggest that the LSA algorithm outperforms the selected comparison algorithms on most of the selected test problems. Statistical analysis using the Wilcoxon signed-rank test and the Friedman rank-sum test further confirms the proposed algorithm’s superiority.

The paper presents several significant contributions:

In terms of learning mechanism, the LSA algorithm simulates the process of human knowledge acquisition. The algorithm maintains a historical experience knowledge base from which individuals learn; individuals can also learn actively and acquire knowledge from the influence of role models. The learning outcomes, in turn, continuously update the historical experience knowledge base.

Regarding its adaptability, the LSA algorithm exhibits a high level of adaptability. In each iteration process, knowledge and information flow constantly between the historical experience knowledge base and the current individuals. This enables the LSA algorithm to adaptively adjust the search strategy based on the complexity and characteristics of the problem, thereby enhancing the search efficiency and solution quality.

The concept of idea transmission is another important aspect of the LSA algorithm. It improves the solutions of individuals through the transmission of ideas. The algorithm transfers excellent search ideas to learning individuals based on historical experiences and the most outstanding solutions of the population.

Furthermore, the LSA algorithm possesses interpretability and ease of implementation. Its ideas are relatively simple and intuitive, making it easy to understand and implement. The role switch between individuals and the process of knowledge transmission can be explained and analyzed, allowing users of the algorithm to better understand the optimization process.

The remaining sections of this paper are organized as follows: In Section 2 , an overview of the literature on metaheuristic algorithms is provided. Section 3 provides a detailed introduction to the principle and mathematical model of the LSA algorithm. In Section 4 , comprehensive experiments are conducted to demonstrate the superiority of the LSA algorithm over comparative optimization algorithms. Finally, Section 5 presents the conclusions of this paper.

## 2 Literature review

This section provides an overview of the current advancements in metaheuristics. In recent times, numerous metaheuristic algorithms have been introduced and extensively studied. These algorithms primarily fall into four categories: (1) swarm-based algorithms that emulate swarm intelligence, (2) evolutionary-based algorithms that draw inspiration from natural evolutionary processes, (3) physics or chemistry-based algorithms that are inspired by physical or chemical phenomena, and (4) social or human-based algorithms that are influenced by human or social behaviors. Table 1 presents a compilation of notable and recently developed metaheuristic algorithms.

Natural evolution algorithms are developed from biological phenomena such as natural evolution, and the representative algorithm is the GA algorithm (Mirjalili 2019); other algorithms include the differential evolution (DE) algorithm (Das and Suganthan 2010), evolution strategy (ES) (Schwefel and Rudolph 1995), the memetic algorithm (MA) (Moscato et al. 2004), and genetic programming (GP) (Sette and Boullart 2001). Swarm intelligence optimization algorithms, which simulate the social behavior of animals based on group foraging behaviors, have attracted increasing attention. Notable algorithms in this domain include the particle swarm optimization (PSO) algorithm (Poli et al. 2007), the crow search algorithm (CSA) (Askarzadeh 2016), the cuckoo search (CS) algorithm (Yang and Deb 2014), the social spider algorithm (SSA) (James and Li 2015), the sparrow search algorithm (SSA) (Xue and Shen 2020), the red fox optimization (RFO) algorithm (Połap and Woźniak 2021), the salp swarm algorithm (SSA) (Mirjalili et al. 2017), dolphin partner optimization (DPO) (Dhanya and Arivudainambi 2019), the lion optimization algorithm (LOA) (Yazdani and Jolai 2016), dingoes hunting strategies (DHS) (Peraza-Vázquez et al. 2021), the mycorrhiza tree optimization algorithm (MTOA) (Carreon-Ortiz and Valdez 2022), charged system search (CSS) (Kaveh and Talatahari 2010), the chameleon swarm algorithm (CSA) (Braik 2021), the wild horse optimizer (WHO) (Naruei and Keynia 2022), the mayfly optimization algorithm (MOA) (Zervoudakis and Tsafarakis 2020), capuchin search (CSA) (Braik et al. 2021), the zebra optimization algorithm (ZOA) (Trojovská et al. 2022), Tasmanian devil optimization (TDO) (Dehghani et al. 2022a), artificial rabbits optimization (ARO) (Wang et al. 2022a), the osprey optimization algorithm (OOA) (Dehghani and Trojovský 2023), the exponential distribution optimizer (EDO) (Abdel-Basset et al. 2023), and others.
These algorithms have demonstrated promising performance in solving various complex optimization problems. Besides, there is a class of physical search algorithms based on the simulation of physical phenomena, such as simulated annealing (SA) (Bertsimas and Tsitsiklis 1993), the gravitational search algorithm (GSA) (Saremi et al. 2017), curved space optimization (CSO) (Moghaddam and Moghaddam 2012), lightning attachment procedure optimization (LAPO) (Nematollahi et al. 2017), black hole mechanics optimization (BHMO) (Kaveh et al. 2020a), plasma generation optimization (PGO) (Kaveh et al. 2020b), the solid system algorithm (SSA) (Zitouni et al. 2020), the atomic search algorithm (ASO) (Li et al. 2020a), heap-based optimization (HBO) (Askari et al. 2020), the weighted mean of vectors algorithm (INFO) (Ahmadianfar et al. 2022), the exponential distribution optimizer (EDO) (Ayyarao et al. 2022), the subtraction-average-based optimizer (SABO) (Trojovský and Dehghani 2023), etc. Some intelligent algorithms have been designed based on human social activities, such as teaching–learning-based optimization (TLBO) (Rao et al. 2012), the skill optimization algorithm (SOA) (Ramshanker and Chakraborty 2022), the cooperative search algorithm (CSA) (Feng et al. 2021), the human urbanization algorithm (HUA) (Kılkış and Kılkış 2019), heap-based optimization (HBO) (Askari et al. 2020), stock exchange trading optimization (SETO) (Emami 2022), the arithmetical optimization algorithm (AOA) (Abualigah et al. 2021), driving training-based optimization (DTBO) (Dehghani et al. 2022b), the chef-based optimization algorithm (CBOA) (Trojovská and Dehghani 2022), the war strategy optimization algorithm (WSO), and so on. Additionally, in the past three years, many relatively new meta-heuristic algorithms have been proposed that do not fall into the categories above.
For example, inspired by the management strategy of the constitutional monarchy government, Ahmia et al., proposed the monarchy meta-heuristic (MN) optimization algorithm (Ahmia and Aider 2019 ). Brammya et al. utilized a simulation of human deer hunting behavior to propose a deer hunting optimization algorithm (DHOA) (Brammya et al. 2019 ). In a similar vein, Hayyolalam and Kazem devised a black widow optimization (BWO) algorithm (Hayyolalam and Kazem 2020 ), drawing inspiration from the mating behavior of black widow spiders. Nematollahi et al. introduced the Golden Ratio Optimization Method (GROM) as an optimization approach (Nematollahi et al. 2020 ). Li et al. developed the virus propagation optimization (VSO) algorithm (Li et al. 2020b ), which simulates the propagation process of the virus. Alsattar et al. proposed the bald eagle search (BES) algorithm (Alsattar et al. 2020 ) based on the hunting process of the bald eagle.

## 3 Learning search algorithm

This section provides the details and optimization procedure of the proposed learning search algorithm (LSA). The algorithm is inspired by human learning behaviors in the social environment, including the global learning behavior guided by historical experience and other social individuals, and the local learning behavior guided by role models. The analysis of the mathematical model and the realization process and time complexity of LSA is presented below.

## 3.1 The basics of MHSs and the proposed LSA method

The general framework of meta-heuristic search algorithms (MHSs) typically consists of three essential components: the guide selection mechanism, the design of search operators, and the design of the update mechanism, as demonstrated in Fig. 1 (Kahraman et al. 2023).

Fig. 1: General steps involved in the MHS process

The process of selecting candidate solutions from a population to guide the search is a fundamental aspect of an MHS algorithm. Various methods exist for guiding this selection (Fig. 1), but the dominant approach is currently the survival theorem, which compares the fitness values of individuals within the population (Forrest 1993; Holland 1992). More recently, the fitness-distance balance (FDB) has emerged as a promising new method for guiding selection in MHS algorithms (Kahraman et al. 2022; Guvenc et al. 2021; Duman et al. 2023). Selecting excellent individuals as guides is a critical step in MHS algorithms: a reasonable selection, balancing diversity and convergence, directly affects the efficiency and quality of the search results. The driving force behind human progress is the ability to learn different perspectives and interpretations from different historical periods, cultivating critical thinking and analytical skills. By comparing and evaluating different historical events and interpretations, people can gain a better understanding of the complexity and diversity of history and draw their own conclusions. Additionally, role models serve as symbols of successful experience, having achieved excellence in a particular field or skill. Humans can use the behavior, thinking, and decision-making of role models to guide the application of knowledge, avoiding common mistakes and dilemmas. Following the mechanism of MHS algorithms, we can use historical experience and role models as guiding leaders in the search process, emphasizing the balance between diversity and convergence.
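As a rough sketch of how fitness-distance balance (FDB) style guide selection might be realized for a minimization problem, the snippet below scores each candidate by combining its normalized fitness with its normalized distance to the current best. The equal weighting and normalization details are one plausible reading of the FDB idea, not the exact formulation of the cited papers.

```python
import math

def fdb_select(population, fitness):
    """Sketch of FDB-style guide selection (minimization): score each
    candidate by normalized fitness plus normalized Euclidean distance
    to the current best, then return the highest-scoring candidate.
    Weighting and normalization details vary across FDB papers; this is
    an illustrative reading, not a reference implementation."""
    best = population[fitness.index(min(fitness))]
    dist = [math.dist(ind, best) for ind in population]
    f_lo, f_hi = min(fitness), max(fitness)
    d_hi = max(dist) or 1.0                           # guard: identical points
    scores = []
    for f, d in zip(fitness, dist):
        norm_f = (f_hi - f) / ((f_hi - f_lo) or 1.0)  # better fitness -> higher
        norm_d = d / d_hi                             # farther from best -> higher
        scores.append(norm_f + norm_d)                # balance quality and diversity
    return population[scores.index(max(scores))]
```

Note how the distance term can favor a slightly worse but more distant candidate over the incumbent best, which is exactly the diversity-versus-convergence balance the text describes.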

The design of search operators is a crucial element of MHS algorithms, as it shapes the models simulating the distinct behaviors and survival skills unique to each population. Search operators differ across MHS algorithms, including genetic crossover and mutation operators (Chen et al. 2020a; Mirjalili 2019; Das and Suganthan 2010; Schwefel and Rudolph 1995; Holland 1992), operators based on swarm foraging behavior (Poli et al. 2007; Askarzadeh 2016; Yang and Deb 2014; James and Li 2015; Xue and Shen 2020; Połap and Woźniak 2021), operators based on physical natural phenomena (Bertsimas and Tsitsiklis 1993; Moghaddam and Moghaddam 2012; Li et al. 2020a), and operators based on human social activities (Wilson et al. 2022; Tian et al. 2022; Ewees et al. 2022). Human learning surpasses that of other organisms because of a high capacity for summarizing experience, for imitative learning, and for autonomous learning. The proposed algorithm models the subjective autonomy of human learning behavior and the diversity of human learning approaches, which gives it strong potential as a search operator design within MHS.

In MHS algorithms, the majority of update mechanisms employ a greedy approach based on fitness values (Yang and Deb 2014; Carreon-Ortiz and Valdez 2022; Saremi et al. 2017). This approach guarantees a balanced turnover of individuals within the population, ensuring that the introduction of a given number of new individuals is accompanied by the removal of an equivalent number of existing individuals. An alternative update mechanism, referred to as the “direct” approach, is depicted in Fig. 1 and exemplified by the SCA (Trojovský and Dehghani 2022) and SMA (Li et al. 2020c). In these algorithms, mutated individuals survive at each step of the search process, while the previous individuals are eliminated. Furthermore, the NSM score-based approach has also proven to be an efficient update mechanism (Kahraman et al. 2023). Human learning behavior can be characterized as “taking the essence, discarding the dross.” Unlike other organisms, humans possess the capacity for reflection, critical thinking, and abstract reasoning. They can selectively choose valuable content that aids personal learning and understanding, assimilating and integrating it into their own knowledge system. Thus, the update mechanism designed for the LSA algorithm adopts a greedy strategy based on fitness values.
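A greedy, fitness-based update of the kind described above can be sketched as follows (minimization assumed; the function and variable names are illustrative, not the paper’s implementation):

```python
def greedy_update(population, fitness, candidates, cand_fitness):
    """Greedy update mechanism sketch (minimization): a mutated candidate
    replaces its parent only when it has strictly better fitness, so the
    population size stays constant and its quality never degrades."""
    for i, (cand, cf) in enumerate(zip(candidates, cand_fitness)):
        if cf < fitness[i]:           # keep the improvement...
            population[i] = cand
            fitness[i] = cf           # ...and discard the worse parent
    return population, fitness
```

The “direct” alternative mentioned above would instead replace every parent unconditionally; the greedy form trades some diversity for a monotonically non-worsening population.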

## 3.2 Inspiration

Learning behavior encompasses the acquisition of behaviors by individuals, resulting from a combination of genetic and environmental factors and shaped by life experiences. For human beings, learning is not merely an activity of adapting to the environment; it also has social significance. Human learning therefore has social characteristics, manifested mainly in its reliance on indirect experience and in its active initiative.

Interaction with others allows individuals to acquire knowledge not only from their direct experiences but also from the collective historical experiences of human society. As human culture has evolved, society has accumulated a vast body of knowledge and experience, which has been transmitted through social inheritance. From birth, individuals in human society have the ability to assimilate the wisdom passed down by previous generations through interactions with teachers in educational institutions. Additionally, they also have the opportunity to acquire valuable social experiences through interactions with their contemporaries. This mode of indirect experiential learning is characterized by its rich and diverse content and form, setting it apart from learning processes observed in animals (McFarland et al. 1993; Bennett 2011), as depicted in Fig. 2(a).

Human learning behavior through two distinct avenues. Panel ( a ) highlights the role of historical experience and interaction with other individuals in the learning process. Panel ( b ) illustrates how active learning and mentorship from role models also contribute to effective learning outcomes

Animal learning is primarily an adaptive process driven by environmental factors, making it a passive endeavor. In contrast, human learning encompasses not only a desire to understand the world but also a determination to shape and alter it. Thus, humans engage in active interactions with their surroundings, learning through integration with the individuals they encounter. The purpose of human learning extends beyond merely satisfying physiological needs; it also encompasses the demands of social life. Consequently, humans possess a wide range of learning motivations and objectives. In their pursuit of these objectives, humans actively explore diverse and effective learning methods, a capability that surpasses the realm of animal learning (Schoenewolf 1990 ; Bruner 2009 , 1971 ), exemplified in Fig. 2 (b).

Inspired by the behaviors observed in human life and learning, this paper presents the Learning Search Algorithm (LSA) as a groundbreaking meta-heuristic approach. The LSA’s mathematical model is outlined below.

## 3.3 Mathematical model of the proposed algorithm

Human learning exhibits two distinct modes of behavior, characterized by different approaches to acquiring knowledge. One mode involves the utilization of historical experience and interactions with others, enabling a global search process. In this mode, individuals benefit from the indirect aspect of human learning, where accumulated wisdom and collective experiences guide their learning journey. The other mode involves the active participation of individuals, particularly the role model, who represents the current optimal individual. This role model not only imparts knowledge to others but also actively engages in learning, thereby facilitating local search within the learning algorithm. This active aspect of human learning contributes to the refinement and fine-tuning of the individual's knowledge. By incorporating these two modes of learning behavior, the Learning Search Algorithm (LSA) enriches and comprehensively expands the overall knowledge of the population. The algorithm integrates the global exploration facilitated by historical experiences and interactions with the localized refinement achieved through active learning from the role model. This integration yields a synergistic effect, combining the collective wisdom accumulated through historical experiences with the adaptability and learning capabilities of individuals. Furthermore, the LSA incorporates an autonomously controlled dynamic factor, ensuring a seamless transition from wide-ranging exploration to precise refinement. This dynamic adaptation mechanism enables the algorithm to strike a balance between exploration and exploitation, allowing for efficient knowledge acquisition and optimization.

## 3.3.1 Initialization

In this investigation, we utilized a population-based swarm intelligence optimization technique referred to as the Learning Search Algorithm (LSA). LSA is designed to find optimal solutions by iteratively updating the individual candidate solutions within the population. The population’s position is modeled using a matrix, as demonstrated in Formula ( 1 ):

where, \(n\) represents the number of individuals, \(\dim\) indicates the dimensionality of the search space, and \(x_{i,j}\) represents the \(j\)-th dimension of individual \(i\). It is noteworthy that each position is generated through a uniform distribution, as illustrated in Formula ( 2 ):

where, \(rand(0,1)\) denotes a random number between 0 and 1, while \(ub_{j}\) and \(lb_{j}\) correspond to the upper and lower bound values, respectively.
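Formula (2) corresponds to the standard box-uniform initialization. A minimal Python sketch (the function name is ours; the paper's implementation is in MATLAB):

```python
import numpy as np

def initialize_population(n, dim, lb, ub, seed=None):
    """Formula (2): x[i, j] = lb[j] + rand(0, 1) * (ub[j] - lb[j]),
    giving a uniform sample inside the box [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + rng.random((n, dim)) * (ub - lb)
```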

Formula ( 3 ) provides a means of evaluating the fitness score for each individual in the search population. This score serves as a metric to assess their overall level of fitness within the context of the study.

In the LSA algorithm, the balance control factor \(\delta\) realizes the conversion from global exploration to fine-tuning in a dynamic and self-adaptive way, and the calculation method is:

where, the balance factors \({\delta}_{init}\) and \({\delta}_{final}\) refer to the initial and final values, respectively, while \({t}_{\max }\) signifies the maximum iteration count. Additionally, \(y^{t} \in ( - 1,1)\) is a chaotic sequence. The multiplication factors λ and γ are determined as follows.

The selection of the multiplication factors λ and γ is discussed here. To ensure that the balance control factor δ remains within the interval (0, 1), we first analyze various values of γ (see Figs. 3 and 4 ). Figure 4 indicates that the requirement is satisfied when 1.4 < γ < 2.2. Building on this observation, we refine the selection of γ by testing the values 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, and 2.1 on 30 benchmark functions from CEC 2014. The test results show that the choice of γ has little effect on the LSA algorithm's outcomes; however, slightly superior performance is observed when γ equals 2 (refer to Appendix Table 27 ). Therefore, for brevity, we set γ to 2 in this paper.

We also explore the impact of λ on the LSA algorithm across various values. To balance global exploration and local exploitation, the LSA algorithm should exhibit strong global exploration in early iterations and potent local exploitation, together with the ability to escape local optima, in later iterations. Consequently, some individuals in the later iterations of LSA should still conduct global exploration operations to prevent the algorithm from converging to local optima. As depicted in Fig. 3 , the value of λ should lie in (0.5, 0.85); when λ is too large, δ exceeds 1. To determine λ precisely, experiments are conducted with λ set to 0.5, 0.6, 0.7, 0.75, and 0.8, respectively. Statistical analysis of the results in Appendix Table 26 reveals that setting λ to 0.75 yields 10 optimal results, making it the most favorable choice among these scenarios. In summary, setting λ to 0.75 and γ to 2 is reasonable.

Initially, \(\delta\) continually decreases, marking the transition from wide-ranging exploration to focused searching. As the process proceeds, however, some individuals fall into local optima, causing \(\delta\) to increase. To mitigate this issue, \({\delta}_{init}\) and \({\delta}_{final}\) are assigned the values 1 and 0, respectively, enabling a dynamic balance between global exploration and local exploitation in the proposed algorithm.

The range of the balance control factor δ when the multiplication factors λ take different values

The range of the balance control factor δ when the multiplication factors γ take different values

This approach ensures that the algorithm maintains a balance between wide-ranging exploration and local refinement throughout the iteration process. It increases the likelihood of extensive global search in the early stages of evolution and progressively shifts the focus towards thorough local search in later stages, thereby optimally balancing the algorithm's capacities for global exploration and local exploitation.
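Since Formula (5) itself is not reproduced in this text, the behavior described above can only be illustrated schematically. The sketch below is an assumption-laden stand-in: a base value decaying from δ_init = 1 to δ_final = 0, perturbed by a chaotic sequence in (−1, 1) (here a Chebyshev map, chosen for illustration) and shaped by λ = 0.75 and γ = 2, then clipped to [0, 1]:

```python
import numpy as np

def balance_factor(t, t_max, delta_init=1.0, delta_final=0.0,
                   lam=0.75, gamma=2.0, y0=0.7):
    """Illustrative stand-in for Formula (5), NOT the paper's exact
    expression: a linear decay from delta_init to delta_final plus a
    chaotic perturbation y_t scaled by lam and a bump t/t_max*(1-t/t_max)."""
    # Chebyshev chaotic map: y_{k+1} = cos(gamma * arccos(y_k)), stays in [-1, 1]
    y = y0
    for _ in range(t):
        y = np.cos(gamma * np.arccos(y))
    base = delta_final + (delta_init - delta_final) * (1.0 - t / t_max)
    delta = base + lam * y * (t / t_max) * (1.0 - t / t_max)
    return float(np.clip(delta, 0.0, 1.0))
```

The key qualitative features match the description: δ starts at 1, ends at 0, decreases overall, and fluctuates mid-run so that some late individuals can still explore globally.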

## 3.3.2 Exploration phase

The historical experiences of humanity offer valuable insights for the education of later generations and for individual learning journeys. To give the algorithm strong global exploration ability, this paper uses historical experience together with information from other individuals to guide the search. The individuals in the population learn from past events and from current members of the population with a random probability; the corresponding update schematic is illustrated in Fig. 5 (c). The mathematical model is shown in Formula ( 6 ).

where, \({x}_{i}^{t}\) represents the i th learning individual, while \({x}_{j}^{t}\) denotes an individual selected at random from the same population. The new individual is denoted by \({x}_{i}^{t + 1}\) , and \(rand\) is drawn uniformly from [0,1]. Additionally, \({history}_{r}^{t}\) represents an individual randomly selected from the past-experience database. During initialization, Formula ( 2 ) is used to produce a history population of the same size as \(x\) , assigned random values. In every iteration, \(history\) is modified using Eq. ( 7 ), and the \(history\) matrix undergoes random variation according to Formula ( 8 ).

where \(a\) and \(b\) are drawn uniformly from [0,1]. The variable \(permuting\) represents a random permutation operation.
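Since Formulas (6)-(8) are not reproduced in this text, the exploration phase can only be sketched under assumptions. In the Python sketch below, the step form (moving toward either a random history entry or a random peer) and the history blend are illustrative choices, not the paper's exact equations; only the greedy acceptance and the random permutation of the archive are taken directly from the description:

```python
import numpy as np

def exploration_step(pop, history, fitness, rng):
    """Illustrative sketch of the LSA exploration phase. Each individual
    learns either from a random entry of the history archive or from a
    random peer (assumed step form); the archive is then blended with
    the new population and randomly permuted, in the spirit of
    Formulas (7)-(8)."""
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        j = rng.integers(n)                      # random peer x_j
        r = rng.integers(len(history))           # random history entry
        guide = history[r] if rng.random() < 0.5 else pop[j]
        cand = pop[i] + rng.random(dim) * (guide - pop[i])
        if fitness(cand) < fitness(pop[i]):      # greedy acceptance
            new_pop[i] = cand
    a, b = rng.random(), rng.random()            # random weights a, b
    history = a * history + b * new_pop          # illustrative blend (Eq. (7))
    history = history[rng.permutation(len(history))]  # permutation (Formula (8))
    return new_pop, history
```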

Different search patterns of the individuals in 2D search space. (a) The individual's active learning mode. (b) The radiation pattern of role models. (c) Global exploration mode

## 3.3.3 Exploitation phase

The learning process involves learners acquiring knowledge from historical experiences while simultaneously enhancing the overall learning ability of the population. This is achieved through active learning from role models, who are the optimal individuals exhibiting effective teaching behaviors. Nonetheless, not all individuals in the population benefit equally from these role models, owing to their limited capacity to acquire knowledge. It is therefore necessary to determine the ideal number of beneficiaries of the role models to attain the best enhancement. Psychological research has indicated that the average attention span of an individual typically ranges from 7 to 9 (Schoenewolf 1990 ; Bruner 1971 , 2009 ). Consequently, attempting to teach too many learners at once can hinder the teaching process and prevent the instructional potential of role models from being fully utilized. From this analysis, it follows that certain individuals actively learn from the role models, while the role models instruct only selected individuals on particular occasions. The learning schematic diagrams for the algorithm are presented in Fig. 5 (a) and (b), accompanied by the corresponding mathematical model illustrated in Formula ( 9 ).

where, \(rand\) and \(r\) are drawn uniformly from [0,1], and \(x_{best}^{t}\) denotes the position of the role model. The degree to which the learner acquires knowledge from the role model is modulated by the learning factor \(\beta\) , which is computed using Formula ( 10 ). Moreover, we also compute \(x_{average}^{t}\) using Formula ( 11 ).

where, \(sub\) denotes the number of individuals taught by the role model, and \(randi\) is a randomly selected integer within the range of 1 to \(sub\) . Experimental measurement reveals that the algorithm achieves optimal performance when \(sub = 3\) .

During this stage, learners enhance their knowledge acquisition through the teaching method employed by role models. This directed learning from role models effectively enhances the overall learning ability of the learners.
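Formulas (9)-(11) are not reproduced in this text, so the exploitation phase below is an assumption-laden Python sketch: the split between a small "taught" group (sub = 3 pupils pulled toward the population mean by the role model) and the rest (actively learning from the role model), the linearly decaying learning factor β, and the greedy acceptance are illustrative readings of the description, not the paper's exact equations:

```python
import numpy as np

def exploitation_step(pop, fitness, t, t_max, sub=3, rng=None):
    """Illustrative sketch of the LSA exploitation phase. x_best is the
    role model, x_average the population mean (Formula (11)); beta is an
    assumed stand-in for the learning factor of Formula (10)."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    fits = np.array([fitness(x) for x in pop])
    x_best = pop[np.argmin(fits)]            # role model (best individual)
    x_average = pop.mean(axis=0)             # Formula (11): population mean
    beta = 1.0 - t / t_max                   # assumed decaying learning factor
    taught = set(rng.choice(n, size=sub, replace=False).tolist())
    new_pop = pop.copy()
    for i in range(n):
        if i in taught:                      # role-model teaching mode
            step = rng.random(dim) * (x_average - pop[i])
        else:                                # active learning mode
            step = beta * rng.random(dim) * (x_best - pop[i])
        cand = pop[i] + step
        if fitness(cand) < fitness(pop[i]):  # greedy acceptance
            new_pop[i] = cand
    return new_pop
```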

## 3.3.4 Cross boundary processing

During the search process, individuals in the population may exceed the constraints of the problem domain, so individuals outside the boundary must be handled. Different boundary-processing methods have different effects on efficiency. The processing method adopted by this algorithm is shown in Formula ( 12 ):

where, \(lb\) and \(ub\) correspond to the lower and upper bound values, respectively.
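Formula (12) itself is not reproduced in this text; one common boundary-handling rule consistent with the description, shown here as an illustrative sketch, is to clip each violating component back to the nearest bound:

```python
import numpy as np

def enforce_bounds(pop, lb, ub):
    """Illustrative boundary handling (a stand-in for Formula (12)):
    any component that leaves [lb, ub] is clipped to the violated bound."""
    return np.clip(pop, lb, ub)
```

Other schemes (random re-initialization of violating components, reflection) are also common in MHS algorithms and trade diversity against proximity to the boundary.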

## 3.4 The procedure of LSA

Based on the earlier theoretical analysis, the LSA consists of two primary stages: global exploration and local exploitation. In the exploration stage, the algorithm simulates learning behaviors by drawing upon past experience and the distinctive learning tendencies observed in contemporary society. The exploitation stage, in turn, emulates teaching and acquisition behaviors. In this section, we present a comprehensive overview of the key procedures employed by LSA. Additionally, to assist with implementation, we provide the flow chart and pseudo-code of the algorithm in Fig. 6 and Algorithm 1.

Flow chart of LSA algorithm

The pseudo code of LSA

## 3.5 Time complexity analysis

The time complexity of the proposed LSA algorithm serves as a crucial performance indicator. The entire LSA process consists of three key steps: initialization, fitness evaluation, and refinement of the learning search process. The computational complexity of the initialization process is \(O(nPop)\) . During each iteration, approximately \(\delta \cdot nPop\) individuals engage in global exploration operations, \((1 - \delta ) \cdot nPop/2\) individuals utilize the role-model guidance strategy, and \((1 - \delta ) \cdot nPop/2\) individuals utilize the active learning strategy from role models. As a result, the time complexity of LSA can be estimated as \(O(nPop) + O(t_{\max } \cdot \delta \cdot nPop) + O(t_{\max } \cdot (1 - \delta ) \cdot nPop/2) + O(t_{\max } \cdot (1 - \delta ) \cdot nPop/2) = O(nPop + t_{\max } \cdot nPop) = O(t_{\max } \cdot nPop)\) .

A heatmap visually represents data using color gradients to illustrate variations in data magnitude, aiding in the comprehension of correlations and trends. In Fig. 7 , darker hues indicate longer algorithm runtimes. Here, we examine the time efficiency of the LSA algorithm in addressing optimization problems from two angles. Firstly, we analyze its overall time complexity, which hinges on factors such as population size, iteration count, and problem intricacy. Figure 7 (a) illustrates that, with a set number of iterations, larger populations incur greater time costs (as depicted by colors closer to red), albeit with improved algorithmic precision (as indicated in Appendix Table 28 ). Conversely, Fig. 7 (b) demonstrates that, with a fixed evaluation count, solving more complex problems consumes more time (evident in functions F26 and F27, represented by darker colors), albeit with marginal gains in precision (as shown in Appendix Table 29 ).

Secondly, we scrutinize algorithm runtime and precision under specific execution strategies, comparing outcomes with varied balance control values δ against formula ( 5 ). Figure 7 reveals that setting δ to 0 results in maximal execution times (reflected by red hues), while δ set to 1 minimizes execution times (reflected by blue hues). This disparity arises from δ's influence on the algorithm's tendency towards local exploitation (Eqs. ( 10 )-( 11 )) or global exploration (Eq. ( 6 )), which affects time costs. However, employing a fixed δ value compromises search precision compared to formula ( 5 ) (as detailed in Appendix Table 30 ).

Time consumption of the LSA algorithm under different parameters. a Time spent executing CEC 2014 functions with fixed number of iterations. b Time spent executing CEC 2014 functions with fixed number of evaluations. c Time spent executing CEC 2014 functions with different values of δ

## 3.6 Convergence analysis

## 3.6.1 Markov chain model of the LSA algorithm

Definition 1: The state of a learning agent and state space of a learning agent.

The state of a learning agent is composed of the position \(x\) and the global best position \(best\) , denoted as \(I = (x, best)\) , where \(x \in A\) , \(best \in A\) , \(f(best) \le f(x)\) , and \(A\) is the feasible solution space. The possible states of all learning agents constitute the state space of a learning agent, denoted as:

Definition 2: The state of a learning swarm and state space of a learning swarm.

The states of all individual learners within a learning swarm constitute the state of the learning swarm. The state of the \(m\) learners in the learning swarm at time \(t\) is denoted as \(s_{t} = (I_{1}^{t}, I_{2}^{t}, \ldots, I_{i}^{t}, \ldots, I_{m}^{t})\) , where \(I_{i}^{t}\) represents the \(i\)-th learner in the population at time \(t\) , and \(m\) is the total number of learners in the population. The collection of all possible states of the learning swarm forms the state space of the learning swarm, denoted as:

Definition 3: The state transition of individual learners.

For \(\forall I_{i} = (x_{i}, best_{i}) \in I\) and \(\forall I_{j} = (x_{j}, best_{j}) \in I\) , the state \(I_{i}\) of an individual learner transitions to another state \(I_{j}\) in one step, denoted as \(T_{I} (I_{i} ) = I_{j}\) .

Theorem 1: In the LSA algorithm, the probability of the state transition from state \(I_{i}\) to state \(I_{j}\) of an individual learner can be expressed as:

In the LSA algorithm, the algorithm primarily consists of two phases: the exploration phase and the exploitation phase. The exploitation phase is further comprised of two distinct update modes: “Active learning” and “Role models teaching”.

In each position update strategy, the state of an individual learner is formed by the position \(x\) and the global best position \(best\) . Therefore, the corresponding one-step transition probability also varies. Considering that the learner’s vector is multidimensional and the population forms a set of points in a multidimensional hyperspace, the process of the learner’s position change can be regarded as the transformation between points in this hyperspace.
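The agent state of Definitions 1-3 can be sketched concretely. This Python sketch is illustrative (class and function names are ours); it encodes the invariant \(f(best) \le f(x)\), which each state transition restores, using a sphere fitness function as a stand-in objective:

```python
from dataclasses import dataclass
import numpy as np

def sphere(x):
    """Stand-in objective f; any fitness function works here."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

@dataclass
class AgentState:
    """State I = (x, best) of a learning agent (Definition 1): the
    current position and the best position found so far."""
    x: np.ndarray
    best: np.ndarray

    def move(self, new_x, f=sphere):
        """One state transition T_I (Definition 3): update the position,
        and update best only if the new position is at least as good,
        preserving f(best) <= f(x) at the moment of improvement."""
        self.x = np.asarray(new_x, dtype=float)
        if f(self.x) <= f(self.best):
            self.best = self.x.copy()
```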

According to Definition 3 and the geometric interpretation of the LSA algorithm, the one-step transition probability from state \(I_{i}\) to another state \(I_{j}\) in the exploration phase is given by:

where, the one-step transition probability from the individual’s global best solution to another state is given by:

The probability of the learning individual transitioning from position \(x_{i}\) to position \(x_{j}\) through the exploration phase search strategy is:

The transition probability from state \(I_{i}\) to another state \(I_{j}\) through the active learning strategy is:

where, the probability of the learning individual transitioning from position \(x_{i}\) to position \(x_{j}\) in one step is:

The transition probability from state \(I_{i}\) to another state \(I_{j}\) through the Role models teaching strategy is:

Definition 4: The state transition of learning community.

For \(\forall s_{i} \in S,\forall s_{j} \in S\) , in the iteration of the LSA algorithm, the learning community transitions from state \(s_{i}\) to another state \(s_{j}\) in a single step, denoted as \(T_{S} (s_{i} ) = s_{j}\) . The transition probability for the learning community state \(s_{i}\) to transition to another state \(s_{j}\) in a single step is:

where, \(m\) represents the number of individuals in the population. This equation states that the one-step transition from learning group state \(s_{i}\) to state \(s_{j}\) is the transition of the individual states from all individuals in group space \(s_{i}\) to the corresponding individual states in \(s_{j}\) .

## 3.6.2 Convergence analysis of LSA algorithm

Definition 5: Markov chain.

In a stochastic process \(\{x_{n}, n \in T\}\) , let the parameter set \(T\) be a discrete time series, \(T = \{0, 1, 2, \ldots\}\) , where the set of possible values of \(x_{n}\) constitutes a discrete state space \(I = \{i_{1}, i_{2}, i_{3}, \ldots\}\) . If for any integer \(n \in T\) and any \(i_{1}, i_{2}, \ldots, i_{n+1} \in I\) , the conditional probability satisfies \(p\{x_{n+1} = i_{n+1} \mid x_{1} = i_{1}, \ldots, x_{n} = i_{n}\} = p\{x_{n+1} = i_{n+1} \mid x_{n} = i_{n}\}\) , then \(\{x_{n}, n \in T\}\) is termed a Markov chain.

Definition 6: Finite Markov chain.

If the state space \(I\) is finite, then the Markov chain is referred to as a finite Markov chain.

Definition 7: Homogeneous Markov chain.

\(p\{x_{n+1} = i_{n+1} \mid x_{n} = i_{n}\}\) represents the conditional probability that the system, in state \(i_{n}\) at time \(n\) , transitions to state \(i_{n+1}\) at time \(n+1\) . If this probability depends only on the states \(i_{n}\) and \(i_{n+1}\) and not on the time \(n\) , then the Markov chain is referred to as a homogeneous Markov chain.

Theorem 2: In the LSA algorithm, the state sequence \(\left\{ {s(n);n \ge 0} \right\}\) of the learning community is a finite homogeneous Markov chain.

According to Definition 4, in the state transition of the learning community there exist states \(\forall s(n) \in S, \forall s(n+1) \in S\) in the sequence \(\{s(n); n \ge 0\}\) , and the transition probability \(p(T_{S} (s(n)) = s(n + 1))\) is determined by the transition probabilities \(p(T_{S} (I_{i} (n)) = I_{j} (n + 1))\) of all learning individuals in the community. From Eqs. ( 15 ) to ( 22 ), the state transition probability of any individual in the learning community is related only to the control factor \(\delta\) , the states \((x_{i}, best)\) at time \(n\) , \(x_{average}\) , the random numbers \(r, r_{1}\) in [0,1], and \(\beta\) , and is independent of the time \(n\) . Based on the above analysis, the state transition probability \(p(T_{S} (s(n)) = s(n + 1))\) of the learning community depends only on the individual states at time \(n\) ; therefore, the sequence \(\{s(n); n \ge 0\}\) exhibits the Markov property.

From Eqs. ( 16 ) to ( 22 ), it can be seen that \(p(T_{S} (s(n)) = s(n + 1))\) is independent of the time \(n\) . Combining this with the Markov property established above, it can be inferred that the sequence \(\{s(n); n \ge 0\}\) is a homogeneous Markov chain.

When optimizing any problem on a computer, the variables describing the optimization problem are represented with finite precision, and the search space is finite. For any learning individual \(I_{i} = (x_{i}, best)\) , the dimensions of \(x_{i}\) and \(best\) are finite, and \(x_{i}\) and \(best\) are constrained to \([x_{\min }, x_{\max }]\) . Therefore, the space of learning individuals \(I\) is finite, and the learning community composed of \(m\) learning individuals is also finite.

Based on the three preceding observations, it can be inferred that the state sequence \(\{s(n); n \ge 0\}\) of the learning community is a finite homogeneous Markov chain.

## Convergence criterion

The LSA algorithm belongs to the category of stochastic search algorithms, thus in this paper, the convergence behavior of the LSA algorithm is determined using a convergence criterion based on random algorithms (Solis and Wets 1981 ).

For the optimization problem < A , f > , where A is the feasible solution space and f is the fitness function, suppose there is a stochastic optimization algorithm D whose result at the k -th iteration is \(x_{k}\) ; then the result of the next iteration is \(x_{k+1} = D(x_{k}, \zeta )\) , where \(\zeta\) represents solutions previously encountered during the iterative search process of algorithm D. The lower bound of the search is defined as:

where, \(v(x)\) represents the Lebesgue measure on the set \(x\) . The region of optimal solutions is defined as:

where, \(\varepsilon > 0\) and \(C\) are sufficiently large positive numbers. If the stochastic algorithm D finds one point in \(R_{\varepsilon ,M}\) , it can be considered that the algorithm has found an acceptable global optimal or approximately global optimal point.

Condition H1: \(f(D(x_{k}, \zeta)) \le f(x_{k})\) , and if \(\zeta \in A\) , then \(f(D(x_{k}, \zeta)) \le f(\zeta)\) .

Condition H2: For any subset \(B \subset A\) with \(v(B) > 0\) , \(\prod\limits_{k = 0}^{\infty } {(1 - u_{k} (B))} = 0\) , where \(u_{k} (B)\) is the probability measure of the k -th iteration search solution of algorithm D on the set \(B\) .

Theorem 3: Let \(f\) be measurable, and let \(A\) be a measurable subset of \(R^{n}\) . Suppose algorithm D satisfies conditions H1 and H2, and \(\{ x_{k} \}_{k = 0}^{\infty }\) is the sequence generated by algorithm D. Then \(\mathop {\lim }\limits_{k \to \infty } P(x_{k} \in R_{\varepsilon ,M} ) = 1\) , where \(R_{\varepsilon ,M}\) is the optimal region; i.e., algorithm D converges globally.

## LSA Algorithm Convergence

Theorem 4 : The LSA algorithm satisfies condition H1.

Proof: In the LSA algorithm, individual current optimal positions are updated at each iteration, denoted as follows.

Therefore, the LSA algorithm preserves the best position of the population at each iteration, satisfying condition H1.
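The elitist rule behind condition H1 can be sketched directly. This is an illustrative Python sketch (the function name is ours): replacing the recorded best only when the candidate is at least as good makes the best-fitness sequence monotonically non-increasing, which is exactly the property H1 requires:

```python
import numpy as np

def update_best(best, candidate, fitness):
    """Elitist preservation (condition H1): the recorded best position
    is replaced only when the candidate is at least as good, so the
    best fitness along the iteration sequence never increases."""
    return candidate if fitness(candidate) <= fitness(best) else best
```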

Definition 8: Set of optimal states of learners, denoted as G.

Let \(g^{*}\) be the optimal solution of the optimization problem < A , f > , and let \(G = \{ s = (x) \mid f(x) = f(g^{*}), s \in S\}\) denote the set of optimal states of learners. If \(G = S\) , then each solution in the feasible solution space is not only feasible but also optimal. In this case, optimization is meaningless, and the following discussions are based on \(G \subset S\) .

Definition 9: Absorbing state Markov chain.

Based on the population sequence of the LSA algorithm, consider the Markov chain \(\{ s(t),t \ge 0\}\) and the set of optimal states \(G \subset S\) . If \(\{ s(t),t \ge 0\}\) satisfies the conditional probability \(p\{ x_{k + 1} \notin G|x_{k} \in G\} = 0\) , then this Markov chain is called an absorbing state Markov chain.

Theorem 5: The Markov chain generated by the LSA algorithm is an absorbing state Markov chain.

In the optimization of LSA, the update of individuals adopts the mechanism of preserving the best individuals. That is, only when the fitness value of the current best individual is better than the fitness value of the original individual, will the original best individual be replaced, as shown in Eq. ( 26 ). This ensures that in each iteration of the algorithm’s evolution process, the newly generated individuals are not inferior to those generated before. Therefore, the conditional probability \(p\{ x_{k + 1} \notin G|x_{k} \in G\} = 0\) is satisfied, that is, the population sequence \(\{ s(t),t \ge 0\}\) of the LSA algorithm forms an absorbing state Markov chain.

Definition 10: Let the set \(D\) be a non-empty subset of the state space \(S\) . If for all \(i \in D\) and \(j \notin D\) the one-step transition probability satisfies \(p(T_{S}(i) = j) = 0\) , then \(D\) is called a closed set.

Definition 11: Let the set of optimal learning individual states be denoted as \(M\) , the set of optimal learning group states be denoted as \(G\) , and the global optimal solution of the optimization problem be denoted as \(best^{*}\) , then

where, \(I_{t}^{*} \in S\) represents the optimal learning individual state.

where \(S_{n}^{*} = (I_{1}^{*}, I_{2}^{*}, \ldots, I_{i}^{*}, \ldots, I_{m}^{*})\) represents the optimal state of the population.

Theorem 6: The set of optimal states for individual learning, denoted as \(M\) , is a closed set.

Let the learning individual state \(I_{i}^{n} = (x_{i}^{n}, best^{*})\) be an optimal state. According to the execution strategy of the LSA algorithm, the state at the next moment, \(I_{i}^{n+1} = (x_{i}^{n+1}, best^{*})\) , is also an optimal state. This can be concluded from formulas ( 16 )-( 22 ).

In other words, \(\forall I_{i}^{n} \in M, \forall I_{j}^{n+1} \notin M\) , \(p(I_{i}^{n} \to I_{j}^{n+1}) = 0\) . Therefore, the set \(M\) of optimal states for the individual learner is a closed set.

Theorem 7: In the LSA algorithm, the set of optimal states for the learning population, \(G\) , is a closed set.

\(\forall s_{i} \in G,\forall s_{j} \notin G,s_{j} \in S\) , for any step size \(l,l \ge 1\) , according to the Chapman-Kolmogorov equation, we can obtain:

where, \(s_{ru - 1} \in G,s_{ru} \notin G,1 \le u \le l\) , it can be inferred from Definition 4:

Since \(\exists I_{k}^{ru - 1} \in M, I_{k}^{ru} \notin M\) , we have \(f(I_{k}^{ru}) > f(I_{k}^{ru - 1}) = f(I^{*})\) . According to Eq. ( 17 ), \(p_{*} (best_{k}^{ru - 1} \to best_{k}^{ru} ) = 0\) , so \(p(T_{S} (s_{ru - 1} ) = s_{ru} ) = 0\) , which means the one-step transition probability from any state in \(G\) to any state outside \(G\) is zero. Therefore, \(G\) is a closed set in the space \(S\) .

Definition 12: For any \(n \ge 1\) , if \(best^{l} = best^{n}\) holds for all \(l \ge n + 1\) , then \(s(n) \in S\) is referred to as an absorbing state. A set \(H\) constructed from a single absorbing state is called a closed set.

Theorem 8: Let \(\{s(n), n \ge 1\}\) be a Markov chain representing the state sequence of a learning population, and let \(\Lambda = G \cup H\) be a closed set. When \(\sum\limits_{n = 1}^{\infty } p_{S}\{s(n)\} < \infty\) , then \(p(\mathop {\lim }\limits_{n \to \infty } s(n) \in \Lambda ) = 1\) .

Since \(G\) and \(H\) are both closed sets, \(\Lambda = G \cup H\) is also a closed set. If the learning population is in the set \(\Lambda\) at time \(n\) , then \(p_{S}\{s(n+1) \notin \Lambda \mid s(n) \in \Lambda\} = 0\) . Therefore,

If the learning population is in state \(s(n) = (I_{1}^{n}, I_{2}^{n}, \ldots, I_{m}^{n}) \notin \Lambda\) , then \(\forall i \in [1,m]\) , \(I_{i}^{n} \notin G\) and \(I_{i}^{n} \notin H\) .

Summing over \(n\) , we obtain

So, when \(n \to \infty\) and \(\sum\limits_{n = 1}^{\infty } p_{S}\{s(n)\} < \infty\) , \(p(\mathop {\lim }\limits_{n \to \infty } s(n) \in \Lambda ) = 1 - p(\mathop {\lim }\limits_{n \to \infty } s(n) \notin \Lambda ) = 1\) .

Theorem 9 : The LSA algorithm converges to the global optimum.

Proof: According to Theorem 4, the LSA algorithm satisfies condition H1. By Theorem 8, the probability that the LSA algorithm fails to reach the set of optimal states over an infinite number of iterations is zero; thus \(\prod\limits_{k = 0}^{\infty } {(1 - u_{k} (B))} = 0\) holds, satisfying condition H2. According to Theorem 3, it can be concluded that the LSA algorithm converges globally.

## 3.7 The difference between TLBO and LSA

Teaching–learning-based optimization (TLBO) and Learning Search Algorithm (LSA) share common features as population-based algorithms inspired by human learning behavior. However, they diverge significantly in several aspects. Firstly, TLBO draws inspiration from the classroom teaching model, where knowledge transfer occurs bidirectionally: students learn from teachers, and teachers impart knowledge to students through direct instruction. Conversely, LSA predominantly acquires knowledge through individual learning from historical experiences (including both past and contemporary sources) and individuals emulating role models around them, with these role models disseminating knowledge to those receptive to specific learning aspects. Hence, the learning mechanisms of the two algorithms differ. Secondly, TLBO adopts a uniform knowledge dissemination approach throughout the entire population, overlooking individual physiological traits, which can hinder knowledge acquisition. In contrast, LSA, during its developmental phase, fully integrates learners’ unique attributes into the acquisition process, leveraging their ability to absorb knowledge from role models and learn from exceptional individuals. Thirdly, TLBO lacks distinction between global exploration and local exploitation phases, with all individuals following uniform learning and teaching strategies. In contrast, LSA embodies both phases and achieves a harmonious balance between global exploration and local exploitation through adaptive adjustment of balancing factors. Lastly, owing to its diverse learning strategies, LSA surpasses TLBO in search results for optimization problems like Unimodal Functions, Multimodal Functions, Hybrid Functions, and Composition Functions. This comprehensive superiority stems from LSA’s multifaceted learning approaches.

## 4 Experiment and simulation

To evaluate the effectiveness of the proposed approach, we conducted a comparative analysis against other state-of-the-art algorithms. The LSA algorithm was implemented in MATLAB R2016a, and the experiments were run on a personal computer equipped with an AMD Ryzen 7 4700G with Radeon Graphics and 12 GB of main memory. The study employed a diverse set of test problems: 30 IEEE CEC 2014 and 10 IEEE CEC 2020 benchmark functions, 6 challenging real-world engineering design optimization problems, and 15 feature selection datasets from UCI. Table 2 presents the collection of 40 minimization functions from CEC 2014 and CEC 2020, a demanding set of real-parameter optimization benchmarks that effectively mimics real-world optimization problems. To assess the performance of the LSA, we selected 9 prominent swarm intelligence optimization algorithms and 11 powerful recently developed algorithms for comparison. The population size (nPop) was set to 50, and the number of function evaluations (FEs) was set to 1000 × nPop. The search dimension was fixed at 10; the parameter values of the comparison algorithms are listed in Table 3.

To ensure the reliability of our results, we executed each test 20 times independently and highlighted the best-performing outcomes in the statistical tables. Furthermore, to investigate the statistical significance of the results, we employed the Wilcoxon test at a significance level of 0.05. The symbols “+/=/−” indicate whether the LSA algorithm performs better than, equal to, or worse than the comparison algorithm.
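The per-function bookkeeping described above can be sketched as follows. This is an illustrative re-implementation of the two-sided Wilcoxon rank-sum test with a normal approximation (no tie correction), not the authors' code, and it assumes minimization (lower fitness is better):

```python
import math

def average_ranks(values):
    """Rank values from 1..n, assigning tied entries their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_symbol(lsa_runs, rival_runs, alpha=0.05):
    """Return '+', '=', or '-' for LSA vs. one rival on one benchmark function."""
    n1, n2 = len(lsa_runs), len(rival_runs)
    r = average_ranks(list(lsa_runs) + list(rival_runs))
    w = sum(r[:n1])                         # rank sum of the LSA sample
    mean_w = n1 * (n1 + n2 + 1) / 2
    var_w = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean_w) / math.sqrt(var_w)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p
    if p >= alpha:
        return "="
    lsa_better = sum(lsa_runs) / n1 < sum(rival_runs) / n2  # minimization
    return "+" if lsa_better else "-"
```

Feeding in 20 independent run results per algorithm yields one symbol per function, which is then tallied into the win/tie/loss counts reported in the tables.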

## 4.1 Results of IEEE CEC 2014 and CEC 2020

This section presents a comprehensive analysis of the proposed LSA algorithm as well as various state-of-the-art original and enhanced algorithms with improved search performance. The IEEE CEC 2014 and CEC 2020 benchmark functions have been chosen as the evaluation benchmarks for this study.

## 4.1.1 The search process experiment of LSA

In this subsection, the proposed LSA algorithm is utilized to solve a range of benchmark functions, and a detailed analysis of the optimization process is conducted.

Figure 8(a) illustrates the three-dimensional mathematical models of the benchmark functions. Notably, the model of F2 is relatively simple, while the remaining four functions are considerably more complex. Figure 8(b) shows the search trajectory of the proposed algorithm from a top-down perspective: the larger blue dots mark the best positions found during the search, while the other colored dots denote the positions of the search individuals across iterations. Figure 8(c) depicts the progression of the average fitness value of the entire population. It is evident from Fig. 8(b) that, for both simple and complex functions, many historical search trajectories are concentrated near the global optimum, ensuring a thorough local search; at the same time, several discrete colored points are scattered across other regions, demonstrating the algorithm's ability to explore globally and avoid being trapped in local optima. The convergence of the average fitness curve for all mathematical models in Fig. 8(c) underscores the robust search capability of the LSA algorithm. Figure 8(e) shows that the LSA algorithm achieves a balance between Exploration and Exploitation over time (the computational methods for Exploration and Exploitation follow the literature (Cheng et al. 2014; Hussain et al. 2019)). As iterations progress, Exploitation steadily approaches 100 while Exploration declines toward 0. For functions F2, F8, and F15, population diversity decreases rapidly, showcasing the algorithm's strong exploitation ability. Conversely, for hybrid and composition functions such as F25 and F27, population diversity fluctuates notably and consistently remains at a higher level (population diversity is also computed following Cheng et al. 2014 and Hussain et al. 2019), demonstrating the algorithm's strong global exploration capability (as shown in Fig. 8(d)).
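The diversity-based exploration/exploitation percentages plotted in Fig. 8(d)-(e) can be sketched from the dimension-wise diversity measure attributed above to Cheng et al. (2014) and Hussain et al. (2019); the code below is our paraphrase of that measure, not the authors' implementation:

```python
def diversity(population):
    """Mean absolute deviation from the dimension-wise median, averaged over dimensions."""
    n, dim = len(population), len(population[0])
    total = 0.0
    for j in range(dim):
        col = sorted(ind[j] for ind in population)
        median = col[n // 2] if n % 2 else (col[n // 2 - 1] + col[n // 2]) / 2
        total += sum(abs(median - ind[j]) for ind in population) / n
    return total / dim

def xpl_xpt(div_history):
    """Exploration% and exploitation% per iteration, relative to peak diversity."""
    div_max = max(div_history)
    xpl = [100.0 * d / div_max for d in div_history]
    xpt = [100.0 * abs(d - div_max) / div_max for d in div_history]
    return xpl, xpt
```

As the population collapses onto the optimum, diversity falls, so exploitation% climbs toward 100 while exploration% drops toward 0, matching the trend described above.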

Visualization of the algorithm’s search process. a 3-dimensional mathematical models of the function. b The search trajectory during the optimization process is displayed from a top view. c Changes in the average fitness value of the population. d Population diversity. e Exploration and exploitation capabilities of the algorithm

## 4.1.2 Comparison with well-known original algorithms

This subsection tests the performance of the target algorithm LSA against other well-known original algorithms, including GWO (Mirjalili et al. 2014; Long et al. 2020), HHO (Chen et al. 2020b), HPO (Naruei et al. 2022), MFO (Mirjalili 2015b), SSA (Mirjalili et al. 2017; Abualigah et al. 2020), BWOA (Hayyolalam and Kazem 2020), SOA (Ramshanker and Chakraborty 2022), TLBO (Rao et al. 2011), and TSA (Kaur et al. 2020). In the CEC 2014 benchmark functions, F1-F3 are unimodal functions, used to test the exploitation ability of the algorithm. F4-F15 are multimodal functions with multiple local optima: if an algorithm's exploration ability is insufficient, it will converge prematurely or even become trapped in a local optimum, resulting in low convergence accuracy. F16-F21 are hybrid functions, and F22-F30 are composition functions; these two types are more challenging than the previous two and therefore better reflect the search performance of an algorithm. F31-F40 are the CEC 2020 benchmark functions. Table 4 presents the mean and variance results of each algorithm on the CEC 2014 and CEC 2020 benchmark functions, with the best results highlighted in bold. Figure 9 shows the convergence behavior of the LSA and the comparison algorithms on 12 benchmark functions. Figure 10 illustrates the Friedman rank sum results of each algorithm. Table 5 and Fig. 11 show the results of the Wilcoxon rank-sum test at the 5% significance level.


The convergence curves of LSA and other original algorithms on 12 benchmark functions

The results of the Friedman test for the LSA and other original algorithms

The graphical representation of the Wilcoxon signed rank test results of the LSA algorithm compared to the original algorithms

It can be seen from Table 4 that LSA ranks first on 24 functions and ranks second on six functions, with an overall rank of #1 (average rank AVG of 2.1). The results of the p values in Table 5 indicate that compared with the algorithms GWO, HHO, HPO, SSA, BWOA, MFO, SOA, TSA, and TLBO, the LSA obtained 27, 37, 35, 33, 38, 36, 36, 40, and 24 victories, showing a significant difference. Although the LSA algorithm obtained 3, 5, and 7 optimal results when solving Unimodal Functions (F1-F3), Multimodal Functions (F4-F16), and Hybrid functions (F17-F22) problems respectively, its overall optimal result rate is only (3 + 5 + 7)/22 = 68.18%. Moreover, when solving Composition Functions (F23-F30) problems, LSA only achieved 2 optimal results, whereas SOA obtained 3 optimal results. This indicates that SOA has certain advantages in solving Composition Functions problems. This also confirms the “No Free Lunch” (NFL) theorem, demonstrating that there is no one-size-fits-all solution for all optimization problems. However, compared to other algorithms, the LSA algorithm obtains the most optimal results in solving these two types of problems. The bubble chart in Fig. 11 vividly demonstrates the statistical superiority of the proposed algorithm over these well-known original algorithms in terms of search results.

According to the results of the Friedman rank sum test shown in Fig. 10 , the LSA algorithm achieved the highest rank among all algorithms, with a rank mean of 2.09. Overall, these experimental results indicate that the LSA performs superiorly on the CEC 2014 and CEC 2020 functions and is stronger than other comparison algorithms.
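The Friedman mean rank reported above (2.09 for LSA) is the per-function rank of each algorithm, averaged over all benchmark functions. A minimal, illustrative sketch of that bookkeeping (lower error ranks first; ties share their average rank):

```python
def friedman_mean_ranks(results):
    """results[f][a]: mean error of algorithm a on function f (lower is better).
    Returns the mean rank of each algorithm across all functions."""
    n_funcs, n_algs = len(results), len(results[0])
    totals = [0.0] * n_algs
    for row in results:
        order = sorted(range(n_algs), key=lambda a: row[a])
        i = 0
        while i < n_algs:
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for ties, 1-based
            for k in range(i, j + 1):
                totals[order[k]] += avg
            i = j + 1
    return [t / n_funcs for t in totals]
```

The algorithm with the smallest mean rank is ranked first overall, which is how the Friedman orderings in Figs. 10 and 13 are produced.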

Figure 9 illustrates the convergence curves on the benchmark functions for GWO, HHO, HPO, SSA, BWOA, MFO, SOA, TSA, TLBO, and the proposed LSA algorithm. Based on the convergence curves of functions F1, F3, F7, F13, F17, F18, F20, F22, F25, F32, F35, and F37, the LSA algorithm achieves the best fitness values and the fastest convergence speed on these unimodal, multimodal, hybrid, and composition functions, whereas the other algorithms fail to reach global solutions because they become trapped in local optima. The experimental results therefore demonstrate that LSA effectively applies its exploitation capability to unimodal functions and its exploration capability to multimodal functions; the incorporation of the exploration and exploitation stages ensures the global convergence of the LSA algorithm.

The stability of an algorithm is also an important indicator of its quality. To examine reliability, we selected F1, F3, F10, F18, F21, F26, F21, F30, and F34 as test subjects. From the box plots in Fig. 12, it can be seen that the boxes of the LSA algorithm are the flattest, indicating that the LSA algorithm has good stability.

The box plot of the results by well-known original algorithms

## 4.1.3 Comparison of LSA with recently proposed advanced improved algorithms

In this subsection, LSA is compared with 12 high-performance improved algorithms proposed in recent years, including GNHGWO (Akbari et al. 2021), GWOCS (Wang et al. 2022b), HPSOBOA (Zhang et al. 2020), NCHHO (Dehkordi et al. 2021), PSOBOA (Zhang et al. 2020), HFPSO (Aydilek 2018), FDB-AGDE (Guvenc et al. 2021), dFDB-MRFO (Kahraman et al. 2022), AFDB-SFS (Duman et al. 2023), FDB-TLABC (Duman et al. 2022), TSALSHADE (Abdesslem layeb 2023), and ISSA (Dorian Sidea 2024).

Table 6 shows the experimental results of these 12 algorithms on the CEC 2020 functions, and Table 7 presents the p-value results of the Wilcoxon test. It can be seen from Table 6 that the LSA obtained an AVG value of 2.8, ranking first overall. Meanwhile, the p-values in Table 7 and Fig. 16 indicate that compared to the algorithms GNHGWO, GWOCS, HPSOBOA, NCHHO, HFPSO, PSOBOA, FDB-AGDE, dFDB-MRFO, AFDB-SFS, FDB-TLABC, and TSALSHADE, the LSA obtained 7, 6, 10, 10, 10, 7, 3, 6, 4, 6, and 6 victories, respectively, showing a significant difference. The Friedman rank sum value of the LSA in Fig. 13 is 2.8, the smallest among all algorithms; the ranking results indicate that the LSA ranks first. These experimental results show that the LSA achieves superior performance on the CEC 2020 functions and is stronger than the other algorithms. The excellent performance of the LSA makes it a new optimizer that can be applied to complex, realistic optimization problems.

Friedman test was performed to compare the results of the LSA with other enhanced algorithms

Appendix Table 25 presents the results of the LSA and ISSA algorithms on the CEC 2014 problems. It can be observed that the LSA algorithm won on 28 test functions; whether on unimodal, multimodal, hybrid, or composition functions, the LSA algorithm comprehensively outperformed the ISSA algorithm.

The excellent convergence performance of the LSA algorithm is demonstrated in Fig. 14 , compared with GNHGWO, GWOCS, HPSOBOA, NCHHO, HFPSO, PSOBOA, FDB-AGDE, dFDB-MRFO, AFDB-SFS, FDB-TLABC, and TSALSHADE. The main reason for this achievement is the combined effect of the LSA algorithm’s local exploitation strategies (Active learning strategy and Role models teaching strategy) and global exploration strategy.

LSA and other improved algorithms’ convergence curves on 9 CEC 2020 benchmark functions

In summary, LSA shows strong competitiveness compared with the top-performing meta-heuristic algorithms put forth in recent years. This indicates that our proposed LSA obtains good results in dealing with numerical optimization problems.

To test the stability of the LSA algorithm, we selected F31, F34, F35, F36, F37, and F38 as test subjects. As shown in Fig. 15, among the distributions of optimal solutions, the box plot of the LSA algorithm is the flattest, indicating its good stability (Fig. 16).

The box plot of the results by recently proposed advanced improved algorithms for CEC 2020

The graphical representation of the Wilcoxon signed rank test results of the LSA algorithm compared to the other improved algorithms

## 4.1.4 Consumption time cost analysis

Tables 8 and 9 present the time consumption (unit: second) of each algorithm when solving the CEC2014 and CEC2020 functions. To more intuitively show the results, the proportion of time consumption is shown in Figs. 17 and 18 .

The proportion of cost time for LSA compared to the original algorithms

The proportion of cost time for LSA compared to the improved algorithms

It can be seen from Table 8 that, compared with the original benchmark algorithms, the average time consumption of the LSA when solving the CEC 2014 and CEC 2020 problems is 0.305 s, placing it in the middle of the compared algorithms. Figure 17 illustrates the percentage of time consumed by each algorithm on the different test functions, reflecting the execution-time performance of the proposed algorithm more intuitively. Meanwhile, Table 9 and Fig. 18 show that, compared with the newly proposed improved algorithms, the average time consumption of the LSA on the CEC 2014 and CEC 2020 problems ranks 6th. The relatively high time expense of the target algorithm is mainly due to the cost of determining the number of beneficiaries in the exemplar instructing process, which is calculated using Formula (11).

## 4.1.5 Parameter sensitivity analysis

Different parameters may influence the performance of the algorithm in different ways. To explore the influence of the parameter sub (the number of subjects taught by role models) on the performance of the LSA, this paper tested sub values of 2, 3, 4, 5, 6, 7, and 8 while keeping all other parameters unchanged. In addition, we examined the impact of different values of the parameters \(y^{0}\) and \(\delta_{init}\) on the algorithm. The test cases were taken from CEC 2014 and CEC 2020, and each was run 20 times independently; the final results are presented in Table 10, with the optimal results highlighted in bold.

It can be seen from Table 10 that when sub equals 3, the algorithm obtains the largest number of average optimal values, indicating that the number of subjects taught by role models must be kept within a certain range for the teaching effect to reach its best level. This is consistent with the analysis in Sect. 3.3.3.

From the statistical results in Appendix Table 23 , it can be observed that the best results were achieved when \(y^{0} = 0.75\) , with a total of 21 instances. On the other hand, the least favorable results were obtained when \(y^{0} = 0.5\) . Therefore, the value of \(y^{0}\) does have some impact on the target algorithm, although the fluctuation in the results of the problem-solving process is not significant.

According to the statistical results from Appendix Table 24 , it can be observed that when \(\delta_{init} = 1\) , the proposed algorithm achieves the best results with a count of 15, outperforming the cases where \(\delta_{init} > 1\) . The analysis suggests that this improvement can be attributed to the excessive global exploration carried out by the algorithm during the iterative process when \(\delta_{init} > 1\) , which consequently undermines the algorithm’s capability for local exploitation.

## 4.2 Results of real-world constrained optimization problems

The design of many real-world engineering structures is usually limited by various conditions, so when solving such problems engineers must handle additional constraints. To test the LSA algorithm on real engineering optimization problems, this paper selected 6 real-world engineering design problems, such as the speed reducer design problem.
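The paper does not spell out how constraint violations are handled when LSA is applied to these problems; a common choice is a static penalty that inflates the objective by the squared violations of the inequality constraints \(g_i(x) \le 0\). A minimal sketch under that assumption (the penalty weight `rho` is illustrative, not from the paper):

```python
def penalized_objective(f, inequality_constraints, x, rho=1e6):
    """Static-penalty wrapper: f(x) plus rho times the squared violation
    of each constraint g(x) <= 0 (only positive g values count)."""
    violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
    return f(x) + rho * violation
```

A feasible point pays no penalty, while an infeasible one is pushed back toward the feasible region, so an unconstrained optimizer such as LSA can be applied directly to the wrapped objective.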

## 4.2.1 Speed Reducer Design (SRD)

The goal of SRD is to design a reducer with a minimum weight. SRD contains seven design variables and eleven inequality constraints. A detailed description of this problem can be found in the reference (Dhiman and Kumar 2017 ). The mathematical model of the problem is as follows:

where \(2.6 \le x_{1} \le 3.6,0.7 \le x_{2} \le 0.8,17 \le x_{3} \le 28,7.3 \le x_{4} \le 8.3,7.3 \le x_{5} \le 8.3,2.9 \le x_{6} \le 3.9,5 \le x_{7} \le 5.5\) .

When testing the LSA on this problem, this paper selected some well-known meta-heuristic algorithms proposed in recent years, including STOA (Dhiman and Kaur 2019), TQA (Chen et al. 2022), HS (Dhiman and Kumar 2017), ESMA (Örnek et al. 2022), GSA (Karami et al. 2021), EJAYA (Zhang et al. 2021b), FDB-AGDE (Guvenc et al. 2021), dFDB-MRFO (Kahraman et al. 2022), AFDB-SFS (Duman et al. 2023), FDB-TLABC (Duman et al. 2022), and TSALSHADE (Abdesslem layeb 2023), as comparison algorithms. Table 11 shows the statistical results of the LSA and comparison algorithms for solving this problem. It can be seen from Table 11 that the LSA obtained a result of 2986.064 (consistent with the results of FDB-AGDE, dFDB-MRFO, AFDB-SFS, and FDB-TLABC), which is the best among all compared algorithms.

## 4.2.2 The Tension/Compression Spring Design (TCSD)

In engineering design, besides optimizing the objective function of the product's mathematical model, designers must also satisfy the corresponding constraints. TCSD is a classic engineering design problem whose goal is to minimize the weight of the designed product. The problem has three variables and four inequality constraints (Faramarzi et al. 2020), and its mathematical model is as follows:

where \(0.05 \le x_{1} \le 2,0.25 \le x_{2} \le 1.3,2 \le x_{3} \le 15\) .

Various intelligent algorithms have been used to solve this engineering design problem, such as EO (Faramarzi et al. 2020), RL-BA (Meng et al. 2019), DDAO (Ghafil and Jármai 2020), SDO (Zhao et al. 2019), AFA (Dhrubajyoti et al. 2021), mGWO (Shubham and Kusum 2020), PFA (Yapici and Cetinkaya 2019), GCHHO (Song et al. 2021), VAGWO (Farshad et al. 2022), ExPSO (Khelil et al. 2022), TEO (Kaveh and Dadras 2017), QS (Zhang et al. 2018), FDB-AGDE (Guvenc et al. 2021), dFDB-MRFO (Kahraman et al. 2022), AFDB-SFS (Duman et al. 2023), FDB-TLABC (Duman et al. 2022), and TSALSHADE (Abdesslem layeb 2023). Table 12 presents the results of the LSA and the above comparison algorithms on this problem, with the best optimization results marked in bold. It can be seen from Table 12 that in terms of Best, Mean, Worst, and Std, the LSA obtains the best result. Table 13 indicates that the best result of the LSA for this engineering problem is 0.009872, with \(x_{1}\), \(x_{2}\), and \(x_{3}\) equal to 0.05, 0.374433, and 8.546567, respectively.

## 4.2.3 Pressure Vessel Design (PVD)

PVD is another classic constrained optimization problem with four optimization variables and four constraints. Its goal is to minimize the total cost of materials in forming and welding cylindrical vessels. The mathematical model is as follows:

where \(0 \le x_{1} ,x_{2} \le 100,10 \le x_{3} ,x_{4} \le 200\) .

To explore the performance of the LSA in solving PVD, this paper selected some recently proposed high-performance improved meta-heuristic algorithms as comparison algorithms, including BIANCA (Montemurro et al. 2013), G-QPSO (Santos Coelho 2010), HAIS-GA (Coello and Cortés 2004), CB-ABC (Brajevic 2015), NHAIS-GA (Bernardino et al. 2008), DEC-PSO (Chun et al. 2013), T-Cell (Aragón et al. 2010), FDB-AGDE (Guvenc et al. 2021), dFDB-MRFO (Kahraman et al. 2022), AFDB-SFS (Duman et al. 2023), FDB-TLABC (Duman et al. 2022), and TSALSHADE (Abdesslem layeb 2023). It can be seen from Tables 14 and 15 that the LSA, FDB-AGDE, dFDB-MRFO, AFDB-SFS, and FDB-TLABC achieve the best optimization result, with an objective function value of 5885.333, which is clearly better than those of the other comparison algorithms.

## 4.2.4 Three-bar Truss Design (TTD)

The TTD problem is another classic minimization problem in engineering design, and its structure can be found in the reference (Ghasemi et al. 2022). It has 3 variables and 4 inequality constraints, and its goal is to minimize the weight of the three-bar truss.

where \(0 \le x_{1} \le 1,0 \le x_{2} \le 1,\) \(l = 100cm,P = kN/cm^{2} ,\sigma = 2kN/cm^{2}\) .

Table 16 shows the statistical results of each algorithm to solve the TTD problem, where the best results are highlighted in bold. According to the statistical results, LSA, FDB-AGDE, dFDB-MRFO, AFDB-SFS and FDB-TLABC obtained the best optimal value (263.8523) among all the 15 comparison algorithms.

## 4.2.5 Cantilever Beam Design (CBD)

The CBD is a civil engineering structural design problem consisting of 5 hollow elements, each of which has an equal thickness. Its objective function is to minimize the weight of the cantilever beam. The mathematical model of this problem is represented below:

where \(b = (b_{1} ,b_{2} ,b_{3} ,b_{4} ,b_{5} ) = (67,37,19,7,1)\) and \(0.01 \le x_{i} \le 100,i = 1,...,5\) .

To test the ability of the proposed LSA to solve this problem, this paper selected 10 algorithms, STOA (Dhiman and Kaur 2019), TQA (Chen et al. 2022), GCA_I (Kumar et al. 2020), GCA_II (Kumar et al. 2020), SMA (Li et al. 2020c), FDB-AGDE (Guvenc et al. 2021), dFDB-MRFO (Kahraman et al. 2022), AFDB-SFS (Duman et al. 2023), FDB-TLABC (Duman et al. 2022), and TSALSHADE (Abdesslem layeb 2023), as comparison algorithms. Table 17 presents the statistical results of each algorithm on this problem, with the best results highlighted in bold. From Table 17, it can be observed that LSA, FDB-AGDE, dFDB-MRFO, AFDB-SFS, and FDB-TLABC exhibit the most favorable optimization outcomes; in other words, these algorithms achieved the best optimal values among all the evaluated approaches.

## 4.2.6 Car Side Impact Design (CSID)

The goal of CSID is to minimize weight; the problem involves 11 variables and 10 inequality constraints, and a detailed description can be found in the reference (Huang et al. 2015). EJAYA (Zhang et al. 2021b), TLCS (Huang et al. 2015), AOSMA (Naik et al. 2021), WOAGWO (Mohammed and Rashid 2020), PGJAYA (Yu et al. 2019), ERao-1 (Jian and Zhu 2021), CLJAYA (Zhang and Jin 2022), FDB-AGDE (Guvenc et al. 2021), dFDB-MRFO (Kahraman et al. 2022), AFDB-SFS (Duman et al. 2023), FDB-TLABC (Duman et al. 2022), and TSALSHADE (Abdesslem layeb 2023) were selected as comparison algorithms for this problem. The mathematical model of this problem is as follows.

where \(0.5 \le x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} ,x_{6} ,x_{7} \le 1.5\) , \(0.192 \le x_{8} ,x_{9} \le 0.345\) , \(- 30 \le x_{10} ,x_{11} \le 30\) .

The statistical results of LSA on this problem are presented in Table 18 . It can be seen from this table that the optimal solution of LSA is 22.842, which is consistent with the same results by FDB-AGDE, dFDB-MRFO, AFDB-SFS, and FDB-TLABC algorithms. The values corresponding to each variable are 0.5, 1.116, 0.5, 1.302, 0.5, 1.5, 0.5, 0.964338, 1.000, -19.577, and 3.73E-07. These results indicate that the LSA algorithm has strong competitiveness.

## 4.3 Application to real-world optimization of feature selection

In this subsection, the proposed LSA is applied to the feature selection problem to verify its performance in this domain. Feature selection means choosing, according to a target criterion, a feature subset with good discriminating power from the full feature set, so it requires a specific feature evaluation function. The K-nearest neighbor (KNN) algorithm is a supervised classification technique; because it is easy to implement and fast to run, it is often chosen as the classifier in wrapper-based feature selection. The fitness function used in feature selection has two goals: a small number of features and a minimal classification error; the ideal solution achieves the minimum error with the fewest features. In this paper, the following objective function is adopted:

where, \(acc\) represents the accuracy of KNN classification, \(N\) represents the total number of features, and \(N_{i}\) represents the number of features selected by the i-th candidate solution. \(\alpha \in [0,1]\) denotes the weight, \(\beta = 1 - \alpha\) . According to the literature (Xu et al. 2023 ), the values of \(\alpha\) and \(\beta\) are set to 0.99 and 0.01, respectively.
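This weighted combination of classification error and subset size can be written out explicitly; the sketch below is our reconstruction of a standard wrapper fitness consistent with the symbols above (\(\alpha = 0.99\), \(\beta = 1 - \alpha\)), not the paper's verbatim formula:

```python
def feature_selection_fitness(acc, n_selected, n_total, alpha=0.99):
    """Weighted sum of classification error (1 - acc) and the fraction of
    features kept; lower is better."""
    beta = 1.0 - alpha
    return alpha * (1.0 - acc) + beta * (n_selected / n_total)
```

With \(\alpha = 0.99\), accuracy dominates: keeping 3 of 10 features at 90% accuracy scores 0.99 × 0.1 + 0.01 × 0.3 = 0.102, so a small subset only wins when it does not noticeably hurt the classifier.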

The tests above show that the LSA is suited to continuous optimization problems. To apply it to feature selection, the value of each search individual must therefore be converted into a discrete value of 0 or 1. Denoting the j-th dimension value of the i-th individual as \(x_{i,j}\), the conversion is shown below:

where, \(rand \in (0,1)\) is a random number.
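A conversion of this kind is commonly realized with an S-shaped transfer function; the sigmoid form below is an illustrative assumption consistent with the use of \(rand \in (0,1)\) above, with `rand` passed in explicitly so the rule is deterministic:

```python
import math

def binarize(x_ij, rand):
    """Map a continuous position value to 0/1: select the feature (1) when
    the sigmoid of x_ij exceeds the random draw rand in (0, 1)."""
    s = 1.0 / (1.0 + math.exp(-x_ij))  # S-shaped transfer function
    return 1 if s > rand else 0
```

Large positive positions are almost always mapped to 1 (feature selected) and large negative ones to 0, while values near zero are decided by the random draw.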

To verify the effect of the LSA in feature selection, this paper experimented with the LSA on 15 UCI public datasets. These 15 test cases have different sample numbers (from 72 to 846), and different feature numbers (from 8 to 30). Table 19 presents the basic information of these 15 datasets. The original dataset can be downloaded from the UCI machine learning website http://archive.ics.uci.edu/ml/index.php .

Five algorithms were selected for comparison, including BGWO (Emary et al. 2016 ), CCSA (Zamani et al. 2019 ), SCA (Mirjalili 2016 ), SSA (Xue and Shen 2020 ), and WOA (Hayyolalam and Kazem 2020 ). The population size was set to 30, each test case was run 20 times independently, and the average value was taken as the statistical value. All other parameters were kept the same as their corresponding references. Tables 20 , 21 , and 22 present the feature selection results of each algorithm on the UCI dataset.

Table 20 shows the average fitness value of each algorithm, with the optimal results shown in bold. It can be seen from Table 20 that, except for the four datasets Breast, Fertility, WDBC, and Vehicle, LSA obtained the best average fitness value among all compared feature selection algorithms; on these 4 datasets, the performance of the LSA is second only to BGWO. Although the feature selection capabilities of CCSA, SCA, SSA, and WOA are strong, their results are still significantly worse than those of the LSA. In terms of fitness value, the six algorithms rank as LSA, BGWO, WOA, SCA, CCSA, and SSA.

Table 21 shows the classification error rate of the LSA and the comparison algorithms on each dataset. It can be seen that the classification error rate of the LSA on the Ceramic and Audit-2 datasets is 0. Across the 15 test datasets, LSA ranked first on 12 and second on the remaining 3, demonstrating the clear superiority of the LSA algorithm. The features of the BreastTissue and Vehicle datasets are difficult for all algorithms to distinguish, as every algorithm's error rate on them exceeds 25%.

Table 22 presents the average number of features selected by each algorithm. It was found that no algorithm obtained an absolute advantage in the number of features. The main reason is that the weight of the selected feature number in the fitness function is relatively small. As a result, although some algorithms select a few features, their classification accuracy is low. The above analysis indicates that the LSA has strong competitiveness in feature selection.

Figure 19 shows the convergence curves of each algorithm when solving the feature selection problem. It can be seen from Fig. 19 that the LSA has high convergence accuracy and speed when solving such problems.

The convergence curves of all algorithms for feature selection on 9 UCI datasets

## 5 Conclusion

This paper introduces a novel learning search algorithm (LSA) designed to address optimization problems efficiently and accurately. In the global exploration stage, the algorithm leverages historical knowledge and up-to-date community information to guide the search direction, enhancing its global search capability. In the local exploitation stage, the algorithm employs the teaching behavior and direction of the role models within the population to enhance the learning capability of the entire population. By dynamically adapting the control factor, the algorithm strikes a balance between exploration and exploitation, avoiding local optima and improving convergence speed. Experimental results vividly demonstrate the LSA's search process for the optimal solution. First, 40 CEC 2014 and CEC 2020 benchmark functions are used for comparative testing against well-known original algorithms and recently proposed high-performing improved algorithms; statistical analysis and the Wilcoxon signed rank test substantiate the LSA's commendable performance and robust competitiveness relative to other meta-heuristic algorithms. Furthermore, six engineering design experiments underscore the LSA's efficacy in solving real-world engineering applications with constraints. Finally, the LSA is applied to the feature selection problem, and the experimental results on 15 UCI datasets further verify that the proposed algorithm performs significantly better than the other methods in terms of classification accuracy and fitness value.

Although this study applied the LSA algorithm to continuous single-objective optimization, real-world constrained optimization, and feature selection, multi-objective problems were not investigated. Many practical decision-making problems involve multiple criteria; for example, resource scheduling in cloud computing encompasses objectives such as minimizing completion time and cost while maximizing profit. Therefore, in the near future we intend to extend the LSA algorithm to multi-objective optimization problems. Additionally, we aim to incorporate discretization methods into the LSA algorithm so that it can handle discrete optimization problems such as resource scheduling.

In future work, we can employ adaptive mechanisms to adjust the parameters and operators of the algorithm, enabling it to adapt automatically and improve its performance on different problems. We can also hybridize metaheuristic algorithms with other optimization algorithms, machine learning methods, and so on, to enhance their performance and adaptability. Moreover, the LSA can be extended to optimization problems in other domains, such as neural networks, gene feature selection, shop floor scheduling, and big data applications.

## Data availability

Data is provided within the manuscript.

Abd Elaziz M, Dahou A, Abualigah L, Yu L, Alshinwan M, Khasawneh AM, Lu S (2021) Advanced metaheuristic optimization techniques in applications of deep neural networks: a review. Neural Comput Appl 33(21):14079–14099


Abdel-Basset M, El-Shahat D, Jameel M et al (2023) Exponential distribution optimizer (EDO): a novel math-inspired algorithm for global optimization and engineering problems. Artif Intell Rev:1–72

Abdesslem layeb (2023) TSALSHADE: improved LSHADE algorithm with tangent search. MATLAB central file exchange. https://www.mathworks.com/maTLABCentral/fileexchange/123400-tsalshade-improved-lshade-algorithm-with-tangent-search

Abdolrasol MGM, Hussain SMS, Ustun TS, Sarker MR, Hannan MA, Mohamed R, Ali JA, Mekhilef S, Milad A (2021) Artificial Neural networks based optimization techniques: a review. Electronics 10(21):2689

Abed-Alguni BH, Alawad NA, Barhoush M et al (2021) Exploratory cuckoo search for solving single-objective optimization problems[J]. Soft Comput 25(15):10167–10180

Abualigah L, Shehab M, Alshinwan M et al (2020) Salp swarm algorithm: a comprehensive survey. Neural Comput Appl 32(15):11195–11215

Abualigah L, Diabat A, Mirjalili S et al (2021) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609

Article MathSciNet Google Scholar

Agushaka JO, Ezugwu AE, Abualigah L (2022) Dwarf mongoose optimization algorithm. Comput Methods Appl Mech Eng 391:114570

Ahmadianfar I, Heidari AA, Noshadian S et al (2022) INFO: an efficient optimization algorithm based on weighted mean of vectors. Expert Syst Appl 195:116516

Ahmia I, Aider M (2019) A novel metaheuristic optimization algorithm: the monarchy metaheuristic. Turk J Electr Eng Comput Sci 27(1):362–376

Akbari E, Rahimnejad A, Gadsden SA (2021) A greedy non-hierarchical grey wolf optimizer for real-world optimization. Electron Lett 57(13):499–501

Ali MH, El-Rifaie AM, Youssef AAF et al (2023) Techno-economic strategy for the load dispatch and power flow in power grids using peafowl optimization algorithm. Energies 16(2):846

Alsattar HA, Zaidan AA, Zaidan BB (2020) Novel meta-heuristic bald eagle search optimisation algorithm. Artif Intell Rev 53(3):2237–2264

Aragón VS, Esquivel SC, Coello CAC (2010) A modified version of a T-cell algorithm for constrained optimization problems. Internat J Numer Methods Engrg 84(3):351–378

Arora S, Singh S (2019) Butterfly optimization algorithm: a novel approach for global optimization. Soft Comput 23:715–734

Askari Q, Saeed M, Younas I (2020) Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst Appl 161:113702

Askarzadeh A (2016) A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct 169:1–12

Aydilek IB (2018) A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl Soft Comput 66:232–249

Ayyarao TSLV, Ramakrishna NSS, Elavarasan RM et al (2022) War strategy optimization algorithm: a new effective metaheuristic algorithm for global optimization. IEEE Access 10:25073–25105

Bennett S (2011) Learning behaviors and learning spaces. portal: libraries and the academy. 11(3):765–789

Bernardino HS, Barbosa HJC, Lemonge ACC, Fonseca LG (2008) A new hybrid AIS-GA for constrained optimization problems in mechanical engineering. In: 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence). pp 1455–1462

Bertsimas D, Tsitsiklis J (1993) Simulated annealing. Stat Sci 8(1):10–15

Braik MS (2021) Chameleon swarm algorithm: a bio-inspired optimizer for solving engineering design problems. Expert Syst Appl 174:114685

Braik M, Sheta A, Al-Hiary H (2021) A novel meta-heuristic search algorithm for solving optimization problems: capuchin search algorithm. Neural Comput Appl 33(7):2515–2547

Brajevic I (2015) Crossover-based artificial bee colony algorithm for constrained optimization problems. Neural Comput Appl 26(7):1587–1601

Brammya G, Praveena S, Ninu Preetha NS et al (2019) Deer hunting optimization algorithm: a new nature-inspired meta-heuristic paradigm. Comput J 2019:bxy133

Bruner JS (1971) “The process of education” revisited. Phi Delta Kappan 53(1):18–21

Google Scholar

Bruner JS (2009) The process of education. Harvard University Press

Carreon-Ortiz H, Valdez F (2022) A new mycorrhized tree optimization nature-inspired algorithm. Soft Comput 26:4797–4817

Chander A, Chatterjee A, Siarry P (2011) A new social and momentum component adaptive PSO algorithm for image segmentation. Expert Syst Appl 38(5):4998–5004

Chen J, Xu H, Wu J et al (2019) Deer crossing road detection with roadside LiDAR sensor. IEEE Access 7:65944–65954

Chen H, Heidari AA, Zhao X et al (2020a) Advanced orthogonal learning-driven multi-swarm sine cosine optimization: framework and case studies. Expert Syst Appl 144:113113

Chen H, Jiao S, Wang M, Heidari AA, Zhao X (2020b) Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts. J Clean Prod 244:118778

Chen P, Zhou S, Zhang Q et al (2022) A meta-inspired termite queen algorithm for global optimization and engineering design problems. Eng Appl Artif Intell 111:104805

Cheng S, Shi Y, Qin Q et al (2014) Population diversity maintenance in brain storm optimization algorithm[J]. J Artif Intell Soft Comput Res 4(2):83–97

Chickermane H, Gea HC (1996) Structural optimization using a new local approximation method. Internat J Numer Methods Engrg 39(5):829–846

Chopra N, Ansari MM (2022) Golden jackal optimization: a novel nature-inspired optimizer for engineering applications. Expert Syst Appl 198:116924

Chun S, Kim YT, Kim TH (2013) A diversity-enhanced constrained particle swarm optimizer for mixed integer-discrete-continuous engineering design problems. Adv Mech Eng 5:130750

Coello CAC, Cortés NC (2004) Hybridizing a genetic algorithm with an artificial immune system for global optimization. Eng Optim 36(5):607–634

Das S, Suganthan PN (2010) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4–31

Dehghani M, Trojovský P (2023) Osprey optimization algorithm: a new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front Mech Eng 8:1126450

Dehghani M, Hubálovský Š, Trojovský P (2021) Northern goshawk optimization: a new swarm-based algorithm for solving optimization problems. Ieee Access 9:162059–162080

Dehghani M, Hubálovský Š, Trojovský P (2022a) Tasmanian devil optimization: a new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 10:19599–19620

Dehghani M, Montazeri Z, Trojovská E et al (2023) Coati optimization algorithm: a new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl-Based Syst 259:110011

Dehghani M, Trojovská E, Trojovský P (2022b) Driving training-based optimization: a new human-based metaheuristic algorithm for solving optimization problems. 4. https://doi.org/10.21203/rs.3.rs-1506972/v1

Dehkordi A, Sadiq A, Mirjalili S et al (2021) Nonlinear-based chaotic harris hawks optimizer: algorithm and internet of vehicles application. Appl Soft Comput 109:107574

Dhanya D, Arivudainambi D (2019) Dolphin partner optimization based secure and qualified virtual machine for resource allocation with streamline security analysis. Peer- Peer Netw Appl 12(5):1194–1213

Dhiman G, Kaur A (2019) STOA: a bio-inspired based optimization algorithm for industrial engineering problems. Eng Appl Artif Intell 82:148–174

Dhiman G, Kumar V (2017) Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv Eng Softw 114:48–70

Dhiman G, Kumar V (2019) Seagull optimization algorithm: theory and its applications for large-scale industrial engineering problems. Knowl-Based Syst 165:169–196

Dhrubajyoti G, Ananda Ra DR, Shibendu Shekhar R (2021) A partition cumunification based genetic-firefly algorithm for single objective optimization. Sādhanā 46(3):1–31

MathSciNet Google Scholar

Dorian Sidea (2024) Improved salp swarm algorithm. MATLAB Central File Exchange. https://www.mathworks.com/matlabcentral/fileexchange/155984-improved-salp-swarm-algorithm

Dos Santos Coelho L (2010) Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst Appl 37(2):1676–1683

Duman S, Kahraman HT, Sonmez Y, Guvenc U, Kati M, Aras S (2022) A powerful meta-heuristic search algorithm for solving global optimization and real-world solar photovoltaic parameter estimation problems. Eng Appl Artif Intell 111:104763

Duman S, Kahraman HT, Kati M (2023) Economical operation of modern power grids incorporating uncertainties of renewable energy sources and load demand using the adaptive fitness-distance balance-based stochastic fractal search algorithm. Eng Appl Artif Intell 117:105501

Emami H (2022) Stock exchange trading optimization algorithm: a human-inspired method for global optimization. J Supercomput 78(2):2125–2174

Emary E, Zawbaa HM, Hassanien AE (2016) Binary grey wolf optimization approaches for feature selection. Neurocomputing 172:371–381

Ewees AA, Al-qaness MAA, Abualigah L (2022) HBO-LSTM: optimized long short term memory with heap-based optimizer for wind power forecasting. Energy Convers Manage 268:116022

Ezugwu AE (2022) Advanced discrete firefly algorithm with adaptive mutation-based neighborhood search for scheduling unrelated parallel machines with sequence-dependent setup times. Int J Intell Syst 37(8):4612–4653

Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S (2020) Equilibrium optimizer: a novel optimization algorithm. Knowl-Based Syst 191:105190

Farshad R, Hamid RS, Mohamed AE, Shaker H, Ali E, Mohammed A, Tamer A (2022) An enhanced grey wolf optimizer with a velocity-aided global search mechanism. Mathematics 10(3):351

Feng Z, Niu W, Liu S (2021) Cooperation search algorithm: a novel metaheuristic evolutionary intelligence algorithm for numerical optimization and engineering optimization problems. Appl Soft Comput 98:106734

Forrest S (1993) Genetic algorithms: principles of natural selection applied to computation. Science 261(5123):872–878

Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35

Ghafil HN, Jármai K (2020) Dynamic differential annealed optimization: new metaheuristic optimization algorithm for engineering applications. Appl Soft Comput 93:106392

Ghasemi M, Kadkhoda Mohammadi S, Zare M et al (2022) A new firefly algorithm with improved global exploration and convergence with application to engineering optimization. Decis Anal J 5:100125

Gurrola-Ramos J, Hernàndez-Aguirre A, Dalmau-Cedeño O (2020) COLSHADE for real-world single-objective constrained optimization problems. 2020 IEEE Congress on Evolutionary Computation (CEC) 2020, 1–8

Guvenc U, Duman S, Kahraman HT, Aras S, Katı M (2021) Fitness-distance balance based adaptive guided differential evolution algorithm for security-constrained optimal power flow problem incorporating renewable energy sources. Appl Soft Comput 108:107421

Hashim FA, Hussien AG (2022) Snake optimizer: a novel meta-heuristic optimization algorithm. Knowl-Based Syst 242:108320

Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S (2019) Henry gas solubility optimization: a novel physics-based algorithm. Futur Gener Comput Syst 101:646–667

Hayyolalam V, Kazem AAP (2020) Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Eng Appl Artif Intell 87:103249

Hellwig M, Beyer H (2018) A matrix adaptation evolution strategy for constrained real-parameter optimization. In: 2018 IEEE Congress on Evolutionary Computation (CEC), 2018: 1–8

Holland JH (1992) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, pp 211

Huang J, Gao L, Li X (2015) An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes. Appl Soft Comput 36:349–356

Hussain K, Salleh M, Cheng S et al (2019) On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput Appl 31:7665–7683

Ibrahim I, Hossain M, Duck B, Nadarajah M (2020) An improved wind driven optimization algorithm for parameters identification of a triple-diode photovoltaic cell model. Energy Convers Manag 213:112872

Jain M, Singh V, Rani A (2019) A novel nature-inspired algorithm for optimization: squirrel search algorithm. Swarm Evol Comput 44:148–175

James JQ, Li VOK (2015) A social spider algorithm for global optimization. Appl Soft Comput 30:614–627

Jian X, Zhu Y (2021) Parameters identification of photovoltaic models using modified Rao-1 optimization algorithm. Optik 231:166439

Kahraman H, Bakir H, Duman S, Katı M, Aras S, Guvenc U (2022) Dynamic FDB selection method and its application: modeling and optimizing of directional overcurrent relays coordination. Appl Intell 52:4873–4908

Kahraman HT, Katı M, Aras S, Taşci D (2023) Development of the Natural Survivor Method (NSM) for designing an updating mechanism in metaheuristic search algorithms. Eng Appl Artif Intell 122:106121

Kaidi W, Khishe M, Mohammadi M (2022) Dynamic levy flight chimp optimization. Knowl-Based Syst 235:107625

Kallioras NA, Lagaros ND, Avtzis DN (2018) Pity beetle algorithm – a new metaheuristic inspired by the behavior of bark beetles. Adv Eng Softw 121:147–166

Karami H, Anaraki MV, Farzin S, Mirjalili S (2021) Flow direction algorithm (fda): a novel optimization approach for solving optimization problems. Comput Ind Eng 156:107224

Kaur S, Awasthi LK, Sangal AL et al (2020) Tunicate Swarm Algorithm: a new bio-inspired based metaheuristic paradigm for global optimization. Eng Appl Artif Intell 90:103541

Kaveh A, Bakhshpoori T (2016) Water evaporation optimization: a novel physically inspired optimization algorithm. Comput Struct 167:69–85

Kaveh A, Dadras A (2017) A novel meta-heuristic optimization algorithm: thermal exchange optimization. Adv Eng Softw 110:69–84

Kaveh A, Talatahari S (2010) A novel heuristic optimization method: charged system search. Acta Mech 213(3):267–289

Kaveh A, Seddighian MR, Ghanadpour E (2020a) Black hole mechanics optimization: a novel meta-heuristic algorithm. Asian Journal of Civil Engineering 21(7):1129–1149

Kaveh A, Akbari H, Hosseini SM (2020b) Plasma generation optimization: a new physically-based metaheuristic algorithm for solving constrained optimization problems. Eng Comput 38(4):1554–1606

Khalilpourazari S, Doulabi HH, Çiftçioğlu AÖ et al (2021) Gradient-based grey wolf optimizer with Gaussian walk: application in modelling and prediction of the COVID-19 pandemic. Expert Syst Appl 177:114920

Khelil K, Nicolas Z, Naoufel C, Samir BB (2022) Exponential particle swarm optimization for global optimization. IEEE Access 10:78320–78344

Kılkış Ş, Kılkış B (2019) An urbanization algorithm for districts with minimized emissions based on urban planning and embodied energy towards net-zero exergy targets. Energy 179:392–406

Kumar DS, Zelinka I (2020) A self-adaptive spherical search algorithm for real-world constrained optimization problems. Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, 2020:13–14

Kumar A, Das S, Zelinka I (2020) A modified covariance matrix adaptation evolution strategy for real-world constrained optimization problems. Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, 2020:11–12

Kundu T, Garg H (2022) A hybrid TLNNABC algorithm for reliability optimization and engineering design problems. Eng Comp 38(6):5251–5295

Li C, Yang S, Nguyen TT (2011) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybernetics B Cybern 42(3):627–646

Li L, Chang YB, Tseng ML et al (2020a) Wind power prediction using a novel model on wavelet decomposition-support vector machines-improved atomic search algorithm. J Clean Prod 270:121817

Li S, Chen H, Wang M et al (2020c) Slime mould algorithm: a new method for stochastic optimization. Futur Gener Comput Syst 111:300–323

Li Z, Liang Y, Liao Q, Zhang H (2021) A review of multiproduct pipeline scheduling: from bibliometric analysis to research framework and future research directions. J Pipeline Sci Eng 1(4):395–406

Li Z, Tam V, Yeung LK (2020b) A study on parameter sensitivity analysis of the virus spread optimization. 2020 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 1535–1542

Liang J, Qiao K, Yu K, Ge S, Qu B, Xu R, Li K (2020) Parameters estimation of solar photovoltaic models via a self-adaptive ensemble-based differential evolution. Sol Energy 207:336–346

Lin X, Yu X, Li W (2022) A heuristic whale optimization algorithm with niching strategy for global multi-dimensional engineering optimization. Comput Ind Eng 171:108361

Liu H, Gu F, Zhang Q (2013) Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Trans Evol Comput 18(3):450–455

Long W, Wu T, Tang M, Xu M, Cai S (2020) Grey wolf optimizer algorithm based on lens imaging learning strategy. Acta Automat Sin 46(10):2148–2164

Mandal M, Mukhopadhyay A (2015) A novel PSO-based graph-theoretic approach for identifying most relevant and non-redundant gene markers from gene expression data. Int J Parallel Emergent Distrib Syst 30(3):175–192

Martello S, Pulleyblank WR, Toth, de Werra D (1984) Balanced optimization problems. Oper Res Lett 3(5):275–278

Masadeh R, Mahafzah BA, Sharieh A (2019) Sea lion optimization algorithm. Int J Adv Comput Sci Appl 10(5):388–395

McFarland D, Bösser T, Bosser T (1993) Intelligent behavior in animals and robots. Mit Press

Meng XB, Li HX, Gao XZ (2019) An adaptive reinforcement learning-based bat algorithm for structural design problems. Int J Bio-Inspired Comput 14(2):114–124

Meng OK, Pauline O, Kiong SC (2021) A carnivorous plant algorithm for solving global optimization problems. Appl Soft Comput J 98:106833

Mirjalili S (2015a) The ant lion optimizer. Adv Eng Softw 83:80–98

Mirjalili S (2015b) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249

Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133

Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61

Mirjalili S, Gandomi AH, Mirjalili SZ, Shahrzad S, Hossam F, Seyed M (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191

Mirjalili S (2019) Genetic algorithm. Evolutionary algorithms and neural networks. Springer, Cham, pp 43–55

Moghaddam FF, Moghaddam RF (2012) Cheriet M. Curved space optimization: a random search based on general relativity theory. arXiv preprint arXiv:1208.2214

Mohammadi-Balani A, Nayeri MD, Azar A, Taghizadeh-Yazdi M (2021) Golden eagle optimizer: a nature-inspired metaheuristic algorithm. Comput Ind Eng 152:107050

Mohammed H, Rashid T (2020) A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design. Neural Comput Appl 32:14701–14718

Montemurro M, Vincenti A, Vannucci P (2013) The automatic dynamic penalization method (ADP) for handling constraints with genetic algorithms. Comput Methods Appl Mech Engrg 256:70–87

Moscato P, Cotta C, Mendes A (2004) Memetic algorithms. New optimization techniques in engineering. Springer, Berlin, Heidelberg, pp 53–85

Naik MK, Panda R, Abraham A (2021) Adaptive opposition slime mould algorithm. Soft Comput 25:14297–14313

Naruei I, Keynia F (2022) Wild horse optimizer: a new meta-heuristic algorithm for solving engineering optimization problems. Eng with Comput 38(4):3025–3056

Naruei I, Keynia F, Sabbagh Molahosseini A (2022) Hunter–prey optimization: algorithm and applications. Soft Comput 26(3):1279–1314

Nematollahi AF, Rahiminejad A, Vahidi B (2017) A novel physical based meta-heuristic optimization method known as lightning attachment procedure optimization. Appl Soft Comput 59:596–621

Nematollahi AF, Rahiminejad A, Vahidi B (2020) A novel meta-heuristic optimization method based on golden ratio in nature. Soft Comput 24(2):1117–1151

Onay FK, Aydemı̇r SB (2022) Chaotic hunger games search optimization algorithm for global optimization and engineering problems. Math Comput Simul 192:514–536

Örnek BN, Aydemir SB, Düzenli T et al (2022) A novel version of slime mould algorithm for global optimization and real world engineering problems: enhanced slime mould algorithm. Math Comput Simul 198:253–288

Pan H, Wang L, Liu B (2006) Particle swarm optimization for function optimization in noisy environment. Appl Math Comput 181(2):908–919

Pan JS, Sun B, Chu SC et al (2023) A parallel compact gannet optimization algorithm for solving engineering optimization problems. Mathematics 11(2):439

Parsopoulos E, Vrahatis MN (2002) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1(2):235–306

Peraza-Vázquez H, Peña-Delgado AF, Echavarría-Castillo G et al (2021) A bio-inspired method for engineering design optimization inspired by dingoes hunting strategies. Math Probl Eng 2021:9107547

Pham TX, Siarry P, Oulhadj H (2018) Integrating fuzzy entropy clustering with an improved PSO for MRI brain image segmentation. Appl Soft Comput 65:230–242

Połap D, Woźniak M (2021) Red fox optimization algorithm. Expert Syst Appl 166:114107

Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization. Swarm Intell 1(1):33–57

Qiao W, Lu H, Zhou G et al (2020) A hybrid algorithm for carbon dioxide emissions forecasting based on improved lion swarm optimizer. J Clean Prod 244:118612

Rachdi E, El Merabet Y, Akhtar Z et al (2020) Directional neighborhood topologies based multi-scale quinary pattern for texture classification. IEEE Access 8:212233–212246

Ramshanker A, Chakraborty S (2022) Maiden application of skill optimization algorithm on cascaded multi-level neuro-fuzzy based power system stabilizers for damping oscillations. Int J Renew Energy Res (IJRER) 12(4):2152–2167

Rao R, Savsani V, Vakharia D (2011) Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315

Rao RV, Savsani VJ, Balic J (2012) Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Eng Optim 44(12):1447–1462

Rejowski J, Pinto JM (2003) Scheduling of a multiproduct pipeline system. Comput Chem Eng 27(8):1229–1246

Ruan Y, Li KY, Zheng R et al (2022) Cholinergic neurons in the pedunculopontine nucleus guide reversal learning by signaling the changing reward contingency. Cell Rep 38(9):110437

Salimi H (2015) Stochastic fractal search. a powerful metaheuristic algorithm. Knowl-Based Syst 75:1–18

Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47

Schoenewolf G (1990) Emotional contagion: behavioral induction in individuals and groups. Mod Psychoanal 15(1):49–61

Schwefel HP, Rudolph G (1995) Contemporary evolution strategies. European conference on artificial life. Springer, Berlin, Heidelberg, pp 891–907

Seo JH, Im CH, Heo CG (2006) Multimodal function optimization based on particle swarm optimization. IEEE Trans Magn 42(4):1095–1098

Sette S, Boullart L (2001) Genetic programming: principles and applications. Eng Appl Artif Intell 14(6):727–736

Seyyedabbasi A, Kiani F (2023) Sand Cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng Comput 39(4):2627–2651

Sharma A, Shoval S, Sharma A, Jitendra KP (2022) Path planning for multiple targets interception by the swarm of UAVs based on swarm intelligence algorithms: a review. IETE Tech Rev 39(3):675–697

Shi XH, Liang YC, Lee HP et al (2005) An improved GA and a novel PSO-GA-based hybrid algorithm. Inf Process Lett 93(5):255–261

Shitu S, Jagdish CB (2022) Mutation-driven grey wolf optimizer with modified search mechanism. Expert Syst Appl 194:116450

Shubham G, Kusum D (2020) A memory-based grey wolf optimizer for global optimization tasks. Appl Soft Comput 93:106367

Solis F, Wets J (1981) Minimization by random search techniques. Math Oper Res 6(1):19–30

Song S, Wang P, Heidari AA, Wang M, Zhao X, Chen H, He W, Xu S (2021) Dimension decided Harris Hawks optimization with Gaussian mutation: balance analysis and diversity patterns. Knowl-Based Syst 215:106425

Sun X, Croke B, Roberts S et al (2021) Comparing methods of randomizing Sobol′ sequences for improving uncertainty of metrics in variance-based global sensitivity estimation. Reliab Eng Syst Saf 210:107499

Tian G, Fathollahi-Fard AM, Ren Y, Li Z, Jiang X (2022) Multi-objective scheduling of priority-based rescue vehicles to extinguish forest fires using a multi-objective discrete gravitational search algorithm. Inf Sci 608:578–596

Trivedi A, Srinivasan D, Biswas N (2018) An improved unified differential evolution algorithm for constrained optimization problems. 2018 IEEE Congress on Evolutionary Computation (CEC), 2018:1–10

Trojovská E, Dehghani M (2022) A new human-based metahurestic optimization method based on mimicking cooking training. Sci Rep 12(1):14861

Trojovská E, Dehghani M, Trojovský P (2022) Zebra optimization algorithm: a new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 10:49445–49473

Trojovský P, Dehghani M (2022) Pelican optimization algorithm: a novel nature-inspired algorithm for engineering applications. Sensors 22(3):855

Trojovský P, Dehghani M (2023) Subtraction-average-based optimizer: a new swarm-inspired metaheuristic algorithm for solving optimization problems. Biomimetics 8(2):149

Tutueva AV, Nepomuceno EG, Karimov AI et al (2020) Adaptive chaotic maps and their application to pseudo-random numbers generation. Chaos Solitons Fractals 133:109615

Wang GG, Deb S, Cui Z (2019) Monarch butterfly optimization. Neural Comput Appl 31:1995–2014

Wang L, Cao Q, Zhang Z et al (2022a) Artificial rabbits optimization: a new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng Appl Artif Intell 114:105082

Wang Z, Pan J, Huang K et al (2022b) Hybrid gray wolf optimization and cuckoo search algorithm based on the taguchi theory. Advances in intelligent information hiding and multimedia signal processing. Springer, Singapore, pp 219–228

Wilson AJ, Pallavi DR, Ramachandran M (2022) A review on memetic algorithms and its developments. Electrical Automation Eng 1(1):7–12

Xu Z, Heidari AA, Kuang F et al (2023) Enhanced Gaussian bare-bones grasshopper optimization: mitigating the performance concerns for feature selection. Expert Syst Appl 212:118642

Xue J, Shen B (2020) A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng 8(1):22–34

Xue J, Shen B (2023) Dung beetle optimizer: a new meta-heuristic algorithm for global optimization. J Supercomput 79(7):7305–7336

Yaghini M, Khoshraftar MM, Fallahi M (2013) A hybrid algorithm for artificial neural network training. Eng Appl Artif Intell 26(1):293–301

Yang XS, Deb S (2014) Cuckoo search: recent advances and applications. Neural Comput Appl 24(1):169–174

Yapici H, Cetinkaya N (2019) A new meta-heuristic optimizer: pathfinder algorithm. Appl Soft Comput 78:545–568

Yazdani M, Jolai F (2016) Lion Optimization Algorithm (LOA): a nature-inspired metaheuristic algorithm. J Comput Des Eng 3(1):24–36

Yu K, Qu B, Yue C, Ge S, Chen X, Liang J (2019) A performance-guided JAYA algorithm for parameters identification of photovoltaic cell and module. Apply Energy 237:241–257

Yu H, Gao Y, Wang L et al (2020) A hybrid particle swarm optimization algorithm enhanced with nonlinear inertial weight and Gaussian mutation for job shop scheduling problems. Mathematics 8(8):1355

Zamani H, Nadimi-Shahraki MH, Gandomi AH (2019) CCSA: conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl Soft Comput 85:105583

Zervoudakis K, Tsafarakis S (2020) A mayfly optimization algorithm. Comput Ind Eng 145:106559

Zhang Y, Jin Z (2022) Comprehensive learning Jaya algorithm for engineering design optimization problems. J Intell Manuf 33(5):1229–1253

Zhang J, Xiao M, Gao L, Pan Q (2018) Queuing search algorithm: a novel metaheuristic algorithm for solving engineering optimization problems. Appl Math Model 63:464–490

Zhang M, Long D, Qin T et al (2020) A chaotic hybrid butterfly optimization algorithm with particle swarm optimization for high-dimensional optimization problems. Symmetry 12(11):1800

Zhang Q, Li H, Liu Y et al (2021a) A new quantum particle swarm optimization algorithm for controller placement problem in software-defined networking. Comput Electr Eng 95:107456

Zhang Y, Chi A, Mirjalili S (2021b) Enhanced Jaya algorithm: a simple but efficient optimization method for constrained engineering design problems. Knowl-Based Syst 233:107555

Zhang H, Liu T, Ye X, Heidari AA, Liang G, Chen H, Pan Z (2022) Differential evolution-assisted salp swarm algorithm with chaotic structure for real-world problems. Eng Comput:1–35

Zhao W, Wang L, Zhang Z (2019) Supply-demand-based optimization: a novel economics-inspired algorithm for global optimization. IEEE Access 7:73182–73206

Zhong C, Li G, Meng Z (2022) Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl-Based Syst 251:109215

Zitouni F, Harous S, Maamri R (2020) The solar system algorithm: a novel metaheuristic method for global optimization. IEEE Access 9:4542–4565

Zitouni F, Harous S, Belkeram A, Hammou LEB (2021) The Archerfish hunting optimizer: a novel metaheuristic algorithm for global optimization. Arab J Sci Eng. https://doi.org/10.1007/s13369-021-06208-z

Download references

## Acknowledgements

This study was jointly supported by the National Natural Science Foundation of China (Grant Number 62341210) and the Science and Technology Development Plan for Baise City (Grant Number 20233654).

## Author information

Authors and affiliations.

Public Health and Management Institute, Youjiang Medical University for Nationalities, Baise, 533000, China

Department of Pathology and Pathophysiology, Hunan Normal University School of Medicine, Hunan Normal University, Changsha, 410081, China

Xiaoning Peng

School of Civil Engineering, Chongqing Jiaotong University, Chongqing, 400074, China


## Contributions

Chiwen Qu: Writing - original draft, Visualization, Formal analysis, Validation. Xiaoning Peng: Supervision, Project administration. Qilan Zeng: Project administration, Writing - review & editing, Supervision.

## Corresponding author

Correspondence to Qilan Zeng.

## Ethics declarations

Competing interests.

The authors declare no competing interests.

## Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Tables 23, 24, 25, 26, 27, 28, 29, and 30.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


## About this article

Qu, C., Peng, X. & Zeng, Q. Learning search algorithm: framework and comprehensive performance for solving optimization problems. Artif Intell Rev 57, 139 (2024). https://doi.org/10.1007/s10462-024-10767-6


Accepted: 18 April 2024

Published: 09 May 2024

DOI: https://doi.org/10.1007/s10462-024-10767-6


- Learning search algorithm
- Swarm intelligence
- Optimization
- Feature selection


## Computer Science > Data Structures and Algorithms

Title: Fine-Grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems

Abstract: While effective in practice, iterative methods for solving large systems of linear equations can be significantly affected by problem-dependent condition number quantities. This makes characterizing their time complexity challenging, particularly when we wish to make comparisons between deterministic and stochastic methods, that may or may not rely on preconditioning and/or fast matrix multiplication. In this work, we consider a fine-grained notion of complexity for iterative linear solvers which we call the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$th largest and the smallest singular value of the matrix representing the system. Concretely, we prove the following main algorithmic result: Given an $n\times n$ matrix $A$ and a vector $b$, we can find $\tilde{x}$ such that $\|A\tilde{x}-b\|\leq\epsilon\|b\|$ in time $\tilde{O}(\kappa_\ell\cdot n^2\log 1/\epsilon)$ for any $\ell = O(n^{\frac1{\omega-1}})=O(n^{0.729})$, where $\omega \approx 2.372$ is the current fast matrix multiplication exponent. This guarantee is achieved by Sketch-and-Project with Nesterov's acceleration. Some of the implications of our result, and of the use of $\kappa_\ell$, include direct improvement over a fine-grained analysis of the Conjugate Gradient method, suggesting a stronger separation between deterministic and stochastic iterative solvers; and relating the complexity of iterative solvers to the ongoing algorithmic advances in fast matrix multiplication, since the bound on $\ell$ improves with $\omega$. Our main technical contributions are new sharp characterizations for the first and second moments of the random projection matrix that commonly arises in sketching algorithms, building on a combination of techniques from combinatorial sampling via determinantal point processes and Gaussian universality results from random matrix theory.
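The projection step at the heart of Sketch-and-Project can be illustrated with a toy solver. This is only a sketch of the general idea, using plain random row-block sampling; it omits the Nesterov acceleration and the preconditioning considerations that the paper's $\tilde{O}(\kappa_\ell\cdot n^2\log 1/\epsilon)$ guarantee depends on, and all dimensions and names below are illustrative.

```python
import numpy as np

def sketch_and_project(A, b, iters=5000, block=10, seed=0):
    """Toy Sketch-and-Project solver for A x = b.

    Each step samples a random block of rows S and projects the
    current iterate onto the affine subspace {x : S A x = S b}.
    Illustrative only: no preconditioning, no Nesterov acceleration.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        idx = rng.choice(A.shape[0], size=block, replace=False)
        As, bs = A[idx], b[idx]
        r = As @ x - bs  # residual restricted to the sketched rows
        # x <- x - As^T (As As^T)^+ r  (orthogonal projection step)
        x -= As.T @ np.linalg.lstsq(As @ As.T, r, rcond=None)[0]
    return x

# Usage on a small consistent system with a known solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
b = A @ rng.standard_normal(20)
x = sketch_and_project(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

On a well-conditioned Gaussian system like this one, the relative residual shrinks geometrically with the number of iterations; the point of the paper's analysis is to characterize that rate through the spectral tail condition number rather than the classical condition number.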


## Novel quantum algorithm proposed for high-quality solutions to combinatorial optimization problems

Combinatorial optimization problems (COPs) arise in many fields, such as logistics, supply chain management, machine learning, material design, and drug discovery, wherever an optimal solution to a complex problem must be found. These problems are usually very computationally intensive on classical computers, so solving COPs with quantum computers has attracted significant attention from both academia and industry.

Quantum computers take advantage of the quantum property of superposition: a qubit can exist in the 0 state, the 1 state, or any combination of the two, which allows these machines to explore large solution spaces quickly. However, when COPs involve constraints, conventional quantum algorithms like adiabatic quantum annealing struggle to obtain a near-optimal solution within the operation time of quantum computers.

Recent advances in quantum technology have led to devices such as quantum annealers and gate-type quantum devices that provide suitable platforms for solving COPs. Unfortunately, they are susceptible to noise, which limits their applicability to quantum algorithms with low computational costs.

To address this challenge, Assistant Professor Tatsuhiko Shirai and Professor Nozomu Togawa from the Department of Computer Science and Communications Engineering at Waseda University in Japan have recently developed a post-processing variationally scheduled quantum algorithm (pVSQA). Their study was published in the journal IEEE Transactions on Quantum Engineering.

"The two main methods for solving COPs with quantum devices are variational scheduling and post-processing. Our algorithm combines variational scheduling with a post-processing method that transforms infeasible solutions into feasible ones, allowing us to achieve near-optimal solutions for constrained COPs on both quantum annealers and gate-based quantum computers," explains Dr. Shirai.

The innovative pVSQA algorithm first uses a quantum device to generate a variational quantum state. Measuring this state yields a probability distribution over candidate solutions, which includes both feasible solutions (those satisfying the COP's constraints) and infeasible ones.

Next, the post-processing method transforms the infeasible solutions into feasible ones, leaving the probability distribution with only feasible solutions. A classical computer is then used to calculate an energy expectation value of the cost function using this new probability distribution. Repeating this calculation results in a near-optimal solution.
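The classical half of that pipeline can be sketched on a toy constrained problem. Everything here is illustrative, not the authors' implementation: the constraint ("pick exactly k items"), the greedy repair rule, and the random samples standing in for measurement outcomes of a variational quantum state.

```python
import random

def repair(bits, k):
    """Flip bits so the string has exactly k ones: a stand-in
    for pVSQA's infeasible-to-feasible post-processing map."""
    bits = bits[:]
    ones = [i for i, b in enumerate(bits) if b == 1]
    zeros = [i for i, b in enumerate(bits) if b == 0]
    while len(ones) > k:
        bits[ones.pop()] = 0
    while len(ones) < k:
        i = zeros.pop()
        bits[i] = 1
        ones.append(i)
    return bits

def expected_cost(samples, cost, k):
    """Repair every sampled bitstring, then average the cost of the
    resulting feasible solutions (the classical expectation step)."""
    repaired = [repair(s, k) for s in samples]
    total = sum(sum(c * b for c, b in zip(cost, s)) for s in repaired)
    return total / len(repaired)

# Toy COP: choose exactly 2 of 4 items, with these per-item costs.
random.seed(0)
cost = [3.0, 1.0, 4.0, 2.0]
# Stand-in for measurement outcomes of the variational quantum state:
samples = [[random.randint(0, 1) for _ in range(4)] for _ in range(1000)]
print(expected_cost(samples, cost, k=2))
```

In the actual algorithm this expectation value feeds back into the variational schedule, and the loop repeats until a near-optimal feasible solution is found.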

The researchers analyzed the performance of this algorithm using both a simulator and real quantum devices such as a quantum annealer and a gate-type quantum device. The experiments revealed that pVSQA achieves a near-optimal performance within a predetermined time on the simulator and outperforms conventional quantum algorithms without post-processing on real quantum devices.

Dr. Shirai said, "Drastic social transformations are urgently needed to address various social issues. Examples include the realization of a carbon-neutral society to solve climate change issues and the realization of sustainable development goals to address issues such as increased energy demand and food shortage.

"Efficiently solving combinatorial optimization problems is at the heart of achieving these transformations. Our new method will play a significant role in realizing these long-term social transformations."

In conclusion, this study marks a significant step forward for using quantum computers for solving COPs, holding promise for addressing complex real-world problems across various domains.

More information: Tatsuhiko Shirai et al, Post-Processing Variationally Scheduled Quantum Algorithm for Constrained Combinatorial Optimization Problems, IEEE Transactions on Quantum Engineering (2024). DOI: 10.1109/TQE.2024.3376721

Provided by Waseda University


## Teens come up with trigonometry proof for Pythagorean Theorem, a problem that stumped math world for centuries

By Bill Whitaker

May 5, 2024 / 7:00 PM EDT / CBS News

As the school year ends, many students will be only too happy to see math classes in their rearview mirrors. It may seem to some of us non-mathematicians that geometry and trigonometry were created by the Greeks as a form of torture, so imagine our amazement when we heard two high school seniors had proved a mathematical puzzle that was thought to be impossible for 2,000 years.

We met Calcea Johnson and Ne'Kiya Jackson at their all-girls Catholic high school in New Orleans. We expected to find two mathematical prodigies.

Instead, we found that at St. Mary's Academy, all students are told their possibilities are boundless.

Come Mardi Gras season, New Orleans is alive with colorful parades, replete with floats, and beads, and high school marching bands.

In a city where uniqueness is celebrated, St. Mary's stands out – with young African American women playing trombones and tubas, twirling batons and dancing - doing it all, which defines St. Mary's, students told us.

Junior Christina Blazio says the school instills in them they have the ability to accomplish anything.

Christina Blazio: That is kinda a standard here. So we aim very high - like, our aim is excellence for all students.

The private Catholic elementary and high school sits behind the Sisters of the Holy Family Convent in New Orleans East. The academy was started by an African American nun for young Black women just after the Civil War. The church still supports the school with the help of alumni.

In December 2022, seniors Ne'Kiya Jackson and Calcea Johnson were working on a school-wide math contest that came with a cash prize.

Ne'Kiya Jackson: I was motivated because there was a monetary incentive.

Calcea Johnson: 'Cause I was like, "$500 is a lot of money. So I-- I would like to at least try."

Both were staring down the thorny bonus question.

Bill Whitaker: So tell me, what was this bonus question?

Calcea Johnson: It was to create a new proof of the Pythagorean Theorem. And it kind of gave you a few guidelines on how would you start a proof.

The seniors were familiar with the Pythagorean Theorem, a fundamental principle of geometry. You may remember it from high school: a² + b² = c². In plain English, when you know the length of two sides of a right triangle, you can figure out the length of the third.
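That third-side computation is a one-liner in code; a quick illustration of the theorem itself, not of the students' proof.

```python
import math

# For a right triangle with legs a and b, the hypotenuse c
# satisfies a**2 + b**2 == c**2, so c = sqrt(a**2 + b**2).
a, b = 3.0, 4.0
c = math.sqrt(a**2 + b**2)
print(c)  # → 5.0
```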

Both had studied geometry and some trigonometry, and both told us math was not easy. What no one told them was there had been more than 300 documented proofs of the Pythagorean Theorem using algebra and geometry, but for 2,000 years a proof using trigonometry was thought to be impossible, … and that was the bonus question facing them.

Bill Whitaker: When you looked at the question did you think, "Boy, this is hard"?

Ne'Kiya Jackson: Yeah.

Bill Whitaker: What motivated you to say, "Well, I'm going to try this"?

Calcea Johnson: I think I was like, "I started something. I need to finish it."

Bill Whitaker: So you just kept on going.

Calcea Johnson: Yeah.

For two months that winter, they spent almost all their free time working on the proof.

CeCe Johnson: She was like, "Mom, this is a little bit too much."

CeCe and Cal Johnson are Calcea's parents.

CeCe Johnson: So then I started looking at what she really was doing. And it was pages and pages and pages of, like, over 20 or 30 pages for this one problem.

Cal Johnson: Yeah, the garbage can was full of papers, which she would, you know, work out the problems and-- if that didn't work she would ball it up, throw it in the trash.

Bill Whitaker: Did you look at the problem?

Neliska Jackson is Ne'Kiya's mother.

Neliska Jackson: Personally I did not. 'Cause most of the time I don't understand what she's doing (laughter).

Michelle Blouin Williams: What if we did this, what if I write this? Does this help? ax² plus ….

Their math teacher, Michelle Blouin Williams, initiated the math contest.

Bill Whitaker: And did you think anyone would solve it?

Michelle Blouin Williams: Well, I wasn't necessarily looking for a solve. So, no, I didn't—

Bill Whitaker: What were you looking for?

Michelle Blouin Williams: I was just looking for some ingenuity, you know—

Calcea and Ne'Kiya delivered on that! They tried to explain their groundbreaking work to 60 Minutes. Calcea's proof is appropriately titled the Waffle Cone.

Calcea Johnson: So to start the proof, we start with just a regular right triangle where the angle in the corner is 90°. And the two angles are alpha and beta.

Bill Whitaker: Uh-huh

Calcea Johnson: So then what we do next is we draw a second congruent, which means they're equal in size. But then we start creating similar but smaller right triangles going in a pattern like this. And then it continues for infinity. And eventually it creates this larger waffle cone shape.

Calcea Johnson: Am I going a little too—

Bill Whitaker: You've been beyond me since the beginning. (laughter)

Bill Whitaker: So how did you figure out the proof?

Ne'Kiya Jackson: Okay. So you have a right triangle, 90° angle, alpha and beta.

Bill Whitaker: Then what did you do?

Ne'Kiya Jackson: Okay, I have a right triangle inside of the circle. And I have a perpendicular bisector at OP to divide the triangle to make that small right triangle. And that's basically what I used for the proof. That's the proof.

Bill Whitaker: That's what I call amazing.

Ne'Kiya Jackson: Well, thank you.

There had been one other documented proof of the theorem using trigonometry by mathematician Jason Zimba in 2009 – one in 2,000 years. Now it seems Ne'Kiya and Calcea have joined perhaps the most exclusive club in mathematics.

Bill Whitaker: So you both independently came up with proof that only used trigonometry.

Ne'Kiya Jackson: Yes.

Bill Whitaker: So are you math geniuses?

Calcea Johnson: I think that's a stretch.

Bill Whitaker: If not genius, you're really smart at math.

Ne'Kiya Jackson: Not at all. (laugh)

To document Calcea and Ne'Kiya's work, math teachers at St. Mary's submitted their proofs to an American Mathematical Society conference in Atlanta in March 2023.

Ne'Kiya Jackson: Well, our teacher approached us and was like, "Hey, you might be able to actually present this," I was like, "Are you joking?" But she wasn't. So we went. I got up there. We presented and it went well, and it blew up.

Bill Whitaker: It blew up.

Calcea Johnson: Yeah.

Ne'Kiya Jackson: It blew up.

Bill Whitaker: Yeah. What was the blowup like?

Calcea Johnson: Insane, unexpected, crazy, honestly.

It took millennia to prove, but just a minute for word of their accomplishment to go around the world. They got a write-up in South Korea, a shout-out from former first lady Michelle Obama, a commendation from the governor, and keys to the city of New Orleans.

Bill Whitaker: Why do you think so many people found what you did to be so impressive?

Ne'Kiya Jackson: Probably because we're African American, one. And we're also women. So I think-- oh, and our age. Of course our ages probably played a big part.

Bill Whitaker: So you think people were surprised that young African American women, could do such a thing?

Calcea Johnson: Yeah, definitely.

Ne'Kiya Jackson: I'd like to actually be celebrated for what it is. Like, it's a great mathematical achievement.

Achievement, that's a word you hear often around St. Mary's Academy. Calcea and Ne'Kiya follow a long line of barrier-breaking graduates.

The late queen of Creole cooking, Leah Chase, was an alum. So was the first African American female New Orleans police chief, Michelle Woodfork…

And a judge on the Fifth Circuit Court of Appeals, Dana Douglas. Math teacher Michelle Blouin Williams told us Calcea and Ne'Kiya are typical St. Mary's students.

Bill Whitaker: They're not unicorns.

Michelle Blouin Williams: Oh, no no. If they are unicorns, then every single lady that has matriculated through this school is a beautiful, Black unicorn.

Pamela Rogers: You're good?

Pamela Rogers, St. Mary's president and interim principal, told us the students hear that message from the moment they walk in the door.

Pamela Rogers: We believe all students can succeed, all students can learn. It does not matter the environment that you live in.

Bill Whitaker: So when word went out that two of your students had solved this almost impossible math problem, were they universally applauded?

Pamela Rogers: In this community, they were greatly applauded. Across the country, there were many naysayers.

Bill Whitaker: What were they saying?

Pamela Rogers: They were saying, "Oh, they could not have done it. African Americans don't have the brains to do it." Of course, we sheltered our girls from that. But we absolutely did not expect it to come in the volume that it came.

Bill Whitaker: And after such a wonderful achievement.

Pamela Rogers: People-- have a vision of who can be successful. And-- to some people, it is not always an African American female. And to us, it's always an African American female.

Gloria Ladson-Billings: What we know is when teachers lay out some expectations that say, "You can do this," kids will work as hard as they can to do it.

Gloria Ladson-Billings, professor emeritus at the University of Wisconsin, has studied how best to teach African American students. She told us an encouraging teacher can change a life.

Bill Whitaker: And what's the difference, say, between having a teacher like that and a whole school dedicated to the excellence of these students?

Gloria Ladson-Billings: So a whole school is almost like being in Heaven.

Bill Whitaker: What do you mean by that?

Gloria Ladson-Billings: Many of our young people have their ceilings lowered, that somewhere around fourth or fifth grade, their thoughts are, "I'm not going to be anything special." What I think is probably happening at St. Mary's is young women come in as, perhaps, ninth graders and are told, "Here's what we expect to happen. And here's how we're going to help you get there."

At St. Mary's, half the students get scholarships, subsidized by fundraising to defray the $8,000 a year tuition. Here, there's no test to get in, but expectations are high and rules are strict: no cellphones, modest skirts, hair must be its natural color.

Students Rayah Siddiq, Summer Forde, Carissa Washington, Tatum Williams and Christina Blazio told us they appreciate the rules and rigor.

Rayah Siddiq: Especially the standards that they set for us. They're very high. And I don't think that's ever going to change.

Bill Whitaker: So is there a heart, a philosophy, an essence to St. Mary's?

Summer Forde: The sisterhood—

Carissa Washington: Sisterhood.

Tatum Williams: Sisterhood.

Bill Whitaker: The sisterhood?

Voices: Yes.

Bill Whitaker: And you don't mean the nuns. You mean-- (laughter)

Christina Blazio: I mean, yeah. The community—

Bill Whitaker: So when you're here, there's just no question that you're going to go on to college.

Rayah Siddiq: College is all they talk about. (laughter)

Pamela Rogers: … and Arizona State University (Cheering)

Principal Rogers announces to her 615 students the colleges where every senior has been accepted.

Bill Whitaker: So for 17 years, you've had a 100% graduation rate—

Pamela Rogers: Yes.

Bill Whitaker: --and a 100% college acceptance rate?

Pamela Rogers: That's correct.

Last year when Ne'Kiya and Calcea graduated, all their classmates went to college and got scholarships. Ne'Kiya got a full ride to the pharmacy school at Xavier University in New Orleans. Calcea, the class valedictorian, is studying environmental engineering at Louisiana State University.

Bill Whitaker: So wait a minute. Neither one of you is going to pursue a career in math?

Both: No. (laugh)

Calcea Johnson: I may take up a minor in math. But I don't want that to be my job job.

Ne'Kiya Jackson: Yeah. People might expect too much out of me if (laugh) I become a mathematician. (laugh)

But math is not completely in their rear-view mirrors. This spring they submitted their high school proofs for final peer review and publication … and are still working on further proofs of the Pythagorean Theorem. Since their first two …

Calcea Johnson: We found five. And then we found a general format that could potentially produce at least five additional proofs.

Bill Whitaker: And you're not math geniuses?

Bill Whitaker: I'm not buying it. (laughs)

Produced by Sara Kuzmarov. Associate producer, Mariah B. Campbell. Edited by Daniel J. Glucksman.

Bill Whitaker is an award-winning journalist and 60 Minutes correspondent who has covered major news stories, domestically and across the globe, for more than four decades with CBS News.
