Genetic Algorithms – Introduction
Tutorial on genetic algorithms
2020-11-20 00:50:08
Genetic Algorithm (GA) is a search-based optimization technique based on the principles of genetics and natural selection. It is frequently used to find optimal or near-optimal solutions to difficult problems that would otherwise take a lifetime to solve. It is widely applied to optimization problems, in research, and in machine learning.
Introduction to Optimization
Optimization is the process of making something better. In any process, we have a set of inputs and a set of outputs, as shown in the following figure.
Optimization refers to finding the values of the inputs in such a way that we get the "best" output values. The definition of "best" varies from problem to problem, but in mathematical terms it refers to maximizing or minimizing one or more objective functions by varying the input parameters.
The set of all possible solutions or values that the inputs can take constitutes the search space. In this search space lies a point or a set of points which gives the optimal solution. The aim of optimization is to find that point or set of points in the search space.
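As a minimal illustration of these terms, the snippet below maximizes a simple objective over a discrete search space. The objective function and the search space here are invented for the example:

```python
# Hypothetical example: maximize f(x) = -(x - 3)^2 + 9 over a discrete
# search space. Every candidate input is evaluated, and the one giving
# the "best" (largest) output is the optimal solution.

def objective(x):
    return -(x - 3) ** 2 + 9

search_space = range(-10, 11)            # all candidate input values
best = max(search_space, key=objective)  # the point giving the optimal output

print(best, objective(best))             # x = 3 gives the maximum value 9
```

Exhaustive evaluation like this only works when the search space is tiny; genetic algorithms target the cases where it is far too large to enumerate.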
What are genetic algorithms?
Nature has always been a great source of inspiration for all mankind. Genetic Algorithms (GAs) are search-based algorithms based on the concepts of natural selection and genetics. GAs are a subset of a much larger branch of computation known as Evolutionary Computation.
GAs were developed by John Holland and his students and colleagues at the University of Michigan, most notably David E. Goldberg, and have since been tried on various optimization problems with a high degree of success.
In GAs, we have a pool or population of possible solutions to the given problem. These solutions then undergo recombination and mutation (as in natural genetics), producing new children, and the process is repeated over several generations. Each individual (or candidate solution) is assigned a fitness value (based on its objective function value), and fitter individuals are given a higher chance of mating and yielding more "fit" individuals. This is in line with Darwin's theory of "survival of the fittest".
In this way, we keep "evolving" better individuals or solutions over generations, until we reach a stopping criterion.
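The generational loop described above can be sketched in a few lines of Python. The problem solved here (maximizing the number of 1-bits in a binary string), the operators, and all parameter values are illustrative assumptions made for this sketch, not part of the tutorial:

```python
import random

# Minimal GA sketch: each individual is a list of bits, and fitness is the
# count of 1s. Selection, crossover, and mutation parameters are made up.
GENES, POP_SIZE, GENS, MUT_RATE = 20, 30, 60, 0.02

def fitness(ind):
    return sum(ind)                    # objective function value: count of 1s

def select(pop):
    # fitter individuals get a higher chance of mating (tournament selection)
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, GENES)   # recombination: one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    return [g ^ 1 if random.random() < MUT_RATE else g for g in ind]

random.seed(0)                         # fixed seed so the run is repeatable
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENS):                  # evolve over several generations
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))                   # close to the maximum of GENES
```

Real GA libraries add refinements such as elitism and adaptive operator rates, but the select–recombine–mutate cycle is the core of every variant.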
Genetic algorithms are sufficiently randomized in nature, but they perform much better than random local search (in which we just try various random solutions, keeping track of the best so far), as they also exploit historical information.
Advantages of GAs
GAs have various advantages, which have made them immensely popular. These include:

Require no derivative information (which may not be available for many real-world problems).

Are faster and more efficient compared to traditional methods.

Have very good parallel capabilities.

Optimize both continuous and discrete functions, as well as multi-objective problems.

Provide a list of "good" solutions, not just a single solution.

Always provide an answer to the problem, which gets better over time.

Are useful when the search space is very large and a large number of parameters are involved.
Limitations of GAs
Like any technique, GAs suffer from a few limitations. These include:

GAs are not suited for all problems, especially problems which are simple and for which derivative information is available.

The fitness value is calculated repeatedly, which can be computationally expensive for some problems.

Being stochastic, there are no guarantees on the optimality or the quality of the solution.

If not implemented properly, a GA may not converge to the optimal solution.
GA – Motivation
Genetic algorithms have the ability to deliver a "good-enough" solution "fast-enough". This makes genetic algorithms attractive for use in solving optimization problems. The reasons why GAs are needed are as follows:
Solving difficult problems
In computer science, there is a large set of problems which are NP-Hard. What this essentially means is that even the most powerful computing systems take a very long time (years, even!) to solve such problems. In such a scenario, GAs prove to be an efficient tool for delivering usable near-optimal solutions in a short amount of time.
Gradientbased methods fail
Traditional calculus-based methods work by starting at a random point and moving in the direction of the gradient, until we reach the top of the hill. This technique is efficient and works very well for single-peaked objective functions like the cost function in linear regression. But most real-world situations present a very complex problem, called a landscape, made up of many peaks and many valleys. This causes such methods to fail, as they suffer from an inherent tendency to get stuck at the local optima, as shown in the following figure.
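The getting-stuck behaviour is easy to demonstrate with a simple hill climber. The two-peaked function, starting points, and step size below are assumptions chosen for the illustration:

```python
# Illustrative sketch: hill climbing on a two-peaked landscape. Starting near
# the lower peak, the climber converges to the local optimum, not the global one.

def f(x):
    # two peaks: a local optimum at x = -2 (height 4) and the global one at x = 3 (height 9)
    return max(4 - (x + 2) ** 2, 9 - (x - 3) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        # move to whichever neighbour improves f (a discrete gradient ascent)
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x                   # no neighbour is better: stuck on a peak
        x = best
    return x

print(round(hill_climb(-4.0)))  # stuck on the local peak at -2
print(round(hill_climb(5.0)))   # lucky start: reaches the global peak at 3
```

The outcome depends entirely on the starting point, which is exactly the weakness that the population-based, randomized search of a GA is designed to avoid.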
Getting a good solution quickly
Some difficult problems, like the Travelling Salesman Problem (TSP), have real-world applications such as pathfinding and VLSI design. Now imagine that you are using your GPS navigation system, and it takes a few minutes (or even a few hours) to compute the "optimal" path from source to destination. Delay in such real-world applications is not acceptable, and therefore a "good-enough" solution, which is delivered "fast", is what is required.
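To make the TSP example concrete, the sketch below encodes a candidate solution as an ordering of cities and improves it by repeated mutation. This is a mutation-only local search, not a full GA, and the city coordinates are made up for the demonstration:

```python
import math
import random

# Hypothetical 5-city TSP: a candidate solution is a visiting order, and its
# quality is the total round-trip length (shorter is fitter).
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def mutate(order):
    # swap two cities: a simple mutation operator for permutations
    i, j = random.sample(range(len(order)), 2)
    order = order[:]
    order[i], order[j] = order[j], order[i]
    return order

random.seed(1)
best = list(range(len(cities)))        # start from an arbitrary tour
for _ in range(2000):                  # keep a mutant only if it is shorter
    cand = mutate(best)
    if tour_length(cand) < tour_length(best):
        best = cand
print(best, round(tour_length(best), 2))
```

Even this crude search finds a short tour almost instantly for a handful of cities, which is the "good-enough, fast-enough" trade-off the section describes; a full GA scales the same idea to instances where exact methods are hopeless.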