gago

An extensible toolkit for conceiving and running genetic algorithms


Example

The following example attempts to minimize the Drop-Wave function which is known to have a minimum value of -1.

(plots of the Drop-Wave function)
package main

import (
    "fmt"
    m "math"
    "math/rand"

    "github.com/MaxHalford/gago"
)

// A Vector contains float64s.
type Vector []float64

// Evaluate a Vector with the Drop-Wave function which takes two variables as
// input and reaches a minimum of -1 at (0, 0).
func (X Vector) Evaluate() float64 {
    var (
        numerator   = 1 + m.Cos(12*m.Sqrt(m.Pow(X[0], 2)+m.Pow(X[1], 2)))
        denominator = 0.5*(m.Pow(X[0], 2)+m.Pow(X[1], 2)) + 2
    )
    return -numerator / denominator
}

// Mutate a Vector by resampling each element from a normal distribution with
// probability 0.8.
func (X Vector) Mutate(rng *rand.Rand) {
    gago.MutNormalFloat64(X, 0.8, rng)
}

// Crossover a Vector with another Vector by applying uniform crossover.
func (X Vector) Crossover(Y gago.Genome, rng *rand.Rand) (gago.Genome, gago.Genome) {
    var o1, o2 = gago.CrossUniformFloat64(X, Y.(Vector), rng) // Returns two float64 slices
    return Vector(o1), Vector(o2)
}

// Clone a Vector to produce a new one that points to a different slice.
func (X Vector) Clone() gago.Genome {
    var Y = make(Vector, len(X))
    copy(Y, X)
    return Y
}

// VectorFactory returns a random vector by generating 2 values uniformly
// distributed between -10 and 10.
func VectorFactory(rng *rand.Rand) gago.Genome {
    return Vector(gago.InitUnifFloat64(2, -10, 10, rng))
}

func main() {
    var ga = gago.Generational(VectorFactory)
    ga.Initialize()

    fmt.Printf("Best fitness at generation 0: %f\n", ga.Best.Fitness)
    for i := 1; i < 10; i++ {
        ga.Enhance()
        fmt.Printf("Best fitness at generation %d: %f\n", i, ga.Best.Fitness)
    }
}

>>> Best fitness at generation 0: -0.550982
>>> Best fitness at generation 1: -0.924220
>>> Best fitness at generation 2: -0.987282
>>> Best fitness at generation 3: -0.987282
>>> Best fitness at generation 4: -0.987282
>>> Best fitness at generation 5: -0.987282
>>> Best fitness at generation 6: -0.987282
>>> Best fitness at generation 7: -0.997961
>>> Best fitness at generation 8: -0.999954
>>> Best fitness at generation 9: -0.999995

More examples

Background

There is a lot of intellectual fog around the concept of genetic algorithms (GAs). It's important to appreciate that GAs are composed of many interchangeable nuts and bolts; there isn't a single definition of a genetic algorithm. gago is intended to be a toolkit with which one may run many kinds of genetic algorithms, with different evolution models and various genetic operators.

Terminology

  • Fitness function: The fitness function is simply the function associated to a given problem. It takes in an input and returns an output.
  • Individual: An individual contains a genome which represents a candidate solution. In the physical world, an individual's genome is composed of nucleic acids. In an imaginary world, it could be composed of floating point numbers or string sequences representing cities. A fitness can be associated to a genome thanks to the fitness function. For example, one could measure the height of a group of individuals in order to rank them. In this case the genome is the body of the individual, the fitness function is the act of measuring the height of the individual's body, and the fitness is the height of the individual as measured by the fitness function.
  • Population: Individuals are contained in a population wherein they can interact.
  • Crossover: A crossover acts on two or more individuals (called parents) and mixes their genomes in order to produce one or more new individuals (called offspring). Crossover is really what sets genetic algorithms apart from other evolutionary methods.
  • Selection: Selection is a process in which parents are selected to generate offspring, most often by applying a crossover method. Popular selection methods include elitism selection and tournament selection.
  • Mutation: Mutation applies random modifications to an individual's genome without interacting with other individuals.
  • Migration: Multi-population GAs run more than one population in parallel and exchange individuals between each other.
  • Speciation: In the physical world, individuals do not mate at random. Instead, they mate with similar individuals. For some problems, such as neural network topology optimization, crossover will often generate poor solutions. Speciation sidesteps this by grouping similar individuals into species and mating individuals within each species separately.
  • Evolution model: An evolution model describes the exact manner and order in which genetic operators are applied to a population. The most popular models are the steady state model and the generational model.

Methodology

In a nutshell, a GA solves an optimization problem by doing the following:

  1. Generate random solutions.
  2. Evaluate the solutions.
  3. Sort the solutions according to their evaluation score.
  4. Apply genetic operators following a model.
  5. Repeat from step 2 until a stopping criterion is satisfied.

This description is voluntarily vague as to how the genetic operators are applied. It's important to understand that there isn't a single way of applying genetic algorithms. For example some people believe that crossover is useless and use mutation for generating new individuals. Genetic operators are applied following a model, a fact that is often omitted in introductions to genetic algorithms. Popular stopping criteria include

  • a fixed number of generations,
  • a fixed duration,
  • an indicator that the population is stagnating.

Features

  • gago is extensible: you can control most of what's happening
  • Different evolution models are available
  • Popular operators are already implemented
  • Speciation is available
  • Multiple population migration is available

Usage

The two requirements for using gago are

  • Implement the Genome interface.
  • Instantiate a GA struct.

The Genome interface is used to define the logic that is specific to your problem; logic that gago doesn't know about. For example, this is where you will define an Evaluate() method for evaluating a particular problem. The GA struct contains context-agnostic information. For example, this is where you can choose the number of individuals in a population (which is a separate concern from your particular problem).

Implementing the Genome interface

Let's have a look at the Genome interface.

type Genome interface {
    Evaluate() float64
    Mutate(rng *rand.Rand)
    Crossover(genome Genome, rng *rand.Rand) (Genome, Genome)
    Clone() Genome
}

The Evaluate() method assigns a score to a given genome. The sweet thing is that you can do whatever you want in this method. Your struct that implements the interface doesn't necessarily have to be a slice (which is a common representation). The Evaluate() method is your problem to deal with; gago only needs its output to be able to function.

The Mutate(rng *rand.Rand) method is where you can mutate a solution by tinkering with its variables. The way in which you should mutate a solution essentially boils down to your particular problem. gago provides some common mutation methods that you can use so you don't have to reinvent the wheel; this is what is done in most of the provided examples.

The Crossover(genome Genome, rng *rand.Rand) (Genome, Genome) method produces two new individuals (called offspring) by applying some kind of mixture of the parents' attributes. The important thing to notice is that the type of the first argument differs from that of the struct calling the method. Indeed, the first argument is a Genome that has to be cast into your struct before a crossover operator can be applied. This is due to the fact that Go doesn't provide generics out of the box; it's easier to convince yourself by checking out the examples.

The Clone() method is there to produce independent copies of the struct you want to evolve. This is necessary for internal reasons and ensures that pointer fields are not pointing to the same memory addresses. Usually this is not too difficult to implement; you just have to make sure that the clones you produce are totally independent from the genome they have been produced from. This is also not too difficult to unit test.
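
As a sketch, a unit test for the Vector genome from the introductory example could look like the following (the test name is illustrative, and the "testing" and "math/rand" packages are assumed to be imported):

func TestVectorClone(t *testing.T) {
    var (
        rng    = rand.New(rand.NewSource(42))
        genome = VectorFactory(rng).(Vector)
        clone  = genome.Clone().(Vector)
    )
    // Modifying the clone should leave the original genome untouched.
    var before = genome[0]
    clone[0] = before + 1
    if genome[0] != before {
        t.Error("Clone() does not produce an independent copy")
    }
}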

Once you have implemented the Genome interface, you have provided gago with all the information it couldn't guess for you. Essentially you have total control over the definition of your problem; gago will handle the rest and find a good solution to it.

Using the Slice interface

Classically, GAs are used to optimize problems where the genome has a slice representation - e.g. a vector or a sequence of DNA code. Almost all the mutation and crossover algorithms gago makes available are based on the Slice interface, which has the following definition.

type Slice interface {
    At(i int) interface{}
    Set(i int, v interface{})
    Len() int
    Swap(i, j int)
    Slice(a, b int) Slice
    Split(k int) (Slice, Slice)
    Append(Slice) Slice
    Replace(Slice)
    Copy() Slice
}

Internally IntSlice, Float64Slice and StringSlice implement this interface so that you can use the available operators for most use cases. If, however, you wish to use the operators with slices of a different type, you will have to implement the Slice interface yourself. Although there are many methods to implement, they are all trivial (have a look at slice.go and the TSP example); a sketch is also given below.
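
As a sketch, here is what such an implementation could look like for a hypothetical ByteSlice type (the type itself is illustrative; the method set simply follows the interface definition above):

// ByteSlice is a hypothetical genome representation backed by a []byte.
type ByteSlice []byte

func (s ByteSlice) At(i int) interface{}     { return s[i] }
func (s ByteSlice) Set(i int, v interface{}) { s[i] = v.(byte) }
func (s ByteSlice) Len() int                 { return len(s) }
func (s ByteSlice) Swap(i, j int)            { s[i], s[j] = s[j], s[i] }

func (s ByteSlice) Slice(a, b int) gago.Slice            { return s[a:b] }
func (s ByteSlice) Split(k int) (gago.Slice, gago.Slice) { return s[:k], s[k:] }
func (s ByteSlice) Append(t gago.Slice) gago.Slice       { return append(s, t.(ByteSlice)...) }
func (s ByteSlice) Replace(t gago.Slice)                 { copy(s, t.(ByteSlice)) }

func (s ByteSlice) Copy() gago.Slice {
    var c = make(ByteSlice, len(s))
    copy(c, s)
    return c
}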

Instantiating a GA struct

Let's have a look at the GA struct.

type GA struct {
    // Fields that are provided by the user
    GenomeFactory GenomeFactory
    NPops         int
    PopSize       int
    Model         Model
    Migrator      Migrator
    MigFrequency  int // Frequency at which migrations occur
    Speciator     Speciator
    Logger        *log.Logger
    Callback      func(ga *GA)

    // Fields that are generated at runtime
    Populations Populations
    Best        Individual // Overall best individual (dummy initialization at the beginning)
    Age         time.Duration
    Generations int
    rng         *rand.Rand
}

You have to fill in the fields in the first block; the fields in the second block are generated when calling the GA's Initialize() method.

  • GenomeFactory is a function that returns a random genome of the type you defined in the previous step. gago will use this function to produce an initial population. Again, gago provides some functions for common random genome generation.
  • NPops determines the number of populations that will be used.
  • PopSize determines the number of individuals inside each population.
  • Model determines how to use the genetic operators you chose in order to produce better solutions; in other words, it's a recipe. A dedicated section on models is available below.
  • Migrator and MigFrequency should be provided if you want to exchange individuals between populations in the case of a multi-population GA. If not, the populations will be run independently. Again, this is an advanced concept in the genetic algorithm field that you don't have to deal with at first.
  • Speciator will split each population into distinct species at each generation. Each species is evolved separately from the others; once all the species have been evolved they are merged back into the population.
  • Logger is optional and provides basic population statistics; you can read more about it in the logging section.
  • Callback is optional and will execute any piece of code you wish every time ga.Enhance() is called. Callback is also called when ga.Initialize() is. Using a callback can be useful for many things:
    • Calculating specific population statistics that are not provided by the logger
    • Changing parameters of the GA after a certain number of generations
    • Monitoring for converging populations

Essentially, only GenomeFactory, NPops, PopSize and Model are required to initialize and run a GA.
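
As a sketch, one way of filling in these fields is to start from the Generational preset used in the introductory example and override the parts you care about (the values below are arbitrary):

var ga = gago.Generational(VectorFactory)

// Two populations of 40 individuals each.
ga.NPops = 2
ga.PopSize = 40

// Optional callback, executed after Initialize() and after each Enhance().
ga.Callback = func(ga *gago.GA) {
    fmt.Printf("Generation %d, best fitness %f\n", ga.Generations, ga.Best.Fitness)
}

ga.Initialize()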

Running a GA

Once you have implemented the Genome interface and instantiated a GA struct you are good to go. You can call the GA's Enhance method, which will apply a model once (see the models section). It's your choice whether you call the Enhance method multiple times with a loop or by imposing a time limit. The Enhance method returns an error which you should handle. If your population isn't evolving when you call Enhance, it's most likely because Enhance returned a non-nil error.

At any time you have access to the GA's Best field, which holds the best individual found so far. The Best field itself contains a Fitness field and a Genome field, respectively indicating the fitness of the best obtained solution and the solution itself.
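
As a sketch, a run loop with a time limit and error handling could look like the following (the 10 second budget is arbitrary, and the time, log and fmt packages are assumed to be imported):

var start = time.Now()
for time.Since(start) < 10*time.Second {
    if err := ga.Enhance(); err != nil {
        log.Fatal(err)
    }
}
fmt.Printf("Found %v with a fitness of %f after %d generations\n",
    ga.Best.Genome, ga.Best.Fitness, ga.Generations)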

Models

gago makes it easy to use different so-called models. Simply put, a model tells the story of how a GA enhances a population of individuals through a sequence of genetic operators, without considering the underlying operators themselves. In a nutshell, an evolution model attempts to mimic evolution in the real world. Choosing a good model is extremely important because it usually has the largest influence on the performance of a GA.

Generational model

The generational model is one of the most popular models, if not the most popular one. Simply put, it generates n offspring from a population of size n and replaces the population with the offspring. The offspring are generated by selecting 2 individuals from the population and applying a crossover method to them, until n offspring have been generated. The newly generated offspring are then optionally mutated before replacing the original population. Crossover generates two new individuals at a time, thus if the population size isn't an even number the extra individual from the last crossover (individual n+1) is discarded rather than included in the new population.

(flowchart of the generational model)

Steady state model

The steady state model differs from the generational model in that the entire population isn't replaced between generations. Instead of simply adding the children of the selected parents into the next generation, the 2 best individuals out of the two parents and two children are added back into the population, so that the population size remains constant. However, one may also replace the parents with the children regardless of their fitness. This method has the advantage of not having to evaluate the newly generated offspring. What's more, crossover often generates individuals who are sub-par but who have a lot of potential; giving individuals generated from crossover a chance can be beneficial in the long run.

(flowchart of the steady state model)

Select down to size model

The select down to size model uses two selection rounds. The first one is similar to the one used in the generational model: parents are selected to generate new individuals through crossover. However, the offspring are then merged with the original population and a second selection round occurs to determine which individuals will survive to the next generation. Formally, m offspring are generated from a population of size n; the n+m individuals are then "selected down to size" so that only n individuals remain.

(flowchart of the select down to size model)

Ring model

In the ring model, crossover is applied to neighbours in a one-directional ring topology. Each pair of neighbours generates 2 offspring, and the best of the 4 individuals (2 parents + 2 offspring) replaces the first neighbour.

(flowchart of the ring model)

Simulated annealing

Although simulated annealing isn't a genetic algorithm, it can nonetheless be implemented with gago. A mutator is the only necessary operator. Other than that, a starting temperature, a stopping temperature and a decrease rate have to be provided. Effectively, a separate simulated annealing run is performed for each individual in the population.

The temperature evolution is relative to one single generation. In order to mimic the original simulated annealing algorithm, one would set the number of individuals to 1 and run the algorithm for only 1 generation. However, nothing stops you from running many simulated annealings in parallel and repeating them over many generations.

Mutation only

It's possible to run a GA without crossover simply by mutating individuals. Essentially this boils down to doing hill climbing, because there is no interaction between individuals. Indeed, taking a step in hill climbing is equivalent to a mutation in a genetic algorithm. What's nice is that by using a population of size n you are essentially running n independent hill climbs.

Speciation

Speciation, sometimes called clustering in the literature, is the partitioning of individuals into smaller groups (species) of similar individuals. Programmatically, a partition is a list of lists, each containing individuals. Individuals inside each species are supposed to be similar to each other. The similarity depends on a metric; for example, it could be based on the fitness of the individuals.

The purpose of partitioning individuals is to apply genetic operators only to similar individuals. In biological terms this encourages "incest" and maintains isolated species. For example, in nature animals usually breed with local mates and don't breed across species.

Using speciation with genetic algorithms became popular when it was first applied to the optimization of neural network topologies. By mixing two neural networks during crossover, the resulting neural networks were often useless because the inherited weights were not optimized for the new topology. This meant that newly generated neural networks performed poorly and would likely disappear during selection. Speciation was thus introduced so that neural networks evolved within similar groups and newly generated networks wouldn't disappear immediately. Instead, similar neural networks would evolve among themselves until they were good enough to be mixed with the rest of the population.

With gago it's possible to use speciation on top of all the rest. To do so, the Speciator field of the GA struct has to be specified.

(flowchart of speciation)

Multiple populations and migration

Multi-population GAs run independent populations in parallel. They are not frequently used, but they are very easy to understand and to implement. In gago, the GA struct contains a Populations field which holds each population; the number of populations is specified by the GA's NPops field.

If Migrator and MigFrequency are not provided, the populations will be run independently, in parallel. However, if they are provided, then every MigFrequency generations individuals will be exchanged between the populations following the Migrator protocol.

Using multiple populations can be an easy way to gain diversity. Moreover, not using multiple populations on a multi-core architecture is a waste of resources.

With gago you can use multi-populations and speciation at the same time. The following flowchart shows what that would look like.

(flowchart of a multi-population GA with speciation)

Presets

Some preset GA instances are available to get started as fast as possible. They are available in the presets.go file. These instances also serve as example instantiations of the GA struct. To obtain optimal solutions you should fill in the fields manually!

Logging population statistics

It's possible to log statistics for each population at every generation. To do so, you simply have to provide the GA struct with a Logger from the Go standard library. This is quite convenient because it lets you decide where to write the log output, whether it be a file or the standard output.

ga.Logger = log.New(os.Stdout, "", log.Ldate|log.Ltime)
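
To write the log to a file instead of the standard output, a minimal sketch (assuming "ga.log" as the file name, with the os and log packages imported) could be:

var logFile, err = os.Create("ga.log")
if err != nil {
    panic(err)
}
ga.Logger = log.New(logFile, "", log.Ldate|log.Ltime)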

If a logger is provided, each row in the log output will include

  • the population ID,
  • the population minimum fitness,
  • the population maximum fitness,
  • the population average fitness,
  • the population's fitness standard deviation.

A note on parallelism

Genetic algorithms are famous for being embarrassingly parallel. Most of the operations used in a GA can be run independently of one another. For example, individuals can be mutated in parallel because mutation doesn't have any side effects.

One approach I considered was to run the individual operations in parallel: a parallel loop would apply all the necessary operations to a set of individuals. First of all, this isn't as simple as it seems, the prime issue being the race conditions that can occur when applying crossover. Moreover, the initialization overhead was relatively large, mainly because mutation and evaluation can be too fast for spawning a separate goroutine to be worthwhile.

The current approach is to run the populations in parallel. This works very nicely because the only non-parallel operation is migration; all the other operations are population-specific and no communication between populations is needed. Basically, if you have n cores, then running a GA with n populations takes about the same time as running it with 1.

FAQ

What if I don't want to use crossover?

Alas, you still have to implement the Genome interface. You can, however, provide a blank Crossover method just to satisfy the interface.

type Vector []float64

func (X Vector) Crossover(Y gago.Genome, rng *rand.Rand) (gago.Genome, gago.Genome) {
    return X, Y.(Vector)
}

Why isn't my Mutate method modifying my Genome?

The Mutate method has to modify the values of the Genome in-place. The following code works because Vector is a slice; slices in Go are references to underlying data, hence modifying a slice's elements modifies the underlying data in-place.

type Vector []float64

func (X Vector) Mutate(rng *rand.Rand) {
    gago.MutNormalFloat64(X, 0.5, rng)
}

On the contrary, mutating other kinds of types requires a pointer receiver so that the underlying value can be modified. Notice the *Name receiver in the following example.

type Name string

func (n *Name) Mutate(rng *rand.Rand) {
    *n = randomName()
}

When are genetic algorithms good to apply?

Genetic algorithms (GAs) are often used for NP-hard problems. They usually perform better than hill climbing and simulated annealing because they explore the search space more intelligently. However, GAs can also be used for classical problems where the search space makes it difficult for, say, gradient algorithms to be efficient (like the introductory example).

Some problems simply cannot be written down as closed-form expressions. For example, tuning the number of layers and of neurons per layer in a neural network is an open problem that doesn't yet have a reliable solution. Neural network architectures used in production are usually designed by human experts. The field of neuroevolution aims to train neural networks with evolutionary algorithms. As such, genetic algorithms are a good candidate for training neural networks, usually by optimizing the network's topology.

How can I contribute?

Feel free to implement your own operators or to make suggestions! Check out the CONTRIBUTING file for some guidelines.

Dependencies

You can see the list of dependencies here and a graph view here.

License

The MIT License (MIT). Please see the license file for more information.