# Backstory

Recently, I was competing with my friend to see who could create the AI that could walk the best. The video is here:

However, while working on my AI, my basic genetic algorithm was failing to produce desirable results. So, I decided to turn to an amazing neuroevolution algorithm called NEAT:

The

**N** euro

**E** volution

of

**A** ugmenting

**T** opologies

# Introduction

NEAT, in a nutshell, simulates real-life evolution by evolving neural networks with unique structures. These structures are known as *topologies*. A neural network's topology is the structure of its layers and how its neurons connect to each other.

Conventional genetic algorithms only support evolving the weights/biases of neural nets - that is, changing the *strength* of connections between neurons. NEAT, on the other hand, can add an entirely *new* connection or node.

So, after reading/skimming this paper and looking over Code Bullet's amazing implementation of NEAT in JavaScript, I set out to build NEAT myself.

NOTE: This article is not a tutorial - it documents my attempt to build NEAT. The end result functions, but is incapable of solving the conventional benchmark of XOR. I would not recommend using my version of NEAT in any of your own projects.

# Initial Setup

First, I defined a few helper functions:

```
// approximates a normally distributed random number in [0, 1], centered on 0.5
function gaussianRand() {
  var rand = 0;
  for (var i = 0; i < 6; i += 1) {
    rand += Math.random();
  }
  return rand / 6;
}
// rescales the range of gaussianRand to [-1, 1]
function std0() {
  return (gaussianRand() - 0.5) * 2;
}
// gets a random number from a uniform distribution across [min, max]
function random(min, max) {
  return min + Math.random() * (max - min);
}
// the sigmoid function (squishes a number into the range 0 to 1)
const sigmoid = (val) => 1 / (1 + Math.exp(-val));
```

Then, I had to build a Neural Network "class" that could feed forward inputs and deal with the flexible NN architecture that can emerge when using NEAT. Other than inputs and outputs, there are no defined "layers" in NEAT neural nets. There are only hidden neurons that can connect to each other in any variety of ways.

```
function NN({
  nodeGenes,
  connectionGenes
})
```

(Note that I use the ice factory pattern to create my "classes" - I don't use conventional JS class syntax.)

Each neural network is created with an array of `nodeGenes` and an array of `connectionGenes`.

Each nodeGene (node genes represent neurons) is of the following structure:

```
{
  id: number,
  type: "input" | "hidden" | "output"
}
```

Each connectionGene (connection genes represent weights) is of the following structure:

```
{
  in: number, // the id of the node that feeds into the connection
  out: number, // the id of the node that the connection feeds to
  enabled: boolean,
  innov: number, // will be explained later
  weight: number // the weight of the connection
}
```

Anyway, back to the neural nets. Upon creation, each neural net creates its own "storage", where the value of each node is stored.

```
let storage = [...nodeGenes.map(gene => ({ ...gene, value: 0 }))].sort((a, b) => {
  if (a.type === b.type) {
    return a.id - b.id;
  } else if (a.type === "input") {
    return -1;
  } else if (a.type === "output") {
    return 1;
  } else if (b.type === "input") {
    return 1;
  } else if (b.type === "output") {
    return -1;
  }
});
```

The storage is sorted by node type, then by node id: input nodes are at the beginning, hidden nodes are in the middle, and output nodes are at the end. Additionally, each node in storage is given a `value` attribute to represent its current state.
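To make the ordering concrete, here is a small sketch using a hypothetical set of out-of-order node genes (these ids and the `nodeGenes` array are illustrative, not from an actual net) run through the same comparator:

```
// hypothetical node genes, deliberately listed out of order
const nodeGenes = [
  { id: 4, type: "output" },
  { id: 2, type: "hidden" },
  { id: 0, type: "input" },
  { id: 3, type: "hidden" },
  { id: 1, type: "input" }
];

// the same comparator used to build storage
const storage = [...nodeGenes.map(gene => ({ ...gene, value: 0 }))].sort((a, b) => {
  if (a.type === b.type) return a.id - b.id;
  if (a.type === "input") return -1;
  if (a.type === "output") return 1;
  if (b.type === "input") return 1;
  if (b.type === "output") return -1;
});

console.log(storage.map(n => `${n.type}:${n.id}`).join(" "));
// input:0 input:1 hidden:2 hidden:3 output:4
```

Inputs come first, hidden nodes in the middle, outputs last - with ids ascending within each group.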

Using this, we can define the feedForward function:

```
feedForward(input) {
  // assign all input nodes the values provided
  storage.filter(({ type }) => type === "input").forEach((node, i) => node.value = input[i]);
  // evaluate each non-input node of the network
  storage.filter(({ type }) => type !== "input").forEach((node) => {
    // figure out which connections feed into this node
    const ins = connectionGenes.filter(({ enabled }) => enabled).filter(({ out }) => out === node.id);
    ins.forEach(i => {
      // add each connection's weight times its input neuron's value to this neuron's value
      node.value += storage.find(({ id }) => id === i.in).value * i.weight;
    })
    // sigmoid the value of the neuron (or use any other activation function)
    node.value = sigmoid(node.value);
  })
  // compile the values of the outputs into an array, sorted by node id
  const outputs = storage.filter(({ type }) => type === "output").sort((a, b) => a.id - b.id).map(node => node.value);
  // clear the states of all the nodes
  storage.forEach(node => {
    node.value = 0;
  });
  // return the output array, having completed the feedForward process
  return outputs;
}
```

So, in total, the structure of our code currently looks like this:

```
function NN({
  nodeGenes,
  connectionGenes
}) {
  let storage = ...;
  return {
    feedForward(inputs) {
      ...
    }
  }
}
```

# Mutations

Now, let's make it so these neural networks can mutate. There are three important kinds of mutations in NEAT:

- Mutating Weights: Mutating the weights of the already existing connections of the neural network
- Adding A Connection: Adds a connection between two unconnected neurons in the network.
- Adding A Node: Splits an existing connection into two new ones, while adding a node as an intermediary.

The simplest kind of mutation is mutating the weights, so let's start there:

```
mutateWeights() {
  // for each connection
  connectionGenes.forEach(gene => {
    const seed = Math.random();
    // 10% chance to completely mutate the weight
    if (seed < 0.1) {
      gene.weight = random(-1, 1);
    } else {
      // otherwise just modify the weight a little
      gene.weight += std0() / 10;
    }
  })
}
```

The next mutation is adding a new connection. The method for this is simple: iterate over all possible pairs of nodes (in a random order), find the first pair where there is no connection, and add one. However, the code is a bit verbose:

```
addConnection() {
  let connectionFound = false;
  // for each node
  [...nodeGenes].sort(() => Math.random() - 0.5).forEach(node1 => {
    // check all nodes
    [...nodeGenes].sort(() => Math.random() - 0.5).forEach(node2 => {
      // if the first node can connect with the second node
      if ((node1.type === "input" && node2.type === "hidden") || (node1.type === "input" && node2.type === "output") || (node1.type === "hidden" && node2.type === "hidden") || (node1.type === "hidden" && node2.type === "output")) {
        // if there hasn't already been a connection made with this function
        if (!connectionFound && (node1 !== node2)) {
          // check if a connection exists between the two nodes
          const isConnection = connectionGenes.some(gene => {
            return (gene.in === node1.id && gene.out === node2.id) || (gene.in === node2.id && gene.out === node1.id);
          });
          // if one doesn't, create one
          if (!isConnection) {
            let c;
            // make sure the connection places the hidden node with the lower id as its input and the one with the higher id as its output
            if (node1.id > node2.id && node1.type === "hidden" && node2.type === "hidden") {
              c = {
                innov: ++innovationNumber, // will be explained later
                in: node2.id,
                out: node1.id,
                enabled: true,
                weight: random(-1, 1) // random weight
              };
            } else {
              c = {
                innov: ++innovationNumber, // will be explained later
                in: node1.id,
                out: node2.id,
                enabled: true,
                weight: random(-1, 1) // random weight
              };
            }
            // add the connection to the network
            connectionGenes.push(c);
            // stop looking for connections
            connectionFound = true;
          }
        }
      }
    })
  })
}
```

Finally, the last mutation is adding a node by splitting an already existing connection. So, if node 3 connected to node 6:

3 -> 6

An add node mutation would make it like this:

3 -> 7 -> 6

The code for such a mutation is surprisingly simple:

```
addNode() {
  // choose a random connection
  const chosen = connectionGenes[Math.floor(Math.random() * connectionGenes.length)]
  if (chosen) {
    // disable the old connection
    chosen.enabled = false;
    // create a new node with a unique id
    const newNode = {
      type: "hidden",
      id: Math.max(...nodeGenes.map(node => node.id)) + 1
    }
    nodeGenes.push(newNode);
    // create a connection from the old input node to the new node
    connectionGenes.push({
      innov: ++innovationNumber,
      in: chosen.in,
      out: newNode.id,
      enabled: true,
      weight: random(-1, 1)
    });
    // create a connection from the new node to the old output node
    connectionGenes.push({
      innov: ++innovationNumber,
      in: newNode.id,
      out: chosen.out,
      enabled: true,
      weight: random(-1, 1)
    });
    // rebuild storage to include the new node
    storage = [...nodeGenes.map(gene => ({
      ...gene,
      value: 0
    }))].sort((a, b) => {
      if (a.type === b.type) {
        return a.id - b.id;
      } else if (a.type === "input") {
        return -1;
      } else if (a.type === "output") {
        return 1;
      } else if (b.type === "input") {
        return 1;
      } else if (b.type === "output") {
        return -1;
      }
    });
  }
}
```

# Crossover

A central part of genetic algorithms is the *crossover* of two agents - in NEAT, the challenge of successfully crossing over two topologically different neural networks appears to be a daunting one. However, the initial paper on NEAT introduced a revolutionary (yet simple) concept that solves this problem: *innovation numbers*.

## Innovation Numbers

Whenever a new connection is added to a neural network in NEAT, it is given an innovation number. The innovation counter starts at 0 and advances by one every time a number is handed out, so a connection with an innovation number of 8 is the 8th connection ever created in that run of NEAT.

The critical thing is that when connections are passed on in crossover/mutation, *they preserve their innovation numbers*. So, via the innovation number, you can know whether a connection in one net is related to one in another. If two connections have the same innovation number, then those connections share a common ancestor.
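A minimal sketch of this idea (the variable and helper names here are my own illustration, not the article's actual code):

```
// a global counter, as in the article's ++innovationNumber
let innovationNumber = 0;

// every brand-new connection gets the next innovation number
function newConnection(inId, outId) {
  return { in: inId, out: outId, enabled: true, weight: 0.5, innov: ++innovationNumber };
}

// a parent network gains a connection with a fresh innovation number...
const original = newConnection(0, 2);

// ...and copies passed on through crossover/mutation keep that number,
// even as their weights diverge over generations
const childA = { ...original, weight: 0.9 };
const childB = { ...original, weight: -0.3 };

console.log(childA.innov === childB.innov); // true - same common ancestor
```

Two connections in different nets with the same innovation number can therefore be lined up against each other during crossover, no matter how different the surrounding topologies have become.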

## Crossover Mechanics

We can use innovation numbers to figure out how to crossover connections. Let's say we're crossing over Net A and Net B, to form Net C. Net B has higher fitness than Net A, so Net C inherits Net B's topology (hidden nodes, connections etc.). However, for connections where Net A and Net B have the same innovation numbers, there is a 50% chance for C to get the connection from Net A, and a 50% chance for C to get the connection from Net B. This is the actual code for the crossover algorithm:

```
crossover(otherNet) {
  // the new net inherits the topology of the net calling .crossover (always the fitter one)
  const newNodeGenes = nodeGenes.map(gene => ({
    ...gene
  }));
  const newConnectionGenes = connectionGenes.map(gene => {
    // check if there is a matching connection (same innovation number) from otherNet
    const otherGene = otherNet.connectionGenes.find(g => g.innov === gene.innov)
    if (otherGene) { // if there is
      let toEnable = true;
      if (!gene.enabled || !otherGene.enabled) {
        // if one of the parent connections is disabled, there is a 75% chance of the new one being disabled too
        if (Math.random() < 0.75) {
          toEnable = false;
        }
      }
      // randomly select the connection from this net or otherNet
      if (Math.random() < 0.5) {
        return {
          ...otherGene,
          enabled: toEnable
        };
      } else {
        return {
          ...gene,
          enabled: toEnable
        };
      }
    }
    // if there is no matching connection, just use this net's connection
    return {
      ...gene
    };
  })
  // create a new network with the newNodeGenes and newConnectionGenes
  return NN({
    nodeGenes: newNodeGenes,
    connectionGenes: newConnectionGenes
  });
}
```

So, at the end of all of this, our neural net function looks like the following:

```
function NN({
  nodeGenes,
  connectionGenes
}) {
  let storage = ...;
  return {
    feedForward(input) {
      ...
    },
    mutateWeights() {
      ...
    },
    addConnection() {
      ...
    },
    addNode() {
      ...
    },
    crossover(otherNet) {
      ...
    }
  }
}
```

# Evolution

In order for our networks to actually learn to do anything, they have to evolve. For this, we'll create a helper function called `Population` to manage all the neural nets:

```
function Population({
  inputs, // how many inputs the neural net has
  outputs, // how many outputs the neural net has
  popSize // the number of neural networks in the population
}) {
}
```

Before we actually get started with making the genetic algorithm, we need to make some of the private data of the neural net publicly available for the GA, via the following getters and setters:

```
get nodeGenes() {
  return nodeGenes;
},
set nodeGenes(val) {
  nodeGenes = val;
},
get connectionGenes() {
  return connectionGenes;
},
set connectionGenes(val) {
  connectionGenes = val;
},
get storage() {
  return storage;
}
```

Additionally, each neural network will need to keep track of its fitness. We'll accomplish this by creating a local variable in the NN function called `fitness` and adding the corresponding getters and setters:

```
function NN(...) {
  ...
  let fitness = 0;
  return {
    ...
    get fitness() { return fitness; },
    set fitness(val) { fitness = val; }
    ...
  }
}
```

Now, we can start on the actual GA. First, we must cover the concept of *species* - and how innovation can be protected through speciation.

## Species

Each "species" in NEAT is a group of "similar" neural networks. Networks can differ in two ways: in their topology, and in their weight values. Before we get into how neural networks are sorted into species, let's start by declaring a few initial variables in our `Population` function:

```
let population = [];
let species = [];
```

The `population` is a 1d array of all creatures, whereas `species` is a 2d array - each array in `species` represents all the neural networks in one species.

In order to separate the neural networks into species, however, we first need to have some neural networks.

The following bit of code creates a number of empty neural nets equal to the population size:

```
const nodes = []; // create a list of all the neurons
for (let i = 0; i < inputs; i++) {
  // add input neurons
  nodes.push({
    id: i,
    type: "input"
  })
}
for (let i = 0; i < outputs; i++) {
  // add output neurons
  nodes.push({
    id: i + inputs,
    type: "output"
  })
}
for (let i = 0; i < popSize; i++) {
  // create an empty neural net from the nodes
  const nn = NN({
    nodeGenes: [...nodes.map(node => ({
      ...node
    }))],
    connectionGenes: []
  });
  nn.mutate(); // mutate it
  population.push(nn) // add it to the population
}
```

Second, we need some sort of function that can take in two different neural nets and numerically represent their topological and synaptic differences.

For this function, we'll measure two things - the average weight difference between two networks:

```
weightDiff(otherNet) {
  let diff = 0; // keep track of the weight differences
  let matching = 0; // keep track of how many matching connections there are
  // for each connection pair
  connectionGenes.forEach(gene => {
    otherNet.connectionGenes.forEach(gene2 => {
      // if a connection matches
      if (gene.innov === gene2.innov) {
        matching++;
        // add the weight difference of the connections to diff
        diff += Math.abs(gene.weight - gene2.weight);
      }
    })
  });
  // if no connections match, the networks are as different as can be - so an average difference of 100 is returned
  if (matching === 0) {
    return 100;
  }
  return diff / matching;
}
```

The other thing we will measure is the number of excess and disjoint connections between the two networks - in other words, how many connections in each network don't have a matching connection in the other:

```
disjointAndExcess(otherNet) {
  // get the number of matching genes
  const matching = connectionGenes.filter(({
    innov
  }) => otherNet.connectionGenes.some(({
    innov: i
  }) => innov === i)).length;
  // use that to compute the number of non-matching genes
  return (connectionGenes.length + otherNet.connectionGenes.length - 2 * (matching))
}
```

Then, these two values are combined in the following manner:

```
(excessCoeff * nn.disjointAndExcess(rep)) / Math.max(rep.connectionGenes.length + nn.connectionGenes.length, 1) + weightDiffCoeff * nn.weightDiff(rep)
```

Where `nn` and `rep` are the neural networks being compared, and `excessCoeff` and `weightDiffCoeff` are hyperparameters set at the beginning.

Now that we have a function that can quantify the difference between neural networks, we can determine whether a neural network can become part of a species.

To start, we select a random member of the species in question - a "representative". Then, we use our function to quantify the difference between the neural network and the representative. If the difference is less than a certain threshold, the neural net is incorporated into the representative's species. If not, the next species is checked using this process. If the neural network doesn't fit into any species, a new species is created, with that neural net as its initial member.

Using all of this, we can create a `speciatePopulation` function that separates a given population into species. Before we can do that, let's add the `excessCoeff` and `weightDiffCoeff` hyperparameters, along with `diffThresh` (the threshold for including a neural network in a species), to the `Population` function:

```
function Population({
  inputs,
  outputs,
  popSize,
  excessCoeff = 1,
  weightDiffCoeff = 2,
  diffThresh = 1.5
})
```

Now, we can write our `speciatePopulation` function - inside of the `Population` function, so we can access the `population` and `species` variables through closure.

```
function Population(...) {
  ...
  function speciatePopulation() {
    // for each neural net
    population.forEach(nn => {
      let speciesFound = false;
      // for each species
      species.forEach(s => {
        // if there are neural nets in the species
        if (s.length !== 0) {
          // and the neural net has not already been placed in a species
          if (!speciesFound) {
            // choose a random member of the species to be the "representative"
            const rep = s[Math.floor(Math.random() * s.length)];
            // calculate the difference between the two neural nets
            const diff = (excessCoeff * nn.disjointAndExcess(rep)) / Math.max(rep.connectionGenes.length + nn.connectionGenes.length, 1) + weightDiffCoeff * nn.weightDiff(rep);
            // if the difference is less than the threshold
            if (diff < diffThresh) {
              // add the neural net to the species
              s.push(nn);
              // a species has been found
              speciesFound = true;
            }
          }
        }
      })
      // if the net didn't fit into any species
      if (!speciesFound) {
        // create a new species with the net as its sole member
        const newSpecies = [nn];
        // add the new species to the list of all species
        species.push(newSpecies);
      }
    })
  }
}
```

But... what is the point of speciating the population in the first place? Well, speciation protects innovation - here's how:

- Neural networks compete and reproduce *within* their own species.
- If a new node or connection is added to a neural network, it may initially make the network perform worse - meaning it will be selected out of the population, and the addition is lost.
- But if a new innovation (the addition of a connection or node) leads a neural network to be separated into its own species, then the neural network has a chance to survive and optimize the new addition, potentially improving the population as a whole.

This encourages NEAT as a whole to pursue many different solutions and find its way out of local optima.

## Generations

So, we know how neural networks can mutate and reproduce, and how speciation works and why it helps. Now, let's combine all this knowledge and write a function that performs a *generation*, or a step of learning. This function will create a new population of neural networks from the fittest of the last, reward species that did well, and penalize those that did poorly.

Before we can jump into the `doGeneration` function, we need to talk about something called *explicit fitness sharing*.

### Explicit Fitness Sharing

Explicit fitness sharing is a method of determining how many offspring a given species should have, based on the fitness of the species. It is best explained through an example.

Let's say we have a population of ten nets, and two species.

Species 1 has 8 nets.

Species 2 has 2 nets.

The following arrays represent each of the net's fitnesses:

Species 1: [3, 5, 1, 6, 2, 4, 1, 1]

Species 2: [8, 6]

In the next generation, the amount of offspring species 1 and species 2 will have is based off their fitness performance.

The average fitness of the general population is 3.7.

Species 1's average fitness is 2.875.

Species 2's average fitness is 7.

Species 1's average fitness divided by the general population's average fitness is about 2.875/3.7 = 0.78.

Species 2's average fitness divided by the general population's average fitness is about 7/3.7 = 1.89.

So the number of offspring species 1 gets is equal to the ceiling of its size (8) times 0.78, or 7.

And the number of offspring species 2 gets is equal to the ceiling of its size (2) times 1.89, or 4.

Since the total amount of offspring is now bigger than 10, we prune one offspring away from species 2, leaving Species 1 with 7 offspring and Species 2 with 3.

So, to summarize, a species' offspring count is equal to the ceiling of `species.length * species.avgFitness / population.avgFitness`.
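The worked example above can be recomputed with a short sketch (this is my own illustration of the formula, not the article's actual `doGeneration` code):

```
// fitnesses of the two species from the example
const species1 = [3, 5, 1, 6, 2, 4, 1, 1];
const species2 = [8, 6];

const avg = arr => arr.reduce((t, v) => t + v, 0) / arr.length;
const popAvg = avg([...species1, ...species2]); // 3.7

// a species' offspring count: ceil(size * speciesAvg / popAvg)
const offspring = s => Math.ceil(s.length * avg(s) / popAvg);

console.log(offspring(species1)); // 7
console.log(offspring(species2)); // 4 (later pruned to 3 to fit the population size of 10)
```

The ceiling tends to over-allocate slightly, which is why the pruning step is needed.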

### Mutation

Additionally, let's add a function called `mutate` to the NN class in order to combine all three mutations into one single function:

```
mutate() {
  // 80% chance to mutate weights
  if (Math.random() < 0.8) {
    this.mutateWeights();
  }
  // 5% chance to add a connection
  if (Math.random() < 0.05) {
    this.addConnection();
  }
  // 1% chance to add a node
  if (Math.random() < 0.01) {
    this.addNode();
  }
}
```

These chances can be calibrated to best fit your problem.

### Helper Functions

A simple `avgFitness` function can be created for the `Population` class:

```
avgFitness() {
  return population.map(nn => nn.fitness).reduce((t, v) => t + v, 0) / population.length;
}
```

Additionally, some setters and getters are necessary for the client to interact with the `Population` class:

```
get population() {
  return population;
},
get species() {
  return species;
},
get popSize() {
  return popSize;
},
setFitness(i, fitness) {
  if (population[i]) {
    population[i].fitness = fitness;
  }
},
netAt(i) {
  return population[i];
}
```

### Parent Selection

The final step before creating the `doGeneration` function is making a `chooseParent` function, which will take in a species and return one of its members. A good `chooseParent` function returns a random-ish member of the species, but is heavily weighted towards choosing highly fit members.

An algorithm to accomplish this is known as roulette wheel selection:

```
chooseParent(s) { // s is a species array
  // set a threshold equal to a random number between 0 and the sum of all of s's members' fitnesses
  let threshold = Math.random() * s.map(nn => nn.fitness).reduce((t, v) => t + v);
  // create a counter starting at 0
  let sum = 0;
  // for each species member
  return s.find((p, i) => {
    // increment the counter by the member's fitness
    sum += p.fitness;
    // if the counter is bigger than the threshold, return that member of the species
    if (sum > threshold) {
      return true;
    }
  });
}
```

There are other methods, and you can check them out here.
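One such alternative is *tournament selection*: sample a few random members of the species and keep the fittest. The helper below is my own sketch, not part of the article's `Population` code:

```
// a sketch of tournament selection - pick the fittest of a few random samples
function tournamentSelect(s, tournamentSize = 3) {
  let best = null;
  for (let i = 0; i < tournamentSize; i++) {
    // sample a random member (with replacement)
    const candidate = s[Math.floor(Math.random() * s.length)];
    if (best === null || candidate.fitness > best.fitness) {
      best = candidate;
    }
  }
  return best;
}

// larger tournaments select more greedily; a size of 1 is uniform random
const species = [{ fitness: 1 }, { fitness: 5 }, { fitness: 3 }];
console.log(tournamentSelect(species, 3).fitness);
```

Tournament selection has the nice property that only the *ranking* of fitnesses matters, so a single outlier fitness score doesn't dominate selection the way it can with roulette wheels.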

### The `doGeneration` function

After all this time, we finally have all the tools to implement the `doGeneration` function (as a method of the `Population` class) - the backbone of the entire learning algorithm:

```
doGeneration() {
  const popFitness = this.avgFitness(); // get average fitness
  population = []; // clear the population
  // how many individuals are left to be created?
  let amtLeft = popSize;
  species.forEach(s => { // for each species (while the population has been cleared, the species haven't)
    // use explicit fitness sharing to figure out how many new offspring a species should get
    let newIndividualsCount = Math.ceil((s.map(nn => nn.fitness / s.length).reduce((t, v) => t + v, 0) / popFitness) * s.length);
    // deduct that amount from amtLeft
    amtLeft -= newIndividualsCount;
    // if too many individuals have been created, reduce newIndividualsCount to fit within the population size
    if (amtLeft < 0) {
      newIndividualsCount += amtLeft;
      amtLeft = 0;
    }
    // list of offspring
    let newPeeps = [];
    // for each new individual
    for (let i = 0; i < newIndividualsCount; i++) {
      // choose two parents from the species
      const parent1 = this.chooseParent(s);
      const parent2 = this.chooseParent(s);
      let baby; // the new neural net
      // have the fitter parent crossover with the less fit parent
      if (parent1.fitness > parent2.fitness) {
        baby = parent1.crossover(parent2);
      } else {
        baby = parent2.crossover(parent1);
      }
      // mutate the baby's brain (don't take this out of context)
      baby.mutate();
      // add the baby to the new members
      newPeeps.push(baby);
    }
    // add the offspring to the general population
    population.push(...newPeeps);
  });
  // mark all of the old population as vestigial
  species.forEach(s => {
    s.forEach(nn => {
      nn.vestigial = true;
    })
  })
  // remove all dead species
  species.forEach((s, i) => {
    if (s.length === 0) {
      species.splice(i, 1);
    }
  })
  speciatePopulation(); // separate the new population into species
  // get rid of vestigial nets
  species = species.map(s => s.filter(x => !x.vestigial))
  // remove all dead species (again)
  species.forEach((s, i) => {
    if (s.length === 0) {
      species.splice(i, 1);
    }
  })
}
```

And that's it! Now that all of that is done, the outline of the Population function should look something like this:

```
function Population({
  inputs,
  outputs,
  popSize,
  excessCoeff = 1,
  weightDiffCoeff = 2,
  diffThresh = 1.5,
}) {
  let population = [];
  const nodes = [];
  let species = [];
  function speciatePopulation() {
    ...
  }
  for (let i = 0; i < inputs; i++) {
    nodes.push({ id: i, type: "input" })
  }
  for (let i = 0; i < outputs; i++) {
    nodes.push({ id: i + inputs, type: "output" })
  }
  for (let i = 0; i < popSize; i++) {
    const nn = NN({
      nodeGenes: [...nodes.map(node => ({ ...node }))],
      connectionGenes: []
    });
    for (let i = 0; i < Math.floor(neatRandom(initialConnectionsMin, initialConnectionsMax)); i++) {
      nn.addConnection();
    }
    nn.mutate();
    population.push(nn)
  }
  speciatePopulation();
  return {
    get population() {
      return population;
    },
    get species() {
      return species;
    },
    get popSize() {
      return popSize;
    },
    setFitness(i, fitness) {
      if (population[i]) {
        population[i].fitness = fitness;
      }
    },
    netAt(i) {
      return population[i];
    },
    doGeneration() {
      ...
    },
    chooseParent(s) {
      ...
    },
    avgFitness() {
      return population.map(nn => nn.fitness).reduce((t, v) => t + v, 0) / population.length;
    }
  }
}

# Example

So, how could these functions be used to actually solve a problem? Here's a little piece of code I wrote to try to get this implementation of NEAT to solve XOR (it didn't, but it did improve its fitness over time):

```
function xorfitness(net) {
  let fitness = 0;
  fitness += 1 - net.feedForward([0, 0, 1])[0];
  fitness += net.feedForward([1, 0, 1])[0];
  fitness += net.feedForward([0, 1, 1])[0];
  fitness += 1 - net.feedForward([1, 1, 1])[0];
  return Math.max((fitness * 100 - 200), 1) ** 2;
}
// create a population with 3 inputs (num1, num2, and bias) and 1 output (the result of xor)
const pop = Population({
  inputs: 3,
  outputs: 1,
  popSize: 128
})
for (let i = 0; i < 300; i++) { // do 300 generations
  pop.population.forEach(net => { // for each net
    net.fitness = xorfitness(net); // calculate net fitness
  })
  // conduct a generation based on the fitness scores
  pop.doGeneration();
}
// calculate the fitness of the end generation
pop.population.forEach(net => {
  net.fitness = xorfitness(net);
})
const champ = pop.population.sort((a, b) => b.fitness - a.fitness)[0]; // find the champion
// see how the champion does on approximating XOR (it won't succeed)
console.log(champ.feedForward([0, 0, 1])[0]) // 0.5055776837087795
console.log(champ.feedForward([1, 0, 1])[0]) // 0.8682121626427614
console.log(champ.feedForward([0, 1, 1])[0]) // 0.8355539727852697
console.log(champ.feedForward([1, 1, 1])[0]) // 0.9654170839476316
```

# Conclusions

Though my implementation of NEAT failed to solve XOR, it did solve the problem of getting an AI to walk. You can see the source code for my use of NEAT here. This is the actual Walking AI.

Since my implementation of NEAT seems to work functionally, I presume my parameters or selection algorithm must be flawed in some way. If anyone has any recommendations about how to improve my code, or any optimizations to make, feel free to leave them in the comments!

I hope my article helped you learn about NEAT, and thank you for reading!
