<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lanskoy Kirill</title>
    <description>The latest articles on DEV Community by Lanskoy Kirill (@lanskoyk).</description>
    <link>https://dev.to/lanskoyk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1903107%2Ffe41fbcc-0736-4a19-8caf-c0982c7b4fdf.jpg</url>
      <title>DEV Community: Lanskoy Kirill</title>
      <link>https://dev.to/lanskoyk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lanskoyk"/>
    <language>en</language>
    <item>
      <title>I want to share and gain knowledge in the IT field, where? Comparison of the main places for blogs, articles, videos about IT</title>
      <dc:creator>Lanskoy Kirill</dc:creator>
      <pubDate>Fri, 17 Jan 2025 15:49:18 +0000</pubDate>
      <link>https://dev.to/lanskoyk/i-want-to-share-and-gain-knowledge-in-the-it-field-where-comparison-of-the-main-places-for-blogs-6bj</link>
      <guid>https://dev.to/lanskoyk/i-want-to-share-and-gain-knowledge-in-the-it-field-where-comparison-of-the-main-places-for-blogs-6bj</guid>
      <description>&lt;p&gt;When you come to the programming sphere, the first thing you are interested in is: "getting started with Unity", "how to take your first steps in C++", then you come back, wanting to help the rest of the IT community with what you have learned as a developer ("Creating a NEAT algorithm for Unity", "What is UART and how does it work". All these topics have one thing in common: they do not relate to the topic of the mastodon StackOverflow, as this is an opinion, information that quickly becomes outdated and is too vague. In order not to keep in your head a bunch of personal blogs-sites that you trust, you transfer this process of memory and search to a source that, at the expense of you and others like you, helps good authors and dumps bad ones, this is how Hacker News works. And it also removes the requirement to deal with the CMS and pay for hosting, and firstly, not everyone is ready to pay, and secondly, if a person does not pay for hosting, then we simply lose useful information from this site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's start with Medium&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2j2elxew6urej2bbsqt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2j2elxew6urej2bbsqt.gif" alt="Image description" width="1500" height="1000"&gt;&lt;/a&gt;&lt;br&gt;
Medium appeared in 2012 and is among the top 100 most popular sites. It started as a good general-purpose resource from which many other resources grew, for example Towards Data Science and FreeCodeCamp. Strictly speaking, Medium is not an IT platform but a community of communities, some of which are about programming. There have been reports of strained relations with FreeCodeCamp and Hackernoon in the past. For me, the main disadvantage is the lack of downvotes. Promotion works through publications such as Towards Data Science, and paid articles are usually not accepted into them, which helps avoid the worst of it. It is hard to say exactly how many programmers are on Medium, since no such statistics exist, but judging by the subscriber counts of the programming publications, somewhere around 5-8 million. And then there is the problem with claps: when you see 37 claps, this is the only place where you cannot tell whether 37 people liked the article or one person pressed the button 37 times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Habr&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1xrow9cxsmj7wy23ed6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1xrow9cxsmj7wy23ed6.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
I come from the CIS, which drew me into one of the first communities of this kind: Habr (launched in 2006; it is really more of an ecosystem, with a Q&amp;amp;A section and a separate freelancing site cross-promoted from the main feed). "Write an article on Habr" has become a fixed expression among Russian-speaking programmers. However, one decision both made it grow rapidly and denied it the chance to become popular in America and Europe: it unites some 90% of the IT specialists of the post-Soviet countries, but as a result its main language is Russian. An English version of Habr launched recently; when I asked why they did not add automatic translation the way GeeksForGeeks does, the answer was: "Before implementing the English version, we considered such a solution and found it inappropriate." For me, its principles are among the most scalable and friendly to new authors: your first article is checked by a moderator who genuinely wants to approve it (yes, really, I have seen this myself and can give examples in the comments), though listicles like "the five best functions" will not get through. This check applies only to the first article; after that the community decides, and if you lied or your article is an advertisement, it will simply be voted into the negative, and if that happens often you go back to read-only. The problem of trolls and bot farms is solved by requiring that you publish an article before you can vote (commenting is open to everyone).
It is probably the largest such community right now, with 11-13 million users, but it is as well known in the CIS as it is unknown in the rest of the world, which is a pity. What I value is the ability to downvote (articles are, by the way, rarely rejected outright) and the invite system, which keeps the main feed uncluttered and makes it remarkably easy for new authors to get noticed; on top of that, almost any article sparks a real discussion of its content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DEV Community and Hackernoon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6lcdyrjyz7vnlofwiqs.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6lcdyrjyz7vnlofwiqs.jpeg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
Good resources with friendly communities, at roughly 7 million and 3.5 million users respectively, hosting plenty of interesting articles published directly on the platforms. In my experience, Google does them no favors: their articles rarely surface for me in search results, so I search on the sites themselves. The reactions give a good sense of an article's reception, though I would like the ability to downvote.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hacker News&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft92757fxp983nj3o8le3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft92757fxp983nj3o8le3.png" alt="Image description" width="310" height="163"&gt;&lt;/a&gt;&lt;br&gt;
The second-oldest of the well-known resources on this list (2007). It is the only site here whose creators clearly believed that "the house is a fine house when good folks are within": the content itself is simply wonderful thanks to its long history, but this is a place where you post a link to your own site or Medium page rather than the article itself. For me, the experience came down to the UX, because, well... it really does look like a site from the late 1990s, and again there is no downvote. That said, even the views my articles received on Dev.to and Medium most likely came via HN, along with r/programming (more on that later). There is no classification by tags, which is very limiting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71q232myk0ub0igles0t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71q232myk0ub0igles0t.jpeg" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
Or rather, certain subreddits: r/programming, r/MachineLearning, and so on. They are somewhat reminiscent of Medium, but with greater reach simply because there are more people; for me, Reddit has been a very pleasant place to share experience and knowledge. I just do not understand why r/MachineLearning has only 2.9 million members when, in theory, there are incomparably more ML specialists and beginners out there. You cannot, however, type "what is a smart pointer in C++ reddit" into a search bar and expect a proper answer; this is again a place where you keep a personal blog and share posts from it, or from arXiv, say. There is the option to downvote a bad author. My personal experience was not great: post any link on Reddit and the chance of a shadowban or account suspension shoots up, and you cannot get by without links, so I have been writing appeals for a month and a half :(.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YouTube&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa32z7jnb24ksaeyiiib7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa32z7jnb24ksaeyiiib7.jpeg" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;
I think many people forget about it, although it is in fact the largest place where information on all kinds of programming topics is easily accessible; the odds of finding what you need there are the highest of anywhere on this list. In terms of channel promotion, though, it is Medium without the communities. And despite hosting the largest amount of information, it is frustrating that a channel's author can delete comments, which sometimes makes pointing out errors futile and deprives everyone else of reasonable criticism if the author so chooses. You are left with a question you cannot answer: did the author really explain everything that well, or did he delete the corrections?&lt;/p&gt;

&lt;p&gt;My personal experience of publishing the same article:&lt;br&gt;
Medium, Habr, Dev.to&lt;br&gt;
On Medium I got three views; on Dev.to, 27; on Habr, 6.7k, even though I posted while most of the Habr audience was asleep, along with comments full of questions and suggested improvements.&lt;/p&gt;

&lt;p&gt;Oh, you have almost reached the end! For me, of all these options, Habr remains the most comfortable. BUT that is just my opinion; perhaps, despite my best efforts, I missed or overlooked something, or you simply disagree. If so, write in the comments, no one will delete them :)&lt;/p&gt;

</description>
      <category>programming</category>
      <category>community</category>
    </item>
    <item>
      <title>Creating a genetic algorithm for a neural network and a neural network for graphic games and video games using Python and NumPy</title>
      <dc:creator>Lanskoy Kirill</dc:creator>
      <pubDate>Fri, 13 Dec 2024 20:24:38 +0000</pubDate>
      <link>https://dev.to/lanskoyk/creating-a-genetic-algorithm-for-a-neural-network-and-a-neural-network-for-graphic-games-and-video-3i3o</link>
      <guid>https://dev.to/lanskoyk/creating-a-genetic-algorithm-for-a-neural-network-and-a-neural-network-for-graphic-games-and-video-3i3o</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmatvan9io7r65j6q30c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmatvan9io7r65j6q30c.png" alt="Image description" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today I will show how to build a Genetic Algorithm (GA) for a neural network so that the network can play different games. I tried it on Pong and Flappy Bird, and it performed very well. If you have not read the first article, "Creating a simple and efficient genetic algorithm for a neural network with Python and NumPy", I recommend starting there, since this article modifies the code shown in it.&lt;/p&gt;

&lt;p&gt;I split the code into two scripts: in one, the neural network plays the game; in the other, it learns and makes decisions (the genetic algorithm itself). The game code is a function that returns a fitness value (needed to rank the neural networks, for example by how long one lasted or how many points it earned). The code for the two games therefore appears at the end of the article. The genetic algorithms for Pong and for Flappy Bird differ only in their parameters. Starting from the script written and described in the previous article, I created a heavily modified genetic algorithm for Pong, which I will describe in the most detail, since it is what I built on when creating the GA for Flappy Bird.&lt;/p&gt;
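&lt;p&gt;The two-script split can be sketched as a minimal interface (a standalone toy, not the actual ANNPong script: play_game, RandomNet, and decide are illustrative names, and the real game renders with pygame instead of scoring dummy states):&lt;/p&gt;

```python
import numpy as np

def play_game(net, n_steps=100):
    """Run one episode and return a fitness value (here, a toy score)."""
    rng = np.random.default_rng(0)
    fitness = 0
    for _ in range(n_steps):
        state = rng.random(6)              # game state fed to the network
        left, right = net.decide(state)    # the network picks an action
        fitness += 1 if (left or right) else 0   # toy scoring rule
    return fitness

class RandomNet:
    """Stand-in for the article's Network class: random weights, argmax."""
    def __init__(self):
        self.W = np.random.randn(6, 3)
    def decide(self, state):
        out = state @ self.W
        a = int(np.argmax(out))
        return a == 0, a == 1    # (moveLeft, moveRight); neither if a == 2

# The GA script only ever sees one fitness number per network:
population = [RandomNet() for _ in range(5)]
scores = {net: play_game(net) for net in population}
```

The GA never looks inside the game; as long as the game function maps a network to a number, any game can be plugged in, which is exactly what makes swapping Pong for Flappy Bird later only a matter of parameters.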

&lt;p&gt;First, we need to import modules, lists, and variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import random
import ANNPong as anp
import pygame as pg
import sys
from pygame.locals import *
pg.init()
listNet = {}
NewNet = []
goodNet = []
timeNN = 0
moveRight = False
moveLeft = False
epoch = 0
mainClock = pg.time.Clock()
WINDOWWIDTH = 800
WINDOWHEIGHT = 500
windowSurface = pg.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT), 0, 32)
pg.display.set_caption('ANN Pong')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;ANNPong is the script with the game&lt;br&gt;
listNet, NewNet, goodNet - collections of neural networks (more detail later)&lt;br&gt;
timeNN - the fitness value returned by the game&lt;br&gt;
moveRight, moveLeft - the neural network's chosen direction of movement&lt;br&gt;
epoch - epoch counter&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def sigmoid(x):
    return 1/(1 + np.exp(-x))

class Network():
    def __init__(self):
        self.H1 = np.random.randn(6, 12)
        self.H2 = np.random.randn(12, 6)
        self.O1 = np.random.randn(6, 3)
        self.BH1 = np.random.randn(12)
        self.BH2 = np.random.randn(6)
        self.BO1 = np.random.randn(3)
        self.epoch = 0

    def predict(self, x, first, second):
        nas = x @ self.H1 + self.BH1
        nas = sigmoid(nas)
        nas = nas @ self.H2 + self.BH2
        nas = sigmoid(nas)
        nas = nas @ self.O1  + self.BO1
        nas = sigmoid(nas)
        if nas[0] &amp;gt; nas[1] and nas[0] &amp;gt; nas[2]:
            first = True
            second = False
            return first, second
        elif nas[1] &amp;gt; nas[0] and nas[1] &amp;gt; nas[2]:
            first = False
            second = True
            return first, second
        elif nas[2] &amp;gt; nas[0] and nas[2] &amp;gt; nas[1]:
            first = False
            second = False
            return first, second
        else:
            first = False
            second = False
            return first, second
    # Note: this method is shadowed by the self.epoch attribute set in
    # __init__, so generation-zero networks report epoch 0.
    def epoch(self, a):
        return 0


class Network1():
    def __init__(self, H1, H2, O1, BH1, BH2, BO1, ep):
        self.H1 = H1
        self.H2 = H2
        self.O1 = O1
        self.BH1 = BH1
        self.BH2 = BH2
        self.BO1 = BO1
        self.epoch = ep

    def predict(self, x, first, second):
        nas = x @ self.H1 + self.BH1
        nas = sigmoid(nas)
        nas = nas @ self.H2 + self.BH2
        nas = sigmoid(nas)
        nas = nas @ self.O1 + self.BO1
        nas = sigmoid(nas)
        if nas[0] &amp;gt; nas[1] and nas[0] &amp;gt; nas[2]:
            first = True
            second = False
            return first, second
        elif nas[1] &amp;gt; nas[0] and nas[1] &amp;gt; nas[2]:
            first = False
            second = True
            return first, second
        elif nas[2] &amp;gt; nas[0] and nas[2] &amp;gt; nas[1]:
            first = False
            second = False
            return first, second
        else:
            first = False
            second = False
            return first, second
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The sigmoid is used as the activation function.&lt;br&gt;
The Network class defines the parameters of the neural network, and its predict method tells us where to move in the game (nas is short for "network answer"). The epoch method is meant to report the generation a network appeared in; for generation zero it is 0, while the Network1() class stores the generation in a dedicated attribute instead.&lt;br&gt;
&lt;/p&gt;
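&lt;p&gt;As an aside, the if/elif chain over the three outputs is just an argmax in disguise; a standalone check (toy vectors, not the article's code):&lt;/p&gt;

```python
import numpy as np

def decide_chain(nas):
    # The if/elif chain from predict(): act on the largest of three outputs
    if nas[0] > nas[1] and nas[0] > nas[2]:
        return True, False        # first action
    elif nas[1] > nas[0] and nas[1] > nas[2]:
        return False, True        # second action
    else:
        return False, False       # third action (a tie also falls through here)

def decide_argmax(nas):
    # Same decision via np.argmax (on a tie, argmax picks the first index,
    # where the chain above falls through to False, False)
    a = int(np.argmax(nas))
    return a == 0, a == 1

sample = np.array([0.2, 0.9, 0.4])
```

Aside from tie-breaking, both formulations pick the same action, so the chain could be collapsed to two lines if desired.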
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for s in range (1000):
    s = Network()
    timeNN = anp.NNPong(s)
    listNet.update({
        s : timeNN
    })

listNet = dict(sorted(listNet.items(), key=lambda item: item[1]))
NewNet = listNet.keys()
goodNet = list(NewNet)
NewNet = goodNet[:10]
listNet = {}
goodNet = NewNet
anp.NPong(NewNet[0])
print(str(epoch) + " epoch")
print(NewNet[0].epoch)
print('next')
anp.NPong(NewNet[1])
print(NewNet[1].epoch)
print('next')
anp.NPong(NewNet[2])
print(NewNet[2].epoch)
print('next')
anp.NPong(NewNet[3])
print(NewNet[3].epoch)
print('next')
anp.NPong(NewNet[4])
print(NewNet[4].epoch)
print('next')
anp.NPong(NewNet[5])
print(NewNet[5].epoch)
print('next')
anp.NPong(NewNet[6])
print(NewNet[6].epoch)
print('next')
anp.NPong(NewNet[7])
print(NewNet[7].epoch)
print('next')
anp.NPong(NewNet[8])
print(NewNet[8].epoch)
print('next')
anp.NPong(NewNet[9])
print(NewNet[9].epoch)
print('that is all')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we run neural networks with randomly created weights and deliberately select the 10 worst of them, so that the genetic algorithm has to do all the work of raising them))) and then show them playing.&lt;br&gt;
In more detail:&lt;br&gt;
The fitness value returned by the game code is written to timeNN, and each network is stored in listNet together with its timeNN. After the loop, we sort the dictionary by fitness, write the networks from listNet into NewNet, and keep only ten of them.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for g in range(990):
    parent1 = random.choice(NewNet)
    parent2 = random.choice(NewNet)
    ch1H = np.vstack((parent1.H1[:3], parent2.H1[3:])) * random.uniform(-2, 2)
    ch2H = np.vstack((parent1.H2[:6], parent2.H2[6:])) * random.uniform(-2, 2)
    ch1O = np.vstack((parent1. O1[:3], parent2. O1[3:])) * random.uniform(-2, 2)
    chB1 = parent1.BH1 * random.uniform(-2, 2)
    chB2 = parent2.BH2 * random.uniform(-2, 2)
    chB3 = parent2.BO1 * random.uniform(-2, 2)
    g = Network1(ch1H, ch2H, ch1O, chB1, chB2, chB3, 1)
    goodNet.append(g)
NewNet = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here crossover and mutation take place (these steps were described in more detail in the first article).&lt;br&gt;
&lt;/p&gt;
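&lt;p&gt;On concrete numbers the vstack crossover is easy to see. A standalone toy example with 4x2 matrices in place of the real weight shapes:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
parent1_H = np.ones((4, 2))          # toy weights: all rows are 1s
parent2_H = np.full((4, 2), 2.0)     # toy weights: all rows are 2s

# Crossover: first half of the rows from parent1, second half from parent2
child_H = np.vstack((parent1_H[:2], parent2_H[2:]))

# Mutation: scale the whole child by one random factor in [-2, 2]
factor = rng.uniform(-2, 2)
child_H_mutated = child_H * factor
```

Here `child_H` ends up as two rows of 1s followed by two rows of 2s; the shape of the parents is preserved, which is what lets the child be dropped straight into Network1.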
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while True:
    epoch += 1
    print(str(epoch) + " epoch")
    for s in goodNet:
        timeNN = anp.NNPong(s)
        listNet.update({
            s : timeNN
        })
    goodNet =[]
    listNet = dict(sorted(listNet.items(), key=lambda item: item[1], reverse=True))
    goodNet = list(listNet.keys())
    NewNet.append(goodNet[0])
    goodNet = list(listNet.values())
    for i in listNet:
        a = goodNet[0]
        if listNet.get(i) == a:
            NewNet.append(i)
    goodNet = list(NewNet)
    listNet = {}
    try:
        print(NewNet[0].epoch)
        anp.NPong(NewNet[0])
        print('next')
        print(NewNet[1].epoch)
        anp.NPong(NewNet[1])
        print('next')
        print(NewNet[2].epoch)
        anp.NPong(NewNet[2])
        print('next')
        print(NewNet[3].epoch)
        anp.NPong(NewNet[3])
        print('next')
        print(NewNet[4].epoch)
        anp.NPong(NewNet[4])
        print('next')

        print(NewNet[5].epoch)
        anp.NPong(NewNet[5])
        print('next')
        print(NewNet[6].epoch)
        anp.NPong(NewNet[6])
        print('next')
        print(NewNet[7].epoch)
        anp.NPong(NewNet[7])
        print('next')
    except IndexError:
        print('that is all')

    for g in range(1000 - len(NewNet)):
        parent1 = random.choice(NewNet)
        parent2 = random.choice(NewNet)
        ch1H = np.vstack((parent1.H1[:3], parent2.H1[3:])) * random.uniform(-2, 2)
        ch2H = np.vstack((parent1.H2[:6], parent2.H2[6:])) * random.uniform(-2, 2)
        ch1O = np.vstack((parent1. O1[:3], parent2. O1[3:])) * random.uniform(-2, 2)
        chB1 = parent1.BH1 * random.uniform(-2, 2)
        chB2 = parent2.BH2 * random.uniform(-2, 2)
        chB3 = parent2.BO1 * random.uniform(-2, 2)
        g = Network1(ch1H, ch2H, ch1O, chB1, chB2, chB3, epoch)
        goodNet.append(g)
    print(len(NewNet))
    print(len(goodNet))
    NewNet = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;At this point we are largely repeating ourselves, so I will only explain what has not been said before:&lt;br&gt;
We take the first network in the sorted list, one of the best of the epoch, and compare its fitness with the rest, since very often several networks achieve exactly the same result. All of these tied leaders take part in crossover, and the display code has to allow for there being fewer leaders than expected in a given epoch. We also carry the leaders unchanged into the next epoch, since offspring can be worse than their parents; this keeps the population from degrading.&lt;br&gt;
That covers the first script!&lt;br&gt;
Let's move on to the game code. I will only explain the parts that concern AI training (a link to the code is at the end).&lt;br&gt;
In Pong, the neural network plays twice: the first time the ball bounces off to the left, the second time to the right.&lt;br&gt;
*whGo is a variable in the game code (short for "where to go")&lt;br&gt;
We return the elapsed time as the fitness value. The game has two almost identical functions; the second one renders everything on screen so that we can watch the progress after each epoch. In the non-rendering function, the game counts as completed once the network has survived more than 8000 updates.&lt;br&gt;
After months of work and improvements I had a working learning algorithm for Pong, but to be sure, I decided to test the AI not on my own game but on one created by someone else (a test of omnivorousness)))). I chose the pygame Flappy Bird from this video: &lt;a href="https://youtu.be/7IqrZb0Sotw?feature=shared" rel="noopener noreferrer"&gt;https://youtu.be/7IqrZb0Sotw?feature=shared&lt;/a&gt;&lt;br&gt;
I changed the game slightly for the neural network; for example, I added variables for the distance from the bird to the pipes. There are three values for each of the up to three pipe pairs on screen (the height of each pipe in the pair, y, and the distance along x), nine in total. After a collision the function restarts, and a third parameter, rep, records which restart it is: when it reaches three, the game returns the fitness value to the genetic algorithm, and when it is zero, the time variable is reset to 0. Instead of writing two near-identical functions, I simply check the checkNN flag: when it is True, the screen is updated. I also modified the training code&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while True:
    for event in pg.event.get():
        if event.type == KEYDOWN:
            if event.key == K_1:
                showNN = True
    epoch += 1
    print(str(epoch) + " epoch")
    if epoch &amp;lt; 10:
        # For the first ten epochs, the game returns survival time (varRe = 0)
        for s in goodNet:
            timeNN = anp.NPong(s, False, 0, 0)
            listNet[s] = timeNN
    else:
        # Afterwards it returns the number of pipes passed (varRe = 1)
        for s in goodNet:
            timeNN = anp.NPong(s, False, 0, 1)
            listNet[s] = timeNN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After the tenth epoch, thanks to the last parameter, which we change to one (in the game code I called this parameter varRe, from "variant of return"), the game returns not the elapsed time but the number of pipes passed before the collision; the neural network learns better this way.&lt;br&gt;
&lt;/p&gt;
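&lt;p&gt;The switch described above can be sketched as follows (varRe is the article's parameter; episode_fitness and the argument names are illustrative stand-ins for the real game function):&lt;/p&gt;

```python
def episode_fitness(survival_time, pipes_passed, varRe):
    """Fitness the GA receives, depending on varRe ("variant of return").

    varRe == 0: reward raw survival time (first ten epochs)
    varRe == 1: reward pipes cleared before the collision (afterwards)
    """
    if varRe == 0:
        return survival_time
    return pipes_passed

# Early training rewards just staying alive; later training rewards progress
early = episode_fitness(survival_time=4200, pipes_passed=3, varRe=0)
late = episode_fitness(survival_time=4200, pipes_passed=3, varRe=1)
```

Switching the reward signal mid-training like this is a simple form of curriculum: survival time gives a dense signal early on, while pipes passed is the metric that actually matters once the birds stop crashing immediately.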
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;howALot = 1000 - len(NewNet)
    if howALot &amp;lt; 40:
        howALot = 40
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;These three lines are needed for the case where very many networks in the previous epoch finished with the same result: without a floor on the number of offspring, the algorithm could stop learning because it would have nothing left to learn from :-).&lt;/p&gt;

&lt;p&gt;Afterwards I updated and sped up my GA for Flappy Bird: all birds are now launched simultaneously, so training on a CPU went from roughly 3-5 hours down to 5-10 minutes, a speedup of dozens of times! How it works I suggest you see for yourself; it is a small, useful recap of what has been covered.&lt;/p&gt;

&lt;p&gt;That's all, if you have any questions, write in the comments, bye!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There is still a lot ahead of us; this is the foundation for what comes next. I am now working on implementing a full AI in an artificial environment with the help of evolutionary algorithms, and it will be interesting!&lt;/strong&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.youtube.com/watch?feature=shared%25%7D&amp;amp;v=hUtOGad6vTU" rel="noopener noreferrer"&gt;
      youtube.com
    &lt;/a&gt;
&lt;/div&gt;




&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.youtube.com/watch?feature=shared&amp;amp;v=G7KhgcIZPpU" rel="noopener noreferrer"&gt;
      youtube.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;Codes: &lt;a href="https://github.com/LanskoyKirill/GenNumPy.git" rel="noopener noreferrer"&gt;https://github.com/LanskoyKirill/GenNumPy.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On my site: &lt;a href="https://selfrobotics.space/2024/12/13/creating-a-genetic-algorithm-for-a-neural-network-and-a-neural-network-for-graphic-games-and-video-games-using-python-and-numpy/" rel="noopener noreferrer"&gt;https://selfrobotics.space/2024/12/13/creating-a-genetic-algorithm-for-a-neural-network-and-a-neural-network-for-graphic-games-and-video-games-using-python-and-numpy/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>Creating a simple and efficient genetic algorithm for a neural network with Python and NumPy</title>
      <dc:creator>Lanskoy Kirill</dc:creator>
      <pubDate>Tue, 10 Dec 2024 22:18:51 +0000</pubDate>
      <link>https://dev.to/lanskoyk/creating-a-simple-and-efficient-genetic-algorithm-for-a-neural-network-with-python-and-numpy-2f98</link>
      <guid>https://dev.to/lanskoyk/creating-a-simple-and-efficient-genetic-algorithm-for-a-neural-network-with-python-and-numpy-2f98</guid>
      <description>&lt;p&gt;It is the first article from course about evolution algorithms in ML.&lt;/p&gt;

&lt;p&gt;A genetic algorithm is useful when you know the inputs of your neural network but not what the correct output should be. It can, for example, be used to play Google's Dinosaur game or Flappy Bird: you do not know the right output for any given frame, but you can rank candidates by how viable they are, for example by how long they survive. Such a ranking criterion is called a fitness function.&lt;/p&gt;
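&lt;p&gt;The idea of ranking candidates by a fitness function fits in a few lines (a standalone sketch with illustrative names; real candidates would be neural networks):&lt;/p&gt;

```python
import random

# Each candidate is just a number here; in this course it is a neural network
candidates = [random.Random(i).random() for i in range(10)]

def fitness(c):
    # Stand-in fitness function: e.g. how long a Flappy Bird agent survived
    return c * 100

# Selection step: keep the best half, exactly as a GA would
survivors = sorted(candidates, key=fitness, reverse=True)[:5]
```

No correct output is ever specified; the only supervision is the ordering induced by the fitness function, which is what distinguishes this from backpropagation.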

&lt;p&gt;I was never able to find such an algorithm that was working, simple, and usable at the same time, so I set out to create my own lightweight, simple, properly working genetic algorithm.&lt;/p&gt;

&lt;p&gt;My goal is not to drag this article out or torture readers with its length, so let's get straight to the code. As already mentioned, the code is simple, so most of it does not need essay-length explanations.&lt;/p&gt;

&lt;p&gt;First we need to import the modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import random
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we add a dataset and its answers - not to run backpropagation, but simply to count the number of correct answers. You can later test on the other variants, which are commented out for now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
y = np.array([[0],[1],[1], [0], [0], [0], [0], [1], [1]])

#x = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [0, 0, 0], [1, 1, 0], [1, 1, 1]])
#y = np.array([[1],[0], [0], [1], [0], [1], [0], [1], [1]])

#x = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
#y = np.array([[1],[0],[1], [0], [1], [0], [1], [0], [1]])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add lists and activation functions. The meaning of the lists will become clear later. The first activation function is the sigmoid, and the second is the threshold.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listNet = []
NewNet = []
goodNET = []
GoodNet0 = []
GoodNet1 = []
GoodNet2 = []
GoodNet3 = []
GoodNet4 = []
GoodNet5 = []
GoodNet6 = []
good = 0
epoch = 0

def sigmoid(x):
    return 1/(1 + np.exp(-x)) 
def finfunc(x):
    if x[0] &amp;gt;= 0.5:
        x[0] = 1
        return x[0]

    else:
        x[0] = 0
        return x[0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
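&lt;p&gt;As a quick standalone sanity check (not part of the article's script), the two activation functions behave like this:&lt;/p&gt;

```python
import numpy as np

# Standalone sanity check of the two activation functions above.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def finfunc(x):
    # threshold: snap the single output to 0 or 1
    if x[0] >= 0.5:
        x[0] = 1
    else:
        x[0] = 0
    return x[0]

mid = sigmoid(0.0)               # exactly 0.5 at the origin
hi = finfunc(np.array([0.7]))    # rounds up to 1
lo = finfunc(np.array([0.3]))    # rounds down to 0
```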



&lt;p&gt;Next, we create two classes. The first is used to build the initial population, the second for all subsequent generations: the first time the weights must be created randomly, while afterwards they are only crossed and mutated. The __init__() function creates or receives the weights, predict() runs the forward pass for the algorithm itself and for scoring the best candidates, and Fpredict() differs in that it also returns the answer together with the fitness value, so we can print numbers to the screen and watch the training stages. On the output layer, the sigmoid is applied first, to pull the answer toward one of the options, and only then the threshold function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Network():
    def __init__(self):
        self.H1 = np.random.randn(3, 6)
        self.O1 = np.random.randn(6, 1)

    def predict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1

    def Fpredict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1
        return t2, good
class Network1():
    def __init__(self, H1, O1):
        self.H1 = H1
        self.O1 = O1


    def predict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1
    def Fpredict(self, x, y):
        t1 = x @ self.H1
        t1 = sigmoid(t1)
        t2 = t1 @ self.O1
        t2 = sigmoid(t2)
        t2 = finfunc(t2)
        if t2 == y[0]:
            global good
            good += 1
        return t2, good
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
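&lt;p&gt;A minimal standalone check of the shapes these classes work with: a three-element input is mapped to six hidden activations and then to one output.&lt;/p&gt;

```python
import numpy as np

# Minimal standalone check of the weight shapes used above:
# 3 inputs, 6 hidden neurons, 1 output.
class Network():
    def __init__(self):
        self.H1 = np.random.randn(3, 6)
        self.O1 = np.random.randn(6, 1)

net = Network()
hidden = net.H1.shape    # (3, 6)
out = net.O1.shape       # (6, 1)

# one forward step through the hidden layer: result has 6 activations
t1 = 1 / (1 + np.exp(-(np.array([1, 0, 1]) @ net.H1)))
```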



&lt;p&gt;We print the first answers together with the variable good, which serves as the fitness function here, and then reset it for the next neural network. The print("wait0") line (you can write anything you like here) is just a separator, so that the answers of different neural networks are not confused with one another.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s = Network()
print(s.Fpredict(x[0], y[0]))
print(s.Fpredict(x[1], y[1]))
print(s.Fpredict(x[2], y[2]))
print(s.Fpredict(x[3], y[3]))
print("wait0")
good = 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first generation loop. Here, and in all subsequent generations, we give each network only six of the questions, so that later we can check how well it copes with inputs it has never seen; in other words, we test it for rote memorization, which does happen. Depending on how many answers a network gets right, we assign it to one of the tiers. If many answers are correct, we should support such a network and duplicate it, so that after mutation there are more smart individuals. To see why, imagine that among 100 people there is one genius: one is not enough for everyone, so his genius would fade away in the following generations, and the network would either learn very slowly or not at all. To avoid this, we add extra copies of the networks with many correct answers. At the end we empty the main listNet list, refill it from the GoodNet lists in order from best to worst, and cut it down to the 100 best individuals for the subsequent mutation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for s in range (1000):
    s = Network()
    good = 0
    s.predict(x[0], y[0])
    s.predict(x[1], y[1])
    s.predict(x[2], y[2])
    s.predict(x[3], y[3])
    s.predict(x[4], y[4])
    s.predict(x[5], y[5])
    if good == 6:
        GoodNet6.append(s)
        # duplicate strong networks into their own tier
        for r in range(15):
            GoodNet6.append(s)
    elif good == 5:
        GoodNet5.append(s)
        for r in range(10):
            GoodNet5.append(s)
    elif good == 4:
        GoodNet4.append(s)
        for r in range(5):
            GoodNet4.append(s)
    elif good == 3:
        GoodNet3.append(s)
    elif good == 2:
        GoodNet2.append(s)
    elif good == 1:
        GoodNet1.append(s)
    elif good == 0:
        GoodNet0.append(s)
    good = 0
listNet = []
listNet.extend(GoodNet6)
listNet.extend(GoodNet5)
listNet.extend(GoodNet4)
listNet.extend(GoodNet3)
listNet.extend(GoodNet2)
listNet.extend(GoodNet1)
GoodNet1 = []
GoodNet2 = []
GoodNet3 = []
GoodNet4 = []
GoodNet5 = []
GoodNet6 = []
goodNET = listNet[:100]
listNet = goodNET
goodNET = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The crossover and mutation itself: we take one part of the weights from the first parent and the rest from the second, mutate the result, and get a child in the NewNet list; this is repeated 1000 times.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for g in range(1000):
    parent1 = random.choice(listNet)
    parent2 = random.choice(listNet)
    ch1H = np.vstack((parent1.H1[:1], parent2.H1[1:])) * random.uniform(-0.2, 0.2)
    ch1O = parent1.O1 * random.uniform(-0.2, 0.2)
    g = Network1(ch1H, ch1O)
    NewNet.append(g)
listNet = NewNet
NewNet = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
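&lt;p&gt;To make the row-split explicit, here is the same np.vstack crossover on toy matrices (ones standing in for one parent's weights, zeros for the other's), outside the training script:&lt;/p&gt;

```python
import numpy as np

# Toy illustration of the row-split crossover used above:
# the child takes the first row of H1 from one parent
# and the remaining rows from the other.
p1 = np.ones((3, 6))    # stand-in for parent1.H1
p2 = np.zeros((3, 6))   # stand-in for parent2.H1
child = np.vstack((p1[:1], p2[1:]))
```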



&lt;p&gt;Starting from this part of the code we use Network1(), since we are now crossing and mutating rather than creating weights randomly. The loop runs 1000 times; this is a hyperparameter, so you can choose the number of epochs yourself (15 was enough for me). We print the answers on the first epoch, and the 1000th epoch gives the final version (if you use, say, 20 epochs, print on the 20th instead). The code here repeats the previous loop, so I will not describe it again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in range(1000):
    good = 0
    epoch += 1
    for s in listNet:
      good = 0
      s.predict(x[0], y[0])
      s.predict(x[1], y[1])
      s.predict(x[2], y[2])
      s.predict(x[3], y[3])
      s.predict(x[4], y[4])
      s.predict(x[5], y[5])
      if good == 6:
          GoodNet6.append(s)
          for r in range(15):
              GoodNet6.append(s)
      elif good == 5:
          GoodNet5.append(s)
          for r in range(10):
              GoodNet5.append(s)
      elif good == 4:
          GoodNet4.append(s)
          for r in range(5):
              GoodNet4.append(s)
      elif good == 3:
          GoodNet3.append(s)
      elif good == 2:
          GoodNet2.append(s)
      elif good == 1:
          GoodNet1.append(s)
      elif good == 0:
          GoodNet0.append(s)
      good = 0
    listNet = []
    listNet.extend(GoodNet6)
    listNet.extend(GoodNet5)
    listNet.extend(GoodNet4)
    listNet.extend(GoodNet3)
    listNet.extend(GoodNet2)
    listNet.extend(GoodNet1)
    GoodNet1 = []
    GoodNet2 = []
    GoodNet3 = []
    GoodNet4 = []
    GoodNet5 = []
    GoodNet6 = []
    goodNET = listNet[:100]
    listNet = goodNET
    goodNET = []
    if epoch == 1000:

        print(listNet[0].Fpredict(x[0], y[0]))
        print(listNet[0].Fpredict(x[1], y[1]))
        print(listNet[0].Fpredict(x[2], y[2]))
        print(listNet[0].Fpredict(x[3], y[3]))
        print(listNet[0].Fpredict(x[4], y[4]))
        print(listNet[0].Fpredict(x[5], y[5]))
        print(listNet[0].Fpredict(x[6], y[6]))
        print(listNet[0].Fpredict(x[7], y[7]))
        print(listNet[0].Fpredict(x[8], y[8]))

        good = 0
        print('wait')
    elif epoch == 1:

        good = 0
        print(listNet[0].Fpredict(x[0], y[0]))
        print(listNet[0].Fpredict(x[1], y[1]))
        print(listNet[0].Fpredict(x[2], y[2]))
        print(listNet[0].Fpredict(x[3], y[3]))
        print('wait1')
    for g in range(1000):
        parent1 = random.choice(listNet)

        parent2 = random.choice(listNet)
        ch1H = np.vstack((parent1.H1[:1], parent2.H1[1:])) * random.uniform(-2, 2)
        ch1O = parent1.O1 * random.uniform(-2, 2)
        g = Network1(ch1H, ch1O)
        NewNet.append(g)
    listNet = NewNet
    NewNet = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all. The pattern the neural network has to find is this: the final answer depends on one particular input (the first, the second, or the third number), and the rest must be ignored. You can also try logical operations (XOR, NOT, AND, ...); in that case, change the number of inputs in the network class to two. I also followed the rule that the number of neurons in the hidden layer equals the number of inputs multiplied by two; it worked for me, but you can try your own variants. It is also very important to give the network a balanced training set, so that the number of answers of one kind equals the number of the other; otherwise the network will give the same answer to everything (if there are more a's, it will answer a to everything and nothing will come of it). Finally, give it sufficiently different options in the training sample so that it grasps the pattern: if you build an XOR block, for example, you must include the case with two ones, and for logical operations in general you will have to give all the options, because there are too few of them and otherwise it will not understand anything.&lt;br&gt;
That's it!!! Next article (must read!): Soon…&lt;br&gt;
Code: &lt;a href="https://github.com/LanskoyKirill/GenNumPy.git" rel="noopener noreferrer"&gt;https://github.com/LanskoyKirill/GenNumPy.git&lt;/a&gt;&lt;/p&gt;
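&lt;p&gt;As a sketch of the XOR adaptation described above: two inputs instead of three, all four input combinations, balanced answers, and a hidden layer sized by the same inputs-times-two rule. The weight shapes below follow from that rule and are my assumption, not code from the repository.&lt;/p&gt;

```python
import numpy as np

# Hypothetical XOR setup: all four input combinations must be given,
# and the answers are balanced (two 0s, two 1s).
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# hidden neurons = inputs * 2, per the rule used in the article
H1 = np.random.randn(2, 4)
O1 = np.random.randn(4, 1)
hidden = (x @ H1).shape   # (4, 4)
```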

&lt;p&gt;My site (it may be undergoing rework): selfrobotics.space&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
