<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Project-DC</title>
    <description>The latest articles on DEV Community by Project-DC (@projectdc).</description>
    <link>https://dev.to/projectdc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2950%2Fba64492e-d951-4acc-b728-8c7dca4bee22.png</url>
      <title>DEV Community: Project-DC</title>
      <link>https://dev.to/projectdc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/projectdc"/>
    <language>en</language>
    <item>
      <title>Studying logs using VitaBoard</title>
      <dc:creator>Dhairya Jain</dc:creator>
      <pubDate>Sun, 06 Sep 2020 22:05:14 +0000</pubDate>
      <link>https://dev.to/projectdc/guidelines-about-vitaboard-2m36</link>
      <guid>https://dev.to/projectdc/guidelines-about-vitaboard-2m36</guid>
<description>&lt;p&gt;This post is the final one in a three-part series about PyGeneses, and it focuses mainly on one of the packages in PyGeneses: VitaBoard. If you are unfamiliar with PyGeneses and want to know about it, these articles will help you get up to speed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/projectdc/introduction-to-pygeneses-26oc"&gt;Introduction to PyGeneses&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/projectdc/getting-started-with-pygeneses-1co2"&gt;Getting started with PyGeneses&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, now let's start with VitaBoard.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is VitaBoard?
&lt;/h2&gt;

&lt;p&gt;VitaBoard provides an advanced, interactive dashboard to study the agents after their training phase is over. When an agent dies, its lifecycle, which contains all the actions it has performed, is written to a log file. These log files are used as the input to VitaBoard, which then allows the user to visualize the agent’s life more easily. VitaBoard provides the user with a life visualizer, group statistics, and a genetic history visualizer. It allows users to identify and understand behaviours shown by a particular agent, with the other agents and the environment being the factors affecting them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with VitaBoard
&lt;/h2&gt;

&lt;p&gt;To start using VitaBoard, PyGeneses must be installed on your system. If you have forgotten how that is done, or have never installed it, you can find the instructions &lt;a href="https://dev.to/projectdc/getting-started-with-pygeneses-1co2"&gt;here&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
All you need to run VitaBoard is Python and PyGeneses.&lt;/p&gt;

&lt;h4&gt;
  
  
  Steps to run
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Run the following command in your terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@programmer~:$ vitaboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open localhost:5000 or 127.0.0.1:5000 in any browser.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Various features of VitaBoard
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;VitaViz
This is the first screen that the user sees when the VitaBoard opens in the browser.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FqGI9nZZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/53b5lxbt1trwf2gz5k9b.jpg" alt="VitaViz" width="880" height="418"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the VitaViz tab, which is used to visualize the lifecycle of a particular agent. An agent’s lifecycle can be visualized using its log file, which is generated when that agent dies.&lt;br&gt;
To visualize the lifecycle, enter the location of the log file in the first field and, in the second field, set the speed at which you would like to see the simulation (if you enter 1, the simulation runs at 1 frame/sec).&lt;/p&gt;

&lt;p&gt;After entering the details and hitting ‘Run’, a pygame window opens and displays the simulation. The window looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ooPCmLLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3al5nxsuzf0u1505fol9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ooPCmLLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3al5nxsuzf0u1505fol9.JPG" alt="Visualizer" width="880" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;VitaGroups
This is the second tab, VitaGroups, which is used to form clusters of agents based on their neural network embeddings. This tab looks like this.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W0ipCU2v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k6x2bbdyg1b4gzgthegx.png" alt="VitaGroups" width="880" height="417"&gt;
The input here is again a location of log files, but where VitaViz takes the location of a single log file, VitaGroups takes the location of the entire folder containing the log files. Make sure that this folder has an embeddings folder inside it along with the log files. After entering the location, press the ‘Get Groups’ button to generate and display the cluster graph. The data points are produced by reducing the agents’ trained embeddings to two dimensions using t-SNE (t-distributed Stochastic Neighbor Embedding).
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5JbfAN8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bdp40gvqdbdygj1xko6k.png" alt="t-SNE" width="880" height="430"&gt;
In this graph, the user can click on any node and the name of that agent will be displayed below; clicking on the name will then let the user visualize the life of that agent, similar to the VitaViz tab.
&lt;/li&gt;
&lt;li&gt;VitaStats
The next tab is VitaStats, which helps the user visualize various statistics about the agents.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DjBd3Db8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nsjb13two9xsqtfygrjc.png" alt="VitaStats" width="880" height="410"&gt;
The location of log files is entered here in the same way as in the VitaGroups tab (the path to the folder of logs). After pressing the ‘Get Stats’ button, various graphs are generated and displayed.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tmudvsm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gzhrnqkea9y85y1uu0aw.png" alt="Stats" width="880" height="409"&gt;
Here the user can click on any point of any graph to get a list of the agents born at that particular timestamp. This list appears below the graphs, and clicking on any list item lets the user visualize the life of that agent.
The first graph plots, against time, the average death age of agents born at each timestamp. This can be used to study trends in the lifespans of the agents.
The second graph plots, against time, the variance in the death ages of agents born at each timestamp. This graph can be used to gauge the similarities/dissimilarities in the death ages of agents born at the same time and living in similar circumstances.
The third graph plots Quality of Life against time; by quality of life we mean the count of agents born at a particular timestamp who survived for more than 50 ticks.
&lt;/li&gt;
&lt;li&gt;VitaLineage
The last tab is VitaLineage, which is used to visualize the family tree of a particular agent.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tm8jGi4n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jgavzezgrhu6yrdru5mp.png" alt="VitaLineage" width="880" height="418"&gt;
Give the location of a single log file of the agent and click on ‘Get Tree’. This gives the family tree of that agent. Click on any node to visualize the life of the agent.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gjmXBsUc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y2164l9f4lk76rgr84ra.png" alt="familytree" width="880" height="341"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This was all about VitaBoard and the features it has to offer. This is an open-source project and is also enlisted in Hacktoberfest. I hope you will consider contributing to improve and enhance it. The GitHub repo for this project can be found &lt;a href="https://github.com/Project-DC/pygeneses"&gt;here&lt;/a&gt;. Hope to see you there.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>reinforcementlearning</category>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Getting started with PyGeneses</title>
      <dc:creator>Siddhartha Dhar Choudhury</dc:creator>
      <pubDate>Sun, 06 Sep 2020 15:12:48 +0000</pubDate>
      <link>https://dev.to/projectdc/getting-started-with-pygeneses-1co2</link>
      <guid>https://dev.to/projectdc/getting-started-with-pygeneses-1co2</guid>
<description>&lt;p&gt;What is PyGeneses? What is this blog post about? Well, if you have no idea what PyGeneses is, then I would suggest you first go through the &lt;a href="https://dev.to/projectdc/introduction-to-pygeneses-26oc"&gt;introductory post&lt;/a&gt;. If you know the answers to these two questions, then you are in the right place. Let’s get started.&lt;/p&gt;

&lt;p&gt;In this post we will go through the installation process and PyGeneses’ training and hyperparameter tuning packages.&lt;/p&gt;

&lt;h1&gt;
  
  
  Installation
&lt;/h1&gt;

&lt;p&gt;PyGeneses can be installed using pip, either on your local system or on a cloud-based platform; the steps are the same in both cases. Let’s install PyGeneses:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;user@programmer:~&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;git+https://github.com/Project-DC/pygeneses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It will soon be available on PyPI; till then, installation is done from the official GitHub repo for &lt;a href="https://github.com/Project-DC/pygeneses"&gt;PyGeneses&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  Let's start training
&lt;/h1&gt;
&lt;h2&gt;
  
  
  pygeneses.envs
&lt;/h2&gt;

&lt;p&gt;Now that we have installed PyGeneses, let’s write some code to train the PrimaVita agents. First, we will look at the most basic example: training agents with the default hyperparameter settings and the REINFORCE algorithm (we will be adding more RL algorithms in the near future).&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
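The gist embedded at this point does not render in the feed; as an untested sketch, assuming the module path named in the steps below, the three lines look like this:

```python
# Sketch of the basic training script described in this post.
# The module path follows the steps below; treat it as an assumption.
from pygeneses.envs.prima_vita import PrimaVita

model = PrimaVita()  # instantiate with the default hyperparameters
model.run()          # train the agents (REINFORCE by default)
```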



&lt;p&gt;Yeah it is just these 3 lines. The three steps to train agents are:-&lt;/p&gt;

&lt;p&gt;1) Import PrimaVita (First life) class from envs.prima_vita.&lt;/p&gt;

&lt;p&gt;2) Instantiate the PrimaVita class.&lt;/p&gt;

&lt;p&gt;3) Call the run() method on the PrimaVita object.&lt;/p&gt;

&lt;p&gt;That was super simple, wasn’t it? This default setting trains the agents in ‘bot’ mode, which means you will not be able to see the environment while training. In case you want to see the environment throughout training, you just need to change the mode to ‘human’:-&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
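Again the gist does not render here; a sketch of the same call with the mode switched, where the keyword name ‘mode’ is an assumption:

```python
# Same training script, but rendering the environment during training.
# The keyword argument name is assumed, not verified against the API.
from pygeneses.envs.prima_vita import PrimaVita

model = PrimaVita(mode="human")
model.run()
```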


&lt;p&gt;Let us now look at an example where the default hyperparameters are changed:-&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
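In place of the missing gist, a sketch of overriding hyperparameters; the keyword and the dictionary keys are illustrative stand-ins (see the official docs for the real names):

```python
# Overriding two hyperparameters via a dictionary, as described below.
# 'params_dic' and the key names are illustrative, not the real API.
from pygeneses.envs.prima_vita import PrimaVita

model = PrimaVita(params_dic={
    "initial_energy": 200,     # starting energy of each agent
    "model_update_time": 10,   # ticks between model updates
})
model.run()
```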


&lt;p&gt;You just have to pass a dictionary with hyperparameter names as keys (here the initial energy of agents and the time in ticks after which the model is updated are changed) and the values you want to set. For a more detailed list of available hyperparameters, refer to the &lt;a href="https://project-dc.github.io/docs"&gt;official docs&lt;/a&gt;. Just one more line and you can customize your training.&lt;/p&gt;

&lt;h2&gt;
  
  
  pygeneses.hypertune
&lt;/h2&gt;

&lt;p&gt;Now let’s move on to the hyperparameter tuning package (hypertune) of PyGeneses. It provides two different algorithms for trying out different hyperparameter values:-&lt;/p&gt;

&lt;p&gt;1) &lt;strong&gt;Grid Search:&lt;/strong&gt; This takes in a pool of hyperparameter values as input and trains the model with all the possible combinations of these values. This is useful when you want to try out different values of a hyperparameter or group of hyperparameters.&lt;/p&gt;

&lt;p&gt;The following is a snippet of code for applying a grid search over a pool of hyperparameters using hypertune:-&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
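The gist is missing from the feed; a sketch of the grid search call, following the three steps listed below (argument names and order are assumptions):

```python
# Grid search over a pool of hyperparameter values using HyperTune.
# All argument names and their order are assumptions based on the text.
from pygeneses.hypertune import HyperTune

tuner = HyperTune(
    "PrimaVita",                               # environment class name
    ["initial_energy", "model_update_time"],   # hyperparameters to vary
    [[100, 200], [5, 10]],                     # candidate values (2D list)
    2,                                         # logs to generate per run
)
tuner.hypertuner()
```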


&lt;p&gt;This too takes just 3 lines of code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JNPRgNWp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/38rkrosckqdyasryof3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JNPRgNWp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/38rkrosckqdyasryof3u.png" alt="Alt Text" width="500" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again the three steps are as follows:-&lt;/p&gt;

&lt;p&gt;1) Import HyperTune class from pygeneses.hypertune.&lt;/p&gt;

&lt;p&gt;2) Instantiate the HyperTune class with the name of the environment class, the list of hyperparameter names, their candidate values (a 2D list) and the number of logs to be generated for each run.&lt;/p&gt;

&lt;p&gt;3) Finally, call the hypertuner() method of the HyperTune class.&lt;/p&gt;

&lt;p&gt;The values are to be listed in order in which the hyperparameters are listed.&lt;/p&gt;

&lt;p&gt;Moving on to the next algorithm:-&lt;/p&gt;

&lt;p&gt;2) &lt;strong&gt;Randomized Search:&lt;/strong&gt; This is similar to grid search in that it too takes a pool of hyperparameter values, but instead of training the model with all possible combinations, it randomly selects a subset of those combinations based on a probability.&lt;/p&gt;

&lt;p&gt;Here is the code example:-&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
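As before, the gist does not render; a sketch of the randomized search variant, which only adds randomize_percent (the other argument names remain assumptions):

```python
# Randomized search: same as grid search plus randomize_percent,
# the fraction of combinations to sample. Names are assumptions.
from pygeneses.hypertune import HyperTune

tuner = HyperTune(
    "PrimaVita",
    ["initial_energy", "model_update_time"],
    [[100, 200], [5, 10]],
    2,
    randomize_percent=0.5,   # train on half of the 4 combinations
)
tuner.hypertuner()
```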


&lt;p&gt;The only change between this code and the grid search code is the extra parameter randomize_percent, which tells the randomized search algorithm what percentage of combinations to pick up randomly from the pool of available hyperparameter combinations.&lt;/p&gt;
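Independently of PyGeneses, the difference between the two strategies can be shown in a few lines of plain Python (the hyperparameter names here are illustrative only):

```python
import itertools
import random

# Illustrative pool of hyperparameter values (not real PyGeneses names).
pool = {
    "initial_energy": [100, 200, 300],
    "model_update_time": [5, 10],
}

# Grid search trains on every combination in the pool.
grid = list(itertools.product(*pool.values()))
print(len(grid))  # 6 combinations (3 * 2)

# Randomized search trains on a random fraction of the grid,
# which is what randomize_percent controls.
random.seed(0)
randomize_percent = 0.5
subset = random.sample(grid, k=int(len(grid) * randomize_percent))
print(len(subset))  # 3 combinations
```

Grid search guarantees full coverage of the pool, while randomized search trades coverage for fewer training runs, which matters when each run is an entire simulation.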

&lt;p&gt;This was the end of this part of the PyGeneses blog post series. In the next part we will talk about VitaBoard, an interactive dashboard for visualizing training results in PyGeneses that helps make sense of the actions taken by agents during training and supports studying their behaviour. See you there :)&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>reinforcementlearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Introduction to PyGeneses</title>
      <dc:creator>Siddhartha Dhar Choudhury</dc:creator>
      <pubDate>Sun, 06 Sep 2020 14:58:43 +0000</pubDate>
      <link>https://dev.to/projectdc/introduction-to-pygeneses-26oc</link>
      <guid>https://dev.to/projectdc/introduction-to-pygeneses-26oc</guid>
      <description>&lt;p&gt;If I try to summarize PyGeneses in a single line then it would be something like — “PyGeneses is a PyTorch based Deep Reinforcement Learning (Deep RL) framework that helps users to simulate artificial agents in bio-inspired environments”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jRUQw0Is--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eqterti3t8h7ixcvq5zp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jRUQw0Is--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eqterti3t8h7ixcvq5zp.jpg" alt="Alt Text" width="746" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s break down the above line:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PyTorch:&lt;/strong&gt; PyTorch is a machine learning framework developed by Facebook AI Research (FAIR). It has an intuitive python like syntax which allows faster prototyping and serving of ML/DL models. We use PyTorch for the deep reinforcement learning algorithms that we provide as part of our framework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deep Reinforcement Learning:&lt;/strong&gt; Deep Reinforcement Learning is a technique in Artificial Intelligence that brings together the best of Deep Learning and Reinforcement Learning. Deep learning deals with the concept of neural networks which are capable of learning complex functions from data, and Reinforcement Learning is based on the idea of learning from experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Artificial Agents:&lt;/strong&gt; Artificial agents are simulated and simplified versions of species found in nature. The rules these agents follow closely mimic those of nature, which allows them to display behaviour close to what animals around us (or even human beings) display. Note: These agents are nowhere close to the complexity of living beings on our planet and are just a simplified version of them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bio-inspired environments:&lt;/strong&gt; Bio-inspired environments mimic conditions of survival and rules that we see in our natural environment in simulation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  About the framework
&lt;/h1&gt;

&lt;p&gt;PyGeneses is a collection of four different packages:-&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pygeneses.envs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module allows you to create, configure and tweak the in-built bio-inspired environments. As of now, it provides only a single environment called Prima Vita (First Life), but there’s more coming soon! It lets you set up the entire environment and the species in just a few lines of code, and provides both a high-level API and low-level control over the environment. Training through the API logs every action of an agent so that it can be studied using VitaBoard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pygeneses.models&lt;/strong&gt;&lt;br&gt;
The ‘models’ module is what allows us to import the neural networks which the species uses to learn what to do. As of now, only the default model’s (REINFORCE) implementation is provided, but we will be adding support for custom pluggable networks from v0.2 onwards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pygeneses.hypertune&lt;/strong&gt;&lt;br&gt;
The ‘HyperTune’ package allows us to configure and test out various hyperparameters we can provide for an environment and species (a list of hyperparameters is provided in the Classes section of this documentation). This contains single hyperparameter testing, grid search and randomized search. This allows us to find the best set of hyperparameters to display a type of behavior. This also produces logs which we can study using Vitaboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pygeneses.vitaboard&lt;/strong&gt;&lt;br&gt;
Vitaboard provides an advanced, interactive dashboard to study agents after the training phase. After each agent dies, its actions are written into a log file, and Vitaboard allows us to visualize the agent’s life. It provides us with a life visualizer, group statistics and a genetic history visualizer, and allows us to identify and understand behaviours exhibited by an agent while interacting with the environment or with other agents in the environment.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use cases
&lt;/h1&gt;

&lt;p&gt;Pygeneses can be used for a variety of purposes, the limit only being your creativity and imagination. That being said, some of the use-cases of pygeneses can be:&lt;/p&gt;

&lt;p&gt;1) To understand basic psychology — Since every agent has a limited set of actions to choose from at each time step, the simulation can be studied to infer the basic psychological patterns displayed. This can help in understanding the similarities/dissimilarities between the agents and real-life organisms.&lt;/p&gt;

&lt;p&gt;2) To create applications/games based on the framework — You can create basic applications/games based on our framework. An example can be a productivity app that adds food to the simulation only when the user completes a task, hence staking the lives of the agents of the user’s own simulation in exchange for productivity.&lt;/p&gt;

&lt;p&gt;3) To learn more about Deep Reinforcement Learning — Users can simply tinker with different hyperparameters and models and observe the results visually. This makes for an interesting way to get an insight into the world of Deep Reinforcement Learning.&lt;/p&gt;

&lt;h1&gt;
  
  
  Target Audience
&lt;/h1&gt;

&lt;p&gt;Due to the simple high-level API of PyGeneses, I believe that this tool can be helpful for both programmers and non-programmers who want to understand the nature of behaviours in animals or the concepts of Deep Reinforcement Learning.&lt;/p&gt;

&lt;p&gt;Most of our API requires only 3–4 lines of code to modify the environment according to your needs, thus allowing users with little to no knowledge of ML or Deep RL to use our tool easily.&lt;/p&gt;

&lt;p&gt;We (the developers of PyGeneses) believe that understanding why certain behaviours are displayed is not only an active area of study but also a fun experiment, and through our framework we want to provide a platform for carrying out both of these tasks.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final words
&lt;/h1&gt;

&lt;p&gt;If you are someone who is keen to learn about ML and AI, a scientist who studies behaviour, or even someone who just wants to have some fun experimenting with these artificial agents, then get your local installation of PyGeneses today.&lt;/p&gt;

&lt;p&gt;For a detailed introduction (installation and code examples) follow the subsequent articles on this:-&lt;/p&gt;

&lt;p&gt;1) &lt;a href="https://dev.to/projectdc/getting-started-with-pygeneses-1co2"&gt;Getting started with PyGeneses&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) &lt;a href="https://dev.to/projectdc/guidelines-about-vitaboard-2m36"&gt;Studying logs using VitaBoard&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to contribute to this project or simply want to know more, then follow the &lt;a href="https://project-dc.github.io/docs/"&gt;official docs&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>reinforcementlearning</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
