<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ankit Upadhyay</title>
    <description>The latest articles on DEV Community by Ankit Upadhyay (@ankit_upadhyay_1c38ae52c0).</description>
    <link>https://dev.to/ankit_upadhyay_1c38ae52c0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2500986%2F96b2b1a8-8389-4ad8-a0aa-42ed1088cab4.jpg</url>
      <title>DEV Community: Ankit Upadhyay</title>
      <link>https://dev.to/ankit_upadhyay_1c38ae52c0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ankit_upadhyay_1c38ae52c0"/>
    <language>en</language>
    <item>
      <title>Implementing PPO for Cartpole-v1</title>
      <dc:creator>Ankit Upadhyay</dc:creator>
      <pubDate>Fri, 24 Jan 2025 17:31:43 +0000</pubDate>
      <link>https://dev.to/ankit_upadhyay_1c38ae52c0/implementing-ppo-for-cartpole-v1-1acd</link>
      <guid>https://dev.to/ankit_upadhyay_1c38ae52c0/implementing-ppo-for-cartpole-v1-1acd</guid>
      <description>&lt;p&gt;In my last post I had tried to implement a DQN model for a Chess bot, progressing further I tried to implement PPO for a comparatively simpler problem so that I can actually measure the performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Proximal Policy Optimization?
&lt;/h3&gt;

&lt;p&gt;PPO (Proximal Policy Optimization) trains an agent to model the optimal actions for a given state by leveraging an advantage function. The advantage function is computed using a critic, which evaluates how favorable a specific state and action are relative to the current policy.&lt;/p&gt;
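
&lt;p&gt;The part of PPO that keeps each update close to the old policy is the clipped surrogate objective. Here is a minimal sketch in plain Python; the function name and scalar inputs are mine, for illustration only:&lt;/p&gt;

```python
import math

def ppo_clip_objective(new_log_prob, old_log_prob, advantage, eps=0.2):
    # Probability ratio between the new and old policy for this action
    ratio = math.exp(new_log_prob - old_log_prob)
    # Clip the ratio so a single update cannot move the policy too far
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    # Take the pessimistic (minimum) of the clipped and unclipped surrogates
    return min(ratio * advantage, clipped * advantage)
```

&lt;p&gt;When the advantage is positive the ratio is capped at 1 + eps, and when it is negative it is floored at 1 - eps, so the policy can never be rewarded for drifting too far in one step.&lt;/p&gt;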




&lt;h3&gt;
  
  
  Points of Interest
&lt;/h3&gt;

&lt;p&gt;While PPO has many components that need to be implemented correctly, I would like to cover the ones that were most impactful for me.&lt;/p&gt;

&lt;h4&gt;
  
  
  Log probability instead of Argmax:
&lt;/h4&gt;

&lt;p&gt;One of the first mistakes I made was always taking the action with the maximum probability after the softmax layer. This turned out to be flawed, as it limited exploration. Sampling actions randomly from those probabilities instead proved very useful during training.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dist = torch.distributions.Categorical(probs=prob)
action = dist.sample()
log_prob = dist.log_prob(action)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
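
&lt;p&gt;Outside of PyTorch, the same idea can be sketched with the standard library (a toy stand-in for Categorical sampling, not the code from my notebook):&lt;/p&gt;

```python
import random

def sample_action(probs, rng=random):
    # Draw an action index in proportion to the policy probabilities,
    # mirroring torch.distributions.Categorical(probs=prob).sample()
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

&lt;p&gt;Unlike argmax, low-probability actions still get picked occasionally, which is exactly what keeps exploration alive.&lt;/p&gt;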



&lt;h4&gt;
  
  
  Generalized Advantage Estimation (GAE) instead of TDA
&lt;/h4&gt;

&lt;p&gt;Probably the most important mistake in my implementation was using the one-step Temporal Difference (TD) method to calculate the advantage; this introduced high bias and destabilized the loss.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gae = 0
for t in reversed(range(T)):
    delta = reward_col[t] + self.gamma * value_col[t + 1] * (1 - dones[t]) - value_col[t]
    gae = delta + self.gamma * self.lamda * gae * (1 - dones[t])
    advantage[t] = gae
    return_col[t] = gae + value_col[t]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GAE applies the TD error recursively over multiple timesteps, smoothing the advantage estimate and striking a balance between the high bias of one-step TD and the high variance of Monte Carlo returns.&lt;/p&gt;
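
&lt;p&gt;Stripped of the class context, and ignoring episode boundaries for brevity, the whole estimator fits in a few lines; the function below is my own minimal sketch:&lt;/p&gt;

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    # values carries one extra bootstrap entry: len(values) == len(rewards) + 1
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        # One-step TD error at timestep t
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        # Exponentially weighted sum of future TD errors
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages
```

&lt;p&gt;With lam=0 this collapses to the one-step TD advantage I started with; with lam=1 it becomes a full Monte Carlo estimate.&lt;/p&gt;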




&lt;h3&gt;
  
  
  My Final Implementation
&lt;/h3&gt;

&lt;p&gt;You can check out my final implementation at &lt;a href="https://www.kaggle.com/code/ankitupadhyay12/ppo-cart" rel="noopener noreferrer"&gt;https://www.kaggle.com/code/ankitupadhyay12/ppo-cart&lt;/a&gt;. It's definitely not perfect, but it served as a good jumping-off point. Feel free to point out any mistakes I might have made 🫠😅 &lt;/p&gt;




&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1050111456" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building a Chess Agent using DQN</title>
      <dc:creator>Ankit Upadhyay</dc:creator>
      <pubDate>Sun, 29 Dec 2024 10:51:34 +0000</pubDate>
      <link>https://dev.to/ankit_upadhyay_1c38ae52c0/building-a-chess-agent-using-dqn-40po</link>
      <guid>https://dev.to/ankit_upadhyay_1c38ae52c0/building-a-chess-agent-using-dqn-40po</guid>
      <description>&lt;h3&gt;
  
  
  I recently tried to implement a DQN-based Chess Agent.
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Now, anyone who knows how DQNs and chess work would tell you that's a dumb idea.
&lt;/h4&gt;

&lt;h4&gt;
  
  
  And...it was, but as a beginner I enjoyed it nevertheless. In this article I'll share the insights I learned while working on this.
&lt;/h4&gt;




&lt;h2&gt;
  
  
  Understanding the Environment.
&lt;/h2&gt;

&lt;p&gt;Before I started implementing the Agent itself, I had to familiarize myself with the environment I'd be using and build a custom wrapper on top of it so that it could interact with the Agent during training.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I used the chess environment from the kaggle_environments library.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; from kaggle_environments import make
 env = make("chess", debug=True)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I also used Chessnut, a lightweight Python library that helps parse and validate chess games.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; from Chessnut import Game
 initial_fen = env.state[0]['observation']['board']
 game = Game(initial_fen)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  In this environment, the state of the board is stored in the FEN format.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5uo6m742w3w88gp0km3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5uo6m742w3w88gp0km3.png" alt="Chess Board State" width="702" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It provides a compact way to represent all the pieces on the board and the currently active player. However, since I planned on feeding the input to a neural network, I had to modify the representation of the state.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Converting FEN to Matrix format
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42q628xxo8ftsvzz0535.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42q628xxo8ftsvzz0535.png" alt="FEN to Matrix" width="692" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Since there are 12 different types of pieces on a board, I created 12 channels of 8x8 grids to represent the state of each of those types on the board.&lt;/em&gt;&lt;/p&gt;
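
&lt;p&gt;My notebook does this with a helper called fen_to_board; a minimal sketch of that conversion (the channel ordering here is my own assumption) could look like:&lt;/p&gt;

```python
def fen_to_board(fen):
    # Convert the piece-placement field of a FEN string into
    # 12 channels of 8x8 grids, one channel per piece type
    pieces = 'PNBRQKpnbrqk'
    board = [[[0.0] * 8 for _ in range(8)] for _ in pieces]
    placement = fen.split()[0]
    for row, rank in enumerate(placement.split('/')):
        col = 0
        for ch in rank:
            if ch.isdigit():
                # A digit means that many consecutive empty squares
                col += int(ch)
            else:
                board[pieces.index(ch)][row][col] = 1.0
                col += 1
    return board
```

&lt;p&gt;The result is a 12x8x8 one-hot encoding that slots directly into a convolutional layer expecting 12 input channels.&lt;/p&gt;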




&lt;h3&gt;
  
  
  Creating a Wrapper for the Environment&lt;br&gt;&lt;br&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class EnvCust:
    def __init__(self):
        self.env = make("chess", debug=True)
        self.game = Game(self.env.state[0]['observation']['board'])
        self.action_space = self.game.get_moves()
        self.obs_space = self.env.state[0]['observation']['board']

    def get_action(self):
        return Game(self.env.state[0]['observation']['board']).get_moves()

    def get_obs_space(self):
        return fen_to_board(self.env.state[0]['observation']['board'])

    def step(self, action):
        reward = 0
        g = Game(self.env.state[0]['observation']['board'])
        # Reward captures based on the piece sitting on the destination square
        captured = g.board.get_piece(Game.xy2i(action[2:4]))
        if captured == 'q':
            reward = 7
        elif captured in ('n', 'b', 'r'):
            reward = 4
        elif captured == 'p':
            reward = 2
        g.apply_move(action)
        done = False
        if g.status == 2:
            done = True
            reward = 10
        elif g.status == 1:
            done = True
            reward = -5
        self.env.step([action, 'None'])
        self.action_space = list(self.get_action())
        if self.action_space == []:
            done = True
        else:
            # Opponent replies with a random legal move
            self.env.step(['None', random.choice(self.action_space)])
            g = Game(self.env.state[0]['observation']['board'])
            if g.status == 2:
                reward = -10
                done = True

        self.action_space = list(self.get_action())
        return self.env.state[0]['observation']['board'], reward, done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The point of this wrapper was to provide a reward policy for the agent and a step function which is used to interact with the environment during training.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chessnut was useful for getting information like the legal moves available at the current state of the board and for recognizing checkmates during the game.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I tried to create a reward policy that gives positive points for checkmates and capturing enemy pieces, and negative points for losing the game.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Creating a Replay Buffer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru96c7givvvwi8wiwooe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru96c7givvvwi8wiwooe.png" alt="Replay Buffer" width="791" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The replay buffer stores the (state, action, reward, next state, done) transitions collected during training; mini-batches are later sampled from it at random to compute targets with the target network and backpropagate through the Q-network.&lt;/em&gt;&lt;/p&gt;
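
&lt;p&gt;The ReplayBuffer class referenced in the training code isn't shown in full here; a minimal version (my sketch, not the original) could be:&lt;/p&gt;

```python
import random
from collections import deque

class ReplayBuffer:
    # Minimal sketch of the buffer used during training;
    # the real implementation may differ.
    def __init__(self, buffer_size=10000):
        # deque with maxlen evicts the oldest transitions automatically
        self.buffer = deque(maxlen=buffer_size)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def size(self):
        return len(self.buffer)

    def sample(self, batch_size):
        # Uniform random sampling breaks correlation between consecutive moves
        return random.sample(self.buffer, batch_size)
```

&lt;p&gt;Sampling uniformly at random, rather than replaying moves in order, is what decorrelates the updates and stabilizes DQN training.&lt;/p&gt;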




&lt;h3&gt;
  
  
  Auxiliary Functions
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop114ky07ma3r6a95vqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop114ky07ma3r6a95vqf.png" alt="UCI format to index" width="638" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4boe2on6ejjvgsu1p6sp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4boe2on6ejjvgsu1p6sp.png" alt="index to UCI format" width="652" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Chessnut returns legal actions in UCI format, which looks like 'a2a3'. However, to interact with the neural network I converted each action into a distinct index using a basic pattern: there are 64 squares in total, so I used 64*64 unique indices, one for every (from-square, to-square) pair.&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;I know that not all of the 64*64 moves are legal, but I could handle legality using Chessnut, and the pattern was simple enough.&lt;/strong&gt;&lt;/p&gt;
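
&lt;p&gt;The screenshots above show the actual helpers; a compact sketch of the same 64*64 indexing scheme (the square ordering is an assumption on my part) might look like:&lt;/p&gt;

```python
FILES = 'abcdefgh'

def uci_to_action_index(move):
    # Map a UCI move like 'a2a3' to a single index in [0, 4096)
    from_sq = FILES.index(move[0]) + 8 * (int(move[1]) - 1)
    to_sq = FILES.index(move[2]) + 8 * (int(move[3]) - 1)
    return from_sq * 64 + to_sq

def action_index_to_uci(index):
    # Inverse mapping: recover the UCI string from the index
    from_sq, to_sq = divmod(index, 64)
    return (FILES[from_sq % 8] + str(from_sq // 8 + 1) +
            FILES[to_sq % 8] + str(to_sq // 8 + 1))
```

&lt;p&gt;The two functions are exact inverses, so any legal move survives a round trip through the network's 4096-way output.&lt;/p&gt;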




&lt;h3&gt;
  
  
  Neural Network Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch
import torch.nn as nn
import torch.optim as optim

class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(12, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU()
        )
        self.fc_layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 4096)
        )

    def forward(self, x):
        x = x.unsqueeze(0)  # add a batch dimension for a single board state
        x = self.conv_layers(x)
        x = self.fc_layers(x)
        return x

    def predict(self, state, valid_action_indices):
        with torch.no_grad():
            q_values = self.forward(state)
            q_values = q_values.squeeze(0)

            # Only consider the Q-values of legal moves
            valid_q_values = q_values[valid_action_indices]

            best_action_relative_index = valid_q_values.argmax().item()
            max_q_value = valid_q_values.max()
            best_action_index = valid_action_indices[best_action_relative_index]

            return max_q_value, best_action_index


&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This neural network uses convolutional layers to take in the 12-channel input and uses the valid action indices to restrict the Q-value predictions to legal moves.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Implementing the Agent &lt;br&gt;&lt;br&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = DQN().to(device)           # The current Q-network
target_network = DQN().to(device)  # The target Q-network
target_network.load_state_dict(model.state_dict())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
replay_buffer = ReplayBuffer(buffer_size=10000)
epsilon = 0.5
gamma = 0.99
batch_size = 15

def train(episodes):
    for ep in range(1, episodes + 1):
        print('Episode Number:', ep)
        myenv = EnvCust()
        done = False
        state = myenv.obs_space
        i = 0
        while not done and i &amp;lt; 100:
            actions = myenv.get_action()
            if actions == []:
                break
            a_index = action_index(actions)
            # Epsilon-greedy: explore with a random move, otherwise exploit
            if random.random() &amp;lt; epsilon:
                action = random.choice(actions)
            else:
                input_state = torch.tensor(fen_to_board(state), dtype=torch.float32).to(device)
                q_val, action = model.predict(input_state, a_index)
                action = action_index_to_uci(action)

            next_state, reward, done = myenv.step(action)
            replay_buffer.add(state, action, reward, next_state, done)

            state = next_state
            i = i + 1
            if replay_buffer.size() &amp;gt; batch_size:
                mini_batch = replay_buffer.sample(batch_size)
                # Fresh names so the outer loop's state is not clobbered
                for s, a, r, ns, d in mini_batch:
                    g = Game(ns)
                    act = g.get_moves()
                    ind_a = action_index(act)
                    next_input = torch.tensor(fen_to_board(ns), dtype=torch.float32).to(device)
                    tpred, _ = target_network.predict(next_input, ind_a)
                    target = r + gamma * tpred * (1 - d)

                    act_ind = uci_to_action_index(a)
                    cur_input = torch.tensor(fen_to_board(s), dtype=torch.float32).to(device)
                    current_q_value = model(cur_input)[0, act_ind]

                    loss = (current_q_value - target) ** 2
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()
        # Sync the target network every few episodes
        if ep % 5 == 0:
            target_network.load_state_dict(model.state_dict())



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This was obviously a very basic model which had no chance of actually performing well (and it didn't), but it did help me understand how DQNs work a little better.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8xommo8aa9hcn2210jw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8xommo8aa9hcn2210jw.gif" alt="done and done" width="480" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
