Building a Chess Agent using DQN

Ankit Upadhyay

I recently tried to implement a DQN based Chess Agent.

Now, anyone who knows how DQNs and chess work would tell you that's a dumb idea.

And... it was, but as a beginner I enjoyed it nevertheless. In this article, I'll share the insights I gained while working on this.


Understanding the Environment

Before implementing the Agent itself, I had to familiarize myself with the environment I'd be using and build a custom wrapper on top of it so that the Agent could interact with it during training.

  • I used the chess environment from the kaggle_environments library.

     from kaggle_environments import make
     env = make("chess", debug=True)
    
  • I also used Chessnut, a lightweight Python library that helps parse and validate chess games.

     from Chessnut import Game

     # The environment exposes the board position as a FEN string
     initial_fen = env.state[0]['observation']['board']
     game = Game(initial_fen)
    

In this environment, the state of the board is stored in the FEN format.

Chess Board State

It provides a compact way to represent all the pieces on the board and the currently active player. However, since I planned on feeding the input to a neural network, I had to modify the representation of the state.
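
For reference, this is the FEN string for the standard starting position; it lists the pieces rank by rank from the 8th to the 1st, followed by the side to move, castling rights, en passant target, and move counters:

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1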


Converting FEN to Matrix format

FEN to Matrix

Since there are 12 different piece types on a board (6 per colour), I created 12 channels of 8x8 grids, one per piece type, to represent the occupancy of each type on the board.
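
The conversion itself only appears as an image above, so here is a minimal sketch of a fen_to_board helper consistent with how it is used later; the channel ordering (white P, N, B, R, Q, K followed by black p, n, b, r, q, k) is my assumption:

import numpy as np

# Assumed channel ordering: white pieces first, then black
PIECE_TO_CHANNEL = {p: i for i, p in enumerate("PNBRQKpnbrqk")}

def fen_to_board(fen):
    """Convert a FEN string into a 12x8x8 one-hot array of piece placements."""
    board = np.zeros((12, 8, 8), dtype=np.float32)
    placement = fen.split()[0]        # piece placement field of the FEN
    for rank_idx, rank in enumerate(placement.split("/")):
        file_idx = 0
        for ch in rank:
            if ch.isdigit():          # a digit means that many empty squares
                file_idx += int(ch)
            else:
                board[PIECE_TO_CHANNEL[ch], rank_idx, file_idx] = 1.0
                file_idx += 1
    return board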


Creating a Wrapper for the Environment

import random

class EnvCust:
    def __init__(self):
        self.env = make("chess", debug=True)
        self.game = Game(self.env.state[0]['observation']['board'])
        print(self.env.state[0]['observation']['board'])
        self.action_space = self.game.get_moves()
        self.obs_space = self.env.state[0]['observation']['board']

    def get_action(self):
        # Legal moves (UCI strings) for the current position
        return Game(self.env.state[0]['observation']['board']).get_moves()

    def get_obs_space(self):
        # Current position as a 12x8x8 matrix
        return fen_to_board(self.env.state[0]['observation']['board'])

    def step(self, action):
        reward = 0
        g = Game(self.env.state[0]['observation']['board'])
        # Reward for the enemy piece (black, lowercase) on the destination square
        captured = g.board.get_piece(Game.xy2i(action[2:4]))
        if captured == 'q':
            reward = 7
        elif captured in ('n', 'b', 'r'):
            reward = 4
        elif captured == 'p':
            reward = 2

        g.apply_move(action)
        done = False
        if g.status == 2:    # checkmate delivered by the agent
            done = True
            reward = 10
        elif g.status == 1:  # check
            done = True
            reward = -5

        # Play the agent's move, then a random reply for the opponent
        self.env.step([action, 'None'])
        self.action_space = list(self.get_action())
        if self.action_space == []:
            done = True
        else:
            self.env.step(['None', random.choice(self.action_space)])
            g = Game(self.env.state[0]['observation']['board'])
            if g.status == 2:  # the agent got checkmated
                reward = -10
                done = True

        self.action_space = list(self.get_action())
        return self.env.state[0]['observation']['board'], reward, done

The point of this wrapper was to provide a reward policy for the agent and a step function that the agent uses to interact with the environment during training.

Chessnut was useful for getting information like the legal moves available in the current position and for recognizing checkmates during the game.

I tried to create a reward policy that gives positive points for checkmates and for capturing enemy pieces, and negative points for losing the game.
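
A quick sanity check of the wrapper might look like this (a sketch that just plays one random move for the agent):

myenv = EnvCust()
legal_moves = myenv.get_action()                      # UCI strings, e.g. 'e2e4'
next_fen, reward, done = myenv.step(random.choice(legal_moves))
print(reward, done, next_fen)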


Creating a Replay Buffer

Replay Buffer

The Replay Buffer stores the (state, action, reward, next state) transitions collected during training; mini-batches are later sampled from it at random and used to update the Q-Network, with the Target Network providing the bootstrapped targets.
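
The buffer implementation only appears as an image above; a minimal sketch matching the add, sample, and size calls used in the training loop below could look like this:

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, buffer_size=10000):
        # Old transitions are discarded automatically once the deque is full
        self.buffer = deque(maxlen=buffer_size)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniformly sample a mini-batch of stored transitions
        return random.sample(self.buffer, batch_size)

    def size(self):
        return len(self.buffer)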


Auxiliary Functions

UCI format to index

index to UCI format

Chessnut returns legal actions in UCI format, which looks like 'a2a3'. However, to interact with the Neural Network, I converted each action into a distinct index using a basic pattern: there are 64 squares in total, so I used 64*64 unique indices, one for each (from-square, to-square) pair.
I know that not all of the 64*64 moves would be legal, but I could handle legality using Chessnut, and the pattern was simple enough.
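
The conversion code itself appears only as images above; here is a sketch of helpers consistent with the 64*64 scheme and with the action_index, uci_to_action_index, and action_index_to_uci calls used later. The square_to_index / index_to_square helpers and the a1=0 ordering are my assumptions, and the promotion suffix in moves like 'a7a8q' is simply ignored:

def square_to_index(square):
    """Map a square like 'a2' to 0..63 (a1=0, b1=1, ..., h8=63)."""
    file = ord(square[0]) - ord('a')
    rank = int(square[1]) - 1
    return rank * 8 + file

def index_to_square(index):
    file = index % 8
    rank = index // 8
    return chr(ord('a') + file) + str(rank + 1)

def uci_to_action_index(uci):
    """Map a UCI move like 'a2a3' to a unique index in 0..4095."""
    return square_to_index(uci[0:2]) * 64 + square_to_index(uci[2:4])

def action_index_to_uci(index):
    """Inverse of uci_to_action_index (promotion info is not recovered)."""
    return index_to_square(index // 64) + index_to_square(index % 64)

def action_index(moves):
    """Convert a list of legal UCI moves into their action indices."""
    return [uci_to_action_index(m) for m in moves]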


Neural Network Structure

import torch
import torch.nn as nn
import torch.optim as optim

class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(12, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU()
        )
        self.fc_layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, 4096)   # one Q-value per (from, to) square pair
        )

    def forward(self, x):
        x = x.unsqueeze(0)         # add a batch dimension for a single board state
        x = self.conv_layers(x)
        x = self.fc_layers(x)
        return x

    def predict(self, state, valid_action_indices):
        with torch.no_grad():
            q_values = self.forward(state)
            q_values = q_values.squeeze(0)

            # Only consider the Q-values of the currently legal moves
            valid_q_values = q_values[valid_action_indices]

            best_action_relative_index = valid_q_values.argmax().item()
            max_q_value = valid_q_values.max().item()
            best_action_index = valid_action_indices[best_action_relative_index]

            return max_q_value, best_action_index


This Neural Network uses convolutional layers to process the 12-channel board input, and its predict method uses the valid action indices to restrict the 4096 Q-value outputs to the currently legal moves.
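
For example, picking a greedy move for the starting position could look like this (a sketch reusing the helpers defined above; the network is untrained, so the chosen move is essentially arbitrary):

net = DQN()
legal_moves = Game(initial_fen).get_moves()
state_tensor = torch.tensor(fen_to_board(initial_fen), dtype=torch.float32)
q_value, best_index = net.predict(state_tensor, action_index(legal_moves))
print(action_index_to_uci(best_index), q_value)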


Implementing the Agent

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = DQN().to(device)            # The current Q-network
target_network = DQN().to(device)   # The target Q-network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
replay_buffer = ReplayBuffer(buffer_size=10000)
epsilon = 0.5
gamma = 0.99
batch_size = 15

def train(episodes):
    for ep in range(1, episodes + 1):
        print('Episode Number:', ep)
        myenv = EnvCust()
        done = False
        state = myenv.obs_space
        i = 0
        while not done and i < 100:
            print('i is:', i)
            actions = myenv.get_action()
            if not actions:
                break
            a_index = action_index(actions)
            if random.random() < epsilon:
                # Exploration: play a random legal move
                action = random.choice(actions)
            else:
                # Exploitation: play the legal move with the highest predicted Q-value
                input_state = torch.tensor(fen_to_board(state), dtype=torch.float32).to(device)
                q_val, action = model.predict(input_state, a_index)
                action = action_index_to_uci(action)

            next_state, reward, done = myenv.step(action)
            replay_buffer.add(state, action, reward, next_state, done)

            state = next_state
            i = i + 1
            if replay_buffer.size() > batch_size:
                mini_batch = replay_buffer.sample(batch_size)
                for s, a, r, ns, d in mini_batch:
                    # Bootstrapped target from the target network (no bootstrapping from terminal states)
                    next_moves = Game(ns).get_moves()
                    if d or not next_moves:
                        target = r
                    else:
                        ind_a = action_index(next_moves)
                        next_input = torch.tensor(fen_to_board(ns), dtype=torch.float32).to(device)
                        tpred, _ = target_network.predict(next_input, ind_a)
                        target = r + gamma * tpred

                    # Q-value the current network assigns to the action that was taken
                    act_ind = uci_to_action_index(a)
                    input_state2 = torch.tensor(fen_to_board(s), dtype=torch.float32).to(device)
                    current_q_value = model(input_state2)[0, act_ind]

                    loss = (current_q_value - target) ** 2
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()

        # Sync the target network with the Q-network every 5 episodes
        if ep % 5 == 0:
            target_network.load_state_dict(model.state_dict())

This was obviously a very basic model which had no chance of actually performing well (and it didn't), but it did help me understand how DQNs work a little better.

done and done
