DEV Community

RYU JAEMIN

[BlindSpot] Log 01. Architecture

What is the 'BlindSpot'?

BlindSpot is a 3v3 top-down online shooter I'm developing.
GitHub: https://github.com/ryujm1828/BlindSpot

Whole system architecture

BlindSpot's client and server run independently and communicate over a compact binary protocol.

  • Server: C++ (asynchronous I/O with Boost.Asio)
  • Client: Unity (C#)
  • Protocol: Google Protocol Buffers (Protobuf)
  • IDE: Visual Studio 2026

Why C++ and Unity?

In a real-time multiplayer game, server performance directly affects the user experience.
I chose C++ for the server so it can handle a high volume of packets without delay and provide stable concurrency.
I chose Unity for the client because I've used it before, and this project doesn't call for high-end graphics.

Features

1. Efficient Packet Processing

Using protobuf

syntax = "proto3";
package blindspot;

enum PacketID {
    ID_NONE = 0;
    ID_LOGIN_REQUEST = 1;
    ID_LOGIN_RESPONSE = 2;
    ID_JOIN_ROOM_REQUEST = 3;
    ID_JOIN_ROOM_RESPONSE = 4;
    ID_MAKE_ROOM_REQUEST = 5;
    ID_MAKE_ROOM_RESPONSE = 6;
}
message LoginRequest {
  string name = 1;
  string session_key = 2;  
}

Because both the C++ server and the C# client generate their communication code from this single .proto file, the two sides can never drift out of sync, which prevents human error.
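As a sketch of that build step (the output paths here are illustrative, not necessarily the project's real layout), the same file feeds both code generators:

```shell
# Generate matching serialization code for both sides from one schema.
# Paths are hypothetical examples.
protoc --cpp_out=BlindSpotServer/Protocol blindspot.proto
protoc --csharp_out=BlindSpotClient/Assets/Scripts/Protocol blindspot.proto
```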

How it works

Server -> Client
//BlindSpotServer/Network/Session.h
struct PacketHeader {
    uint16_t length;
    uint16_t id;
};
//...
void Send(uint16_t id, const google::protobuf::Message& msg) {
    std::string payload;
    msg.SerializeToString(&payload);

    uint16_t header_size = sizeof(PacketHeader);
    uint16_t payload_size = static_cast<uint16_t>(payload.size());
    uint16_t total_size = header_size + payload_size;

    // Keep the buffer alive until async_write completes
    auto send_buffer = std::make_shared<std::vector<uint8_t>>(total_size);

    PacketHeader* header = reinterpret_cast<PacketHeader*>(send_buffer->data());
    header->length = total_size;
    header->id = id;

    std::memcpy(send_buffer->data() + header_size, payload.data(), payload_size);

    auto self(shared_from_this());
    boost::asio::async_write(socket_, boost::asio::buffer(*send_buffer),
        [this, self, send_buffer](boost::system::error_code ec, std::size_t /*length*/) {
            if (ec) {
                std::cout << "Send failed: " << ec.message() << std::endl;
                PlayerManager::Instance().Remove(self);
            }
        });
}

When the server sends a response to the client, it first serializes the payload,
then prepends a header containing the packet type identifier (id) and the total packet length.
Finally, it sends the packet to the client via async_write.
Because the Session object that owns Send must not be destroyed while async_write is in flight, its lifetime is extended with auto self(shared_from_this());.

//BlindSpotClient/Assets/Scripts/Network/NetworkManager.cs
private void OnReceiveData(IAsyncResult ar)
    {
        try
        {
            int bytesRead = stream.EndRead(ar);
            if (bytesRead <= 0)
            {
                Debug.Log("[Client] Disconnected from server.");
                CloseConnection();
                return;
            }

            //Add received data to assemble buffer
            byte[] temp = new byte[bytesRead];
            Array.Copy(recvBuffer, 0, temp, 0, bytesRead);
            assembleBuffer.AddRange(temp);

            // Process complete packets
            while (assembleBuffer.Count >= 4) 
            {
                ushort packetSize = BitConverter.ToUInt16(assembleBuffer.ToArray(), 0);

                if (assembleBuffer.Count < packetSize) break;

                ushort packetID = BitConverter.ToUInt16(assembleBuffer.ToArray(), 2);

                byte[] payload = new byte[packetSize - 4];
                Array.Copy(assembleBuffer.ToArray(), 4, payload, 0, payload.Length);

                HandlePacket((PacketID)packetID, payload);

                assembleBuffer.RemoveRange(0, packetSize);
            }

            stream.BeginRead(recvBuffer, 0, recvBuffer.Length, new AsyncCallback(OnReceiveData), null);
        }
        catch (Exception e)
        {
            Debug.LogError($"[Client] Receive Error: {e.Message}");
            CloseConnection();
        }
    }

TCP delivers data as a byte stream, not as discrete messages,
so the client reassembles packets as the data flows in.
The packet size is read from the header: if the buffer does not yet hold a full packet, the client simply waits for more data; once enough bytes have accumulated, the packet is extracted and processed.
(For simplicity this example calls ToArray() on every pass, which copies the buffer; it can be optimized.)

Client -> Server

//BlindSpotClient/Assets/Scripts/Network/NetworkManager.cs
    public void Send(PacketID id, IMessage packet)
    {
        if (client == null || !client.Connected) return;

        try
        {
            byte[] payload = packet.ToByteArray();
            ushort payloadSize = (ushort)payload.Length;
            ushort headerSize = 4;
            ushort totalSize = (ushort)(headerSize + payloadSize);

            byte[] sendBuffer = new byte[totalSize];

            Array.Copy(BitConverter.GetBytes(totalSize), 0, sendBuffer, 0, 2);
            Array.Copy(BitConverter.GetBytes((ushort)id), 0, sendBuffer, 2, 2);
            Array.Copy(payload, 0, sendBuffer, 4, payloadSize);

            stream.Write(sendBuffer, 0, sendBuffer.Length);
            Debug.Log($"[Client] Sent Packet ID: {id}");
        }
        catch (Exception e)
        {
            Debug.LogError($"[Client] Send Error: {e.Message}");
        }
    }

The client builds the header, assembles the complete packet,
and sends it to the server with stream.Write().
This works for now; however, depending on the server architecture, endianness must be taken into account.

//BlindSpotServer/Network/Session.h
void DoRead() {
        auto self(shared_from_this());
        // Wait for data asynchronously
        socket_.async_read_some(boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                if (!ec) {
                    std::cout << "[Debug] Raw Data Length: " << length << std::endl;
                    recv_buffer_.insert(recv_buffer_.end(), data_, data_ + length);

                    while (recv_buffer_.size() >= sizeof(PacketHeader)) {
                        PacketHeader* header = reinterpret_cast<PacketHeader*>(recv_buffer_.data());

                        std::cout << "[Debug] Expected Packet Length: " << header->length << std::endl;
                        std::cout << "[Debug] Current Buffer Size: " << recv_buffer_.size() << std::endl;
                        if (recv_buffer_.size() < header->length) {
                            break; // Not enough data for a full packet
                        }

                        std::cout << "[Debug] Packet ID: " << header->id << ", Length: " << header->length << std::endl;
                        // Process complete packet
                        uint16_t packet_id = header->id;
                        uint8_t* payload = recv_buffer_.data() + sizeof(PacketHeader);
                        uint16_t payload_size = header->length - sizeof(PacketHeader);

                        HandlePacket(packet_id, payload, payload_size);

                        recv_buffer_.erase(recv_buffer_.begin(), recv_buffer_.begin() + header->length);
                    }

                    DoRead(); // Wait for more data
                }
                else {
                    std::cout << "Client disconnected: " << ec.message() << std::endl;
                    PlayerManager::Instance().Remove(shared_from_this());
                }
            });

    }

Like the client, the server receives data as it arrives and reassembles packets from the stream.

Asynchronous I/O

//BlindSpotServer/main.cpp
try {
    boost::asio::io_context io_context;

    std::cout << "Server starting on port " << PORT << "..." << std::endl;
    Server s(io_context, PORT);

    io_context.run(); // Run the event loop; blocks until the server stops
}
catch (std::exception& e) {
    std::cerr << "Exception: " << e.what() << std::endl;
}

Rather than dedicating one thread to each connection, the server detects OS-level asynchronous events through io_context.
This structure lets a small number of threads efficiently manage tens of thousands of connections.

Memory Management

To prevent memory leaks and dangling pointers, the most dangerous hazards in a C++ server, I make heavy use of smart pointers (shared_ptr, unique_ptr).
The Session object inherits from enable_shared_from_this to safely manage its lifetime inside asynchronous handlers.

Scalability

Modifying the networking code every time a new game feature is added is risky.
BlindSpot uses the Packet Handler pattern to automatically branch logic based on the received packet ID.

Security

The server does not trust the client: all logic is designed to operate on the session context, not on a playerId sent by the client.
(If you find any vulnerabilities, please let me know.)

Challenges ahead

Now that creating and joining rooms is implemented, the next step is actual in-game player movement and attacking.
