Valery Odinga
Building a Concurrent TCP Chat Server in Go (NetCat Clone)

In this project, we built a simplified version of the classic NetCat ("nc") tool — a TCP-based chat server that allows multiple clients to connect, send messages, and interact in real time.

The goal was not just to recreate a chat system, but to deeply understand:

  • TCP networking
  • Go concurrency (goroutines & channels)
  • State management in concurrent systems
  • Client-server architecture

At its core, the system needed to:

  • Accept multiple client connections
  • Allow clients to send messages
  • Broadcast messages to other clients
  • Track when users join or leave
  • Handle unexpected disconnects (like Ctrl+C)

This introduces a key challenge:

«Multiple clients interacting with shared state at the same time.»


TCP Server Basics

The server listens for incoming connections using:

listener, err := net.Listen("tcp", ":8989")
if err != nil {
    log.Fatal(err)
}

Then continuously accepts clients:

for {
    conn, err := listener.Accept()
    if err != nil {
        continue
    }
    go handleConnection(conn)
}

Important concept:

  • "Accept()" blocks until a client connects
  • Each client is handled in a separate goroutine

This allows multiple users to connect simultaneously.


Goroutines and Concurrency

Each client runs in its own goroutine:

go handleConnection(conn)

This means:

  • One slow client does not block others
  • Each connection is handled independently

However, this introduces a problem:

«Multiple goroutines modifying shared data can cause race conditions.»


The Shared State Problem

We needed to track all connected clients:

var connections = make(map[net.Conn]string)

But multiple goroutines might:

  • Add clients
  • Remove clients
  • Broadcast messages

At the same time.

This can cause:

  • Data corruption
  • Crashes ("fatal error: concurrent map writes")

Solution 1: Mutex

One approach is using a mutex:

mu.Lock()
connections[conn] = name
mu.Unlock()

But this introduces:

  • Complexity
  • Potential deadlocks
  • Performance bottlenecks
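A minimal sketch of the mutex approach, with the map and lock wrapped in one type (the `registry` name and string keys are assumptions for this illustration; the real server keys by `net.Conn`):

```go
package main

import (
	"fmt"
	"sync"
)

// registry guards the connection map with a mutex so concurrent
// goroutines can add and remove entries without corrupting it.
type registry struct {
	mu    sync.Mutex
	conns map[string]bool // keyed by client name in this sketch
}

func (r *registry) add(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.conns[name] = true
}

func (r *registry) remove(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.conns, name)
}

func (r *registry) count() int {
	r.mu.Lock()
	defer r.mu.Unlock()
	return len(r.conns)
}

func main() {
	r := &registry{conns: make(map[string]bool)}
	var wg sync.WaitGroup
	// 100 goroutines writing at once: safe only because of the mutex.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			r.add(fmt.Sprintf("user%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(r.count()) // 100
}
```

Without the `Lock`/`Unlock` pairs, the same program would be exactly the "concurrent map writes" crash described above.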

Solution 2: Channels (The Go Way)

Instead of sharing memory, we used channels to communicate changes.

This follows Go’s philosophy:

«Do not communicate by sharing memory; share memory by communicating.»


ChatRoom Architecture

We designed a "ChatRoom" struct:


type ChatRoom struct {
    chatters map[*Client]struct{}
    history  []string

    Register   chan *Client
    Unregister chan *Client
    Broadcast  chan Message
}

  • Only one goroutine manages "chatters" and "history"
  • Other goroutines send events via channels

The Event Loop

The core of the system is the "Run()" method:


for {
    select {
    case client := <-cr.Register:
        // add client, send history, announce the join
    case client := <-cr.Unregister:
        // remove client, close its channel, announce the leave
    case msg := <-cr.Broadcast:
        // append to history, fan out to every other client
    }
}

This acts like a central controller.
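Filled in, the event loop and its surrounding types might look like the sketch below. The `Message` struct and the `Client` fields (`Name`, the `receive` channel) are assumptions based on how they are used later in the post; the real `Client` would also hold the `net.Conn`:

```go
package main

import "fmt"

// Message pairs the text with its sender so the room can skip
// echoing a message back to whoever sent it.
type Message struct {
	From *Client
	Text string
}

// Client is a minimal stand-in; the real type would also hold the net.Conn.
type Client struct {
	Name    string
	receive chan string
}

type ChatRoom struct {
	chatters map[*Client]struct{}
	history  []string

	Register   chan *Client
	Unregister chan *Client
	Broadcast  chan Message
}

func NewChatRoom() *ChatRoom {
	return &ChatRoom{
		chatters:   make(map[*Client]struct{}),
		Register:   make(chan *Client),
		Unregister: make(chan *Client),
		Broadcast:  make(chan Message),
	}
}

// Run is the single goroutine that owns chatters and history.
// No lock is needed because nothing else ever touches them.
func (cr *ChatRoom) Run() {
	for {
		select {
		case client := <-cr.Register:
			cr.chatters[client] = struct{}{}
		case client := <-cr.Unregister:
			if _, ok := cr.chatters[client]; ok {
				delete(cr.chatters, client)
				close(client.receive)
			}
		case msg := <-cr.Broadcast:
			cr.history = append(cr.history, msg.Text)
			for c := range cr.chatters {
				if c != msg.From {
					c.receive <- msg.Text
				}
			}
		}
	}
}

func main() {
	cr := NewChatRoom()
	go cr.Run()

	alice := &Client{Name: "alice", receive: make(chan string, 1)}
	bob := &Client{Name: "bob", receive: make(chan string, 1)}
	cr.Register <- alice
	cr.Register <- bob

	cr.Broadcast <- Message{From: alice, Text: "hello"}
	fmt.Println(<-bob.receive) // hello
}
```

Because `Run()` processes one event at a time, registration, broadcast, and removal can never interleave.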


Handling Events

1. Client Join

  • Ask for name
  • Add to chatters map
  • Send chat history
  • Broadcast join message

cr.chatters[client] = struct{}{}


2. Message Broadcast

  • Append to history
  • Send to all clients except sender

for c := range cr.chatters {
    if c != sender {
        c.receive <- message
    }
}

3. Client Leave

  • Remove from map
  • Close channel
  • Broadcast leave message

Handling Ctrl+C (Unexpected Disconnects)

When a client presses Ctrl+C:

  • TCP connection closes
  • "ReadString()" returns "io.EOF"

We detect this:

if err != nil {
    // client disconnected (io.EOF)
}

And broadcast:

"%s has left the chat"


Message Flow

Here’s how a message travels:

  1. Client sends message
  2. Goroutine reads it
  3. Sends it to "Broadcast" channel
  4. "Run()" receives it
  5. Loops through clients
  6. Sends message to each client’s "receive" channel
  7. Client writer goroutine prints it

Key Concepts

1. TCP is Just a Stream

  • Everything is bytes
  • Messages are manually structured ("\n")

2. Blocking is Normal

  • "Accept()" blocks waiting for connections
  • "ReadString()" blocks waiting for input

But each blocks only within its own goroutine.


3. Goroutines Enable Concurrency

  • Lightweight threads managed by Go
  • Thousands can run efficiently

4. Channels Simplify Concurrency

  • Avoid shared memory issues
  • Centralize state management
  • Create predictable flow

Challenges Faced

  • Handling empty messages
  • Debugging raw string issues
  • Understanding blocking behavior
  • Managing client disconnects
  • Avoiding race conditions

This project goes beyond just building a chat app.

It revealed:

  • How real-time systems work
  • How servers handle multiple users
  • How to design safe concurrent programs

In many ways, this is a mini version of real-world systems like chat apps, multiplayer servers, and messaging platforms.


Building this TCP chat server helped me understand how powerful Go is for concurrent systems.

By combining:

  • TCP networking
  • Goroutines
  • Channels

we can build scalable, real-time applications with relatively simple code.

