Real-Time Trading App: Golang, Kafka, Websockets — Intro & Setup (PART-1)
High-performance real-time trading engine with Golang, Kafka, and Websockets…
This blog will be structured into four parts:
- Introduction & Setup
- Golang Integration with Kafka
- Consumer Service & Websockets Implementation
- Frontend Development
System Design
This article serves as a comprehensive guide to building a real-time trading platform using Golang, Kafka, and Websockets. Let’s delve into the rationale behind our choice of components:
1. Golang: Golang’s direct compilation to machine code and its simplicity make it an ideal choice for low-latency systems. While debates exist about language preferences, Golang stands out for its efficiency and effectiveness in getting the job done.
2. Kafka: Despite the bidirectional capabilities of Websockets, we opt for Kafka for two key reasons. Firstly, Websockets only move data; they don’t persist it, and several downstream services may need to consume the same financial data for timely analysis. Kafka retains messages, so each service can read them independently. Secondly, Kafka’s distributed architecture aligns seamlessly with stock market platforms operating across various countries.
3. Websockets: The need for Websockets arises because Kafka is designed for server-side, multi-host environments. Within a consumer group, a single partition can be consumed by only one consumer, so if every browser tab of the same app were wired directly to Kafka, each tab would need its own consumer and messages would be split between them rather than broadcast to all. Additionally, Kafka client libraries for browsers are scarce, which makes Websockets the more practical choice for fanning data out to clients across platforms.
4. Web App: For the web application, we’ve chosen ReactJS to craft an intuitive and responsive user interface. Leveraging the power of React components and its virtual DOM, we aim to create a seamless and interactive trading experience. The real-time data received from Kafka via Websockets will be efficiently rendered using React components.
Setup
Let’s kick off the implementation. We’ll use Binance’s publicly available WebSocket streams as our data source and subscribe to the following tickers to receive real-time updates:
btcusdt,ethusdt,busdusdt,bnbusdt,ltcusdt,xrpusdt,maticusdt
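Before wiring this feed into Kafka, it helps to see what the raw stream looks like. Below is a minimal sketch of the connection, assuming the gorilla/websocket package and Binance’s combined-stream endpoint; the actual producer service is built out in Part 2.

```go
package main

import (
	"log"
	"strings"

	"github.com/gorilla/websocket"
)

func main() {
	tickers := []string{"btcusdt", "ethusdt", "busdusdt", "bnbusdt", "ltcusdt", "xrpusdt", "maticusdt"}

	// Binance's combined-stream endpoint multiplexes several <symbol>@ticker
	// streams over a single WebSocket connection.
	streams := make([]string, 0, len(tickers))
	for _, t := range tickers {
		streams = append(streams, t+"@ticker")
	}
	url := "wss://stream.binance.com:9443/stream?streams=" + strings.Join(streams, "/")

	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Read a few raw JSON ticker updates and print them.
	for i := 0; i < 5; i++ {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		log.Println(string(msg))
	}
}
```

Each message is a JSON envelope naming the stream it came from plus the ticker payload; in later parts we forward these payloads into Kafka instead of printing them.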
Now that we’ve settled on a data source, it’s time to set up Kafka. I’ll be using the Docker images from Bitnami, wired together with the following docker-compose.yml:
```yaml
version: '3.8'
services:
  zookeeper:
    env_file:
      - ./.env
    image: bitnami/zookeeper
    expose:
      - "2181"
    ports:
      - "2181:2181"
  kafka:
    image: bitnami/kafka
    env_file:
      - ./.env
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
```
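A quick note on the listener configuration above: the broker advertises two listeners, INSIDE (kafka:9093) for traffic within the Docker network and OUTSIDE (localhost:9092) for clients running on the host. As a rough smoke test from the host, a sketch like the following, here assuming the segmentio/kafka-go client (Part 2 covers the Go and Kafka integration in detail), should be able to reach the broker and print its metadata:

```go
package main

import (
	"log"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	// From the host we use the OUTSIDE listener; code running inside the
	// Docker network would dial the INSIDE listener at kafka:9093 instead.
	conn, err := kafka.Dial("tcp", "localhost:9092")
	if err != nil {
		log.Fatalf("cannot reach broker: %v", err)
	}
	defer conn.Close()

	brokers, err := conn.Brokers()
	if err != nil {
		log.Fatalf("metadata request failed: %v", err)
	}
	for _, b := range brokers {
		log.Printf("broker %d at %s:%d", b.ID, b.Host, b.Port)
	}
}
```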
To commence, we’ll set up two essential services: Zookeeper and Kafka. For those unfamiliar with Zookeeper, it serves as a centralized cluster management system developed by Apache. Typically employed in distributed systems, Zookeeper plays a crucial role in addressing questions related to Kafka’s operation, including:
- Determining the broker responsible for handling the publish/subscribe functionality for a given topic and partition.
- Managing the count of nodes or server instances available in the cluster.
- Providing insights into available topics, data retention settings, and more.
While this provides a brief overview, you can delve deeper into Zookeeper’s functionalities here.
The question may arise: why deploy Zookeeper when, for testing and development purposes, we don’t necessarily require multiple nodes?
While Kafka can technically operate without Zookeeper, it’s essential to note that Apache does not recommend doing so in production environments. Hence, for consistency and best practices, we opt to incorporate Zookeeper from the outset, aligning with Apache’s guidelines for a robust and reliable setup.
Here is the **.env** file:
```
KAFKA_HOST=kafka
KAFKA_PORT=9092
# Zookeeper
ALLOW_ANONYMOUS_LOGIN=yes
ZOO_PORT_NUMBER=2181
# Kafka
KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
ALLOW_PLAINTEXT_LISTENER=yes
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
TICKERS="btcusdt,ethusdt,busdusdt,bnbusdt,ltcusdt,xrpusdt,maticusdt"
```
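For reference, here is a minimal sketch of how a Go service could pick up this configuration at runtime. The variable names match the .env file above, but the loadConfig helper itself is only illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// loadConfig is a hypothetical helper that reads the broker address and
// ticker list from the same environment variables defined in .env.
func loadConfig() (broker string, tickers []string) {
	host := os.Getenv("KAFKA_HOST") // "kafka" inside Docker, "localhost" from the host
	port := os.Getenv("KAFKA_PORT") // e.g. "9092"
	broker = fmt.Sprintf("%s:%s", host, port)

	// TICKERS is a comma-separated list; trim the surrounding quotes in case
	// the env loader passes them through literally.
	tickers = strings.Split(strings.Trim(os.Getenv("TICKERS"), `"`), ",")
	return broker, tickers
}

func main() {
	broker, tickers := loadConfig()
	fmt.Println("broker:", broker)
	fmt.Println("tickers:", tickers)
}
```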
Proceed with the following command to initiate the setup:
```bash
sudo docker-compose up -d
```
These steps should suffice for configuring Kafka and Zookeeper in your local environment.
Conclusion
We’ve successfully laid the foundation for our real-time trading application by setting up Kafka and Zookeeper in our local environment. In the upcoming week, be on the lookout for Part 2 of this series, where we’ll delve into the integration of Golang with Kafka. Stay tuned for a deeper exploration of how these technologies synergize to create a robust and efficient real-time trading platform. Happy coding!
Source code: Kamalesh-Seervi/Real-time-trade-app on GitHub (Kafka, WebSockets)