Guilherme Vassoller Daros

How I Built an Intelligent AgTech Risk Monitoring System: Architecture, Technical Decisions, and Key Learnings

Introduction

This project was developed as part of the Hardware Architecture course at my university. Our team set out to build a simple yet powerful system for experimenting with different technologies, sensors, and hardware components. Since it was our first hands-on experience with sensors and embedded systems, we aimed for a solution that was fast, scalable, and accessible.

In this article, I walk through the project’s architecture, the technologies used, the challenges we faced, and the key lessons learned throughout the development process.


Project Overview

  • Main Features

    • Real-time monitoring: Continuously collects environmental data (temperature, humidity, and luminosity) from sensors connected to an Arduino.
    • Asynchronous data pipeline: Uses a message queue (RabbitMQ) to reliably transmit sensor readings for analysis and storage.
    • Risk analysis engine: Processes sensor data to compute risk levels for pest outbreaks, with multi-tier alert levels.
    • Dashboard interface: Interactive web dashboard built with Next.js that displays real-time and historical visualizations.
    • Scalable architecture: Designed with distributed components that can scale independently and adapt to multiple crop types.
  • Repository: https://github.com/guiDaros/project_agtech_hdwach


Tech Stack

  • Hardware

    • Arduino Uno with environmental sensors (DHT11, HW080, LDR)
    • Raspberry Pi (for backend services)
  • Backend

    • Python 3.8+ with the Flask framework
    • RabbitMQ (CloudAMQP) for asynchronous messaging
    • Redis (Upstash) for near real-time data caching
    • SQLite for historical data storage
    • PySerial to communicate with the Arduino
    • Pandas for data analysis
  • Frontend

    • Next.js (React framework) with TypeScript
    • Tailwind CSS and shadcn/ui for components
    • Recharts for interactive data visualizations
  • Tools

    • Git for version control
    • VSCode as the main editor

System Architecture

High-level system flow

Sensors → Arduino → Message Queue → Backend → Database → Frontend

(Image: development setup during testing and validation.)


Why this architecture was chosen

Asynchronous Messaging

RabbitMQ is used to decouple producers and consumers through asynchronous, message-driven communication. This allows system components to operate independently, remain resilient to failures, and support multiple processing paths such as real-time streaming, historical storage, and analytics. The same structure also enables future extensions, including machine learning services, without changing the ingestion layer.
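
To make the decoupling concrete, here is a minimal sketch of the publishing side using pika, a standard Python client for RabbitMQ. The broker URL, exchange name, and payload fields are placeholders for illustration, not the project's actual configuration:

```python
import json

import pika  # Python client for RabbitMQ

# Placeholder broker URL and exchange name; the project uses CloudAMQP.
AMQP_URL = "amqps://user:pass@host/vhost"
EXCHANGE = "sensor.events"

connection = pika.BlockingConnection(pika.URLParameters(AMQP_URL))
channel = connection.channel()

# A fanout exchange copies every raw event to each bound queue, so
# real-time, storage, and analytics consumers each get their own stream.
channel.exchange_declare(exchange=EXCHANGE, exchange_type="fanout", durable=True)

def publish_reading(reading: dict) -> None:
    """Publish one raw sensor reading as a persistent JSON event."""
    channel.basic_publish(
        exchange=EXCHANGE,
        routing_key="",  # ignored by fanout exchanges
        body=json.dumps(reading),
        properties=pika.BasicProperties(delivery_mode=2),  # survives broker restarts
    )

publish_reading({"temperature": 27.4, "humidity": 81.0, "luminosity": 512})
```

Because the exchange fans events out, adding a new consumer, such as a future machine learning service, is just a new queue binding; the publishing code never changes.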

Separation of System Layers

The system is organized into hardware, backend, and frontend layers, responsible for data acquisition, processing and storage, and visualization. This separation of concerns improves maintainability, simplifies testing, and allows components to evolve and scale independently.
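
As a small illustration of that boundary, the backend can expose cached readings through a plain HTTP endpoint, so the dashboard never needs to know about queues or serial ports. The route, Redis URL, and key name below are assumptions made for the sketch:

```python
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)

# Upstash exposes a standard Redis endpoint; the URL and key name are illustrative.
cache = redis.Redis.from_url("rediss://default:password@host:6379")
LATEST_KEY = "sensors:latest"

@app.route("/api/readings/latest")
def latest_reading():
    """Return the most recent sensor reading cached by the pipeline."""
    raw = cache.get(LATEST_KEY)
    if raw is None:
        return jsonify({"error": "no data yet"}), 404
    return jsonify(json.loads(raw))

if __name__ == "__main__":
    app.run(port=5000)
```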

Why RabbitMQ Instead of Direct HTTP

RabbitMQ was chosen over direct HTTP to handle the realities of a distributed agricultural environment, including network latency, intermittent connectivity, and partial failures. By providing buffering, reliable delivery, and asynchronous processing, the message broker ensures that sensor data is not lost and can be consumed at different rates by downstream services.
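
A consumer-side sketch shows where that reliability comes from: a durable queue buffers events while a service is offline, and manual acknowledgements ensure a message is only removed after it has been processed. The URL, queue name, and store function are placeholders:

```python
import pika

AMQP_URL = "amqps://user:pass@host/vhost"  # placeholder CloudAMQP URL
QUEUE = "sensor.storage"                   # hypothetical queue name

def store(body: bytes) -> None:
    """Stand-in for the real SQLite/Redis persistence step."""
    print(body)

connection = pika.BlockingConnection(pika.URLParameters(AMQP_URL))
channel = connection.channel()

# A durable queue keeps buffering events even while this consumer is down.
channel.queue_declare(queue=QUEUE, durable=True)
channel.basic_qos(prefetch_count=10)  # consume at this service's own pace

def on_message(ch, method, properties, body):
    try:
        store(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Negative-ack with requeue: a transient failure loses no data.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
channel.start_consuming()
```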


How data moves through the system

  • The system follows an event-driven, ELT-oriented data pipeline.

  • Sensor readings are collected by Arduino-connected devices and sent to the backend over a serial connection, where they are published to RabbitMQ as raw events. The data remains unprocessed at this stage to preserve its original form and avoid coupling ingestion with transformation (a minimal serial-to-queue bridge is sketched after this list).

  • RabbitMQ acts as the ingestion layer, routing messages to dedicated queues for different consumers. Raw data is then loaded into storage, with SQLite used for historical persistence and Redis for fast access to recent readings.

  • Downstream analysis services consume messages from the queues, process sensor data, and compute derived metrics such as environmental risk indicators for pest and fungus development (a simplified version of this step appears in the second sketch below). These refined results are then delivered to a React and Next.js frontend, which renders real-time and historical dashboards.
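
The serial-to-queue bridge referenced above can be as small as the sketch below. It assumes the Arduino prints comma-separated temperature, humidity, and luminosity values per line; the actual wire format is defined by the firmware in the repository, and the print call stands in for the RabbitMQ publish:

```python
import json
from typing import Optional

import serial  # PySerial

PORT = "/dev/ttyACM0"  # assumed device path; depends on the host OS
BAUD = 9600

def parse_line(line: str) -> Optional[dict]:
    """Parse one 'temperature,humidity,luminosity' line into a raw event."""
    try:
        temp, hum, lum = (float(v) for v in line.split(","))
    except ValueError:
        return None  # noisy or partial line: skip it rather than crash
    return {"temperature": temp, "humidity": hum, "luminosity": lum}

with serial.Serial(PORT, BAUD, timeout=2) as conn:
    while True:
        raw = conn.readline().decode("utf-8", errors="ignore").strip()
        event = parse_line(raw) if raw else None
        if event is not None:
            # In the real pipeline the event is published to RabbitMQ here.
            print(json.dumps(event))
```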
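
And a simplified version of the consumer side: a tiered risk function, SQLite persistence, and a Redis cache holding the latest reading. The thresholds, table schema, and key name are invented for illustration; the project's actual analysis runs on Pandas:

```python
import json
import sqlite3
import time

import redis

# Illustrative Redis endpoint (the project uses Upstash) and SQLite file.
cache = redis.Redis.from_url("rediss://default:password@host:6379")
db = sqlite3.connect("history.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS readings "
    "(ts REAL, temperature REAL, humidity REAL, luminosity REAL, risk TEXT)"
)

def pest_risk_level(temperature: float, humidity: float) -> str:
    """Map a reading to a multi-tier risk level (thresholds are made up)."""
    if temperature >= 25 and humidity >= 80:
        return "high"
    if temperature >= 20 and humidity >= 60:
        return "moderate"
    return "low"

def handle_event(event: dict) -> None:
    event["risk"] = pest_risk_level(event["temperature"], event["humidity"])
    event["ts"] = time.time()
    # Historical persistence for retrospective analysis...
    db.execute(
        "INSERT INTO readings VALUES (?, ?, ?, ?, ?)",
        (event["ts"], event["temperature"], event["humidity"],
         event["luminosity"], event["risk"]),
    )
    db.commit()
    # ...and a near real-time cache for the dashboard to read.
    cache.set("sensors:latest", json.dumps(event))

handle_event({"temperature": 27.4, "humidity": 81.0, "luminosity": 512})
```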


Challenges and How We Addressed Them

Ensuring Reliable Sensor Data Ingestion

  • One of the main challenges was designing a reliable ingestion mechanism for sensor data in a distributed environment. Sensor readings are generated continuously and originate from hardware components that are inherently prone to noise, temporary failures, and unstable communication.

  • To address this, the system was designed around asynchronous messaging. Instead of tightly coupling data producers and consumers through direct communication, sensor readings are published as events to RabbitMQ. This approach allows data to be buffered, retried, and processed independently of ingestion, increasing fault tolerance and system resilience.

  • By decoupling hardware data acquisition from downstream processing, the system remains operational even when individual services become temporarily unavailable.

Balancing Real-Time Processing and Analytical Flexibility

Another challenge was designing a solution that could handle both immediate real-time visualization and future analytical needs without major architectural changes.

By preserving raw data and separating different types of consumption, the system ensures flexibility. Dashboards, historical analysis, or predictive models can be added or improved independently, without disrupting existing processes.


Results

  • The system successfully integrated hardware sensors, asynchronous data ingestion, backend processing, and a web-based dashboard into a single working solution.

  • Sensor readings were collected, transmitted, processed, and visualized in near real time, enabling continuous monitoring of environmental conditions. Historical data was also stored and accessed for retrospective analysis.

  • The project was demonstrated in a live presentation setting, where the complete data flow—from sensor acquisition to dashboard visualization—was executed in real time. This validation confirmed the correctness of the system integration and the architectural decisions made during development.


Future Work

  • Future iterations of this project will focus on extending analytical capabilities and improving data processing quality. One of the next planned steps is the development of an initial predictive model to analyze historical sensor data and estimate the likelihood of pest or disease outbreaks. This first model will serve as an experimental foundation for more advanced predictive approaches.

  • In parallel, ongoing work is being done to improve the data pipeline itself, with a focus on increasing processing capacity, filtering noisy sensor readings, and producing more reliable inputs for analysis and visualization. These improvements aim to enhance the quality of results without altering the overall system architecture.

  • Additional enhancements may include integrating new sensors, refining alert mechanisms, and adapting the system for larger-scale or more distributed deployments.


Conclusion

This project provided practical experience in designing and implementing a distributed, event-driven system that integrates hardware, backend services, and a modern web interface.

Beyond the technical implementation, the project reinforced the importance of architectural decisions such as decoupling, data flow design, and system modularity. Working with real sensor data highlighted the challenges of handling data at the boundary between hardware and software.

Overall, the project served as a valuable learning experience in applied system design, bridging concepts from hardware architecture, data engineering, and web development.
