Anand P S • Originally published at anandps.in

OneDialect — Unified Assistive Communication System (UACS)

A production-grade embedded system enabling communication across speech, text, Morse, and haptic signals within a single unified pipeline.

Problem

Assistive communication systems are fragmented.

Most tools address a single need (speech, hearing, or vision) in isolation, forcing users to:

  • Switch between multiple interfaces
  • Learn different interaction models
  • Depend on external assistance

The real failure is not missing features.

It is the absence of a unified system.

Solution

UACS introduces a single communication pipeline:

Speech ⇄ Text ⇄ Morse ⇄ Haptic

Morse code acts as a consistent encoding layer, enabling deterministic translation across all modalities.
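
To make the encoding layer concrete, here is a minimal text-to-Morse sketch in C. The table is standard International Morse for A–Z and 0–9; the function and table names are illustrative, not taken from the project.

```c
#include <stdio.h>
#include <ctype.h>

/* International Morse code for A-Z (index 0 = 'A') and 0-9. */
static const char *MORSE_ALPHA[26] = {
    ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....",
    "..", ".---", "-.-", ".-..", "--", "-.", "---", ".--.",
    "--.-", ".-.", "...", "-", "..-", "...-", ".--", "-..-",
    "-.--", "--.."
};
static const char *MORSE_DIGIT[10] = {
    "-----", ".----", "..---", "...--", "....-",
    ".....", "-....", "--...", "---..", "----."
};

/* Encode one ASCII character; returns NULL for unsupported symbols. */
const char *char_to_morse(char c) {
    if (isalpha((unsigned char)c))
        return MORSE_ALPHA[toupper((unsigned char)c) - 'A'];
    if (isdigit((unsigned char)c))
        return MORSE_DIGIT[c - '0'];
    return NULL;
}

int main(void) {
    const char *msg = "SOS 73";
    for (const char *p = msg; *p; ++p) {
        const char *m = char_to_morse(*p);
        printf("%s ", m ? m : "/");   /* '/' marks a word gap */
    }
    putchar('\n');                    /* prints: ... --- ... / --... ...-- */
    return 0;
}
```

Because every character maps to exactly one dot/dash pattern, the same table works unchanged in both directions of the pipeline.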

System Architecture

(Image: MCU unit during development)
The system is built using a dual-layer architecture:

Processing Layer (Raspberry Pi)

  • Speech capture
  • Speech-to-text conversion
  • Text processing
  • Morse encoding

(Image: Raspberry Pi processing unit)

Interaction Layer (ATmega328P)

  • Morse input detection
  • Timing-based decoding
  • Haptic feedback control
  • Audio feedback
  • Bluetooth communication

(Image: ATmega328P-based interaction unit)

Why this split?

  • Ensures real-time deterministic behavior in the interaction layer
  • Prevents OS-level variability from affecting timing-sensitive operations
  • Uses a dedicated MCU to handle sensor input, Morse timing, and haptic driver control with precise timing guarantees
  • Offloads computational tasks (speech processing, encoding) to the SoC (Raspberry Pi), avoiding overload on the interaction layer
  • Improves portability by keeping the MCU-based unit as a compact HMI device, while heavy processing remains external
  • Enables clear separation of concerns: compute-intensive vs time-critical execution
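
In practice, the two layers talk over the HC-05, which the MCU sees as a transparent UART bridge. A minimal ATmega328P setup might look like the sketch below; the 16 MHz clock is an assumption, and 9600 baud is the HC-05 factory default.

```c
#include <avr/io.h>

#define F_CPU     16000000UL                  /* assumed system clock */
#define BAUD      9600UL                      /* HC-05 factory default */
#define UBRR_VAL  ((F_CPU / (16UL * BAUD)) - 1)

void uart_init(void) {
    UBRR0H = (uint8_t)(UBRR_VAL >> 8);
    UBRR0L = (uint8_t)UBRR_VAL;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);     /* enable receiver and transmitter */
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);   /* 8 data bits, no parity, 1 stop */
}

/* Blocking byte I/O: sufficient for short Morse payloads. */
uint8_t uart_read(void)       { while (!(UCSR0A & (1 << RXC0)));  return UDR0; }
void    uart_write(uint8_t b) { while (!(UCSR0A & (1 << UDRE0))); UDR0 = b;    }
```

With this split, the Pi streams already-encoded Morse strings down the link, so the MCU never has to parse anything heavier than dots, dashes, and separators.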

Operational Flow

Speech → Haptic Output

  1. Capture speech
  2. Convert to text
  3. Encode into Morse
  4. Transmit via Bluetooth
  5. Render as vibration patterns (sketched below)
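
Step 5 is where Morse becomes touch. The sketch below plays a dot/dash string using standard Morse timing (dot = 1 unit, dash = 3 units, letter gap = 3, word gap = 7). For brevity it drives the motor pin digitally rather than via PWM as the real build does, and the pin and unit length are assumptions.

```c
#include <avr/io.h>
#define F_CPU 16000000UL
#include <util/delay.h>

#define MOTOR_PIN PB1      /* illustrative haptic driver pin */
#define UNIT_MS   80       /* one Morse time unit; tuned per user */

static void wait_units(uint8_t n) { while (n--) _delay_ms(UNIT_MS); }

static void buzz_units(uint8_t n) {
    PORTB |=  (1 << MOTOR_PIN);   /* motor on  */
    wait_units(n);
    PORTB &= ~(1 << MOTOR_PIN);   /* motor off */
}

/* Input format: letters separated by ' ', words by '/', e.g. ".... ../..-" */
void render_morse(const char *code) {
    DDRB |= (1 << MOTOR_PIN);                           /* pin as output */
    for (; *code; ++code) {
        if      (*code == '.') buzz_units(1);
        else if (*code == '-') buzz_units(3);
        else if (*code == ' ') { wait_units(2); continue; } /* 1+2 = 3-unit gap */
        else if (*code == '/') { wait_units(6); continue; } /* 1+6 = 7-unit gap */
        wait_units(1);                                  /* intra-character gap */
    }
}
```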

Haptic Input → Speech

  1. User inputs Morse via SPDT switch
  2. Decode timing into text (see the lookup sketch after this list)
  3. Convert to speech
  4. Output audio
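
Once presses are classified as dots and dashes, turning a completed symbol buffer into a character (step 2) is a reverse table lookup. At human keying speeds a linear scan is more than fast enough; the table mirrors the encoder shown earlier, and the names are illustrative.

```c
#include <string.h>

/* Reverse lookup: completed dot/dash buffer -> ASCII character.
   Returns '?' for sequences outside the table. */
typedef struct { const char *code; char ch; } morse_entry;

static const morse_entry TABLE[] = {
    {".-",'A'},{"-...",'B'},{"-.-.",'C'},{"-..",'D'},{".",'E'},
    {"..-.",'F'},{"--.",'G'},{"....",'H'},{"..",'I'},{".---",'J'},
    {"-.-",'K'},{".-..",'L'},{"--",'M'},{"-.",'N'},{"---",'O'},
    {".--.",'P'},{"--.-",'Q'},{".-.",'R'},{"...",'S'},{"-",'T'},
    {"..-",'U'},{"...-",'V'},{".--",'W'},{"-..-",'X'},{"-.--",'Y'},
    {"--..",'Z'},
    {"-----",'0'},{".----",'1'},{"..---",'2'},{"...--",'3'},{"....-",'4'},
    {".....",'5'},{"-....",'6'},{"--...",'7'},{"---..",'8'},{"----.",'9'},
};

char morse_to_char(const char *buf) {
    for (size_t i = 0; i < sizeof TABLE / sizeof TABLE[0]; ++i)
        if (strcmp(TABLE[i].code, buf) == 0) return TABLE[i].ch;
    return '?';
}
```

The resulting text is then handed to the text-to-speech stage for steps 3 and 4.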

Result

A full-duplex communication loop across speech and tactile interaction.

(Image: AVR unit vs SoC unit)

Embedded Implementation

Built around the ATmega328P for real-time control (a decoding sketch follows the list):

  • Interrupt-driven Morse decoding
  • Timing-based dot/dash classification
  • PWM-controlled vibration output
  • UART-based Bluetooth communication
  • Battery-optimized operation
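
Expanding on the first two bullets: a 1 kHz Timer0 tick provides a millisecond clock, and a pin-change interrupt timestamps the switch edges so each press duration can be classified as a dot or a dash. The pin, threshold, and 16 MHz clock are assumptions, and debouncing is omitted for brevity.

```c
#include <avr/io.h>
#include <avr/interrupt.h>

#define DOT_MAX_MS 200          /* press < 200 ms -> dot, else dash (assumed) */

volatile uint16_t g_ms;         /* millisecond tick, maintained by Timer0 */
volatile uint16_t press_start;
volatile char     last_symbol;  /* '.', '-', or 0 when nothing new */

ISR(TIMER0_COMPA_vect) { g_ms++; }        /* 1 kHz tick */

/* Pin-change interrupt on the SPDT switch (PD2 here, active low). */
ISR(PCINT2_vect) {
    if (!(PIND & (1 << PD2))) {
        press_start = g_ms;                        /* press began */
    } else {
        uint16_t dur = g_ms - press_start;         /* press length in ms */
        last_symbol = (dur < DOT_MAX_MS) ? '.' : '-';
    }
}

void morse_input_init(void) {
    /* Timer0, CTC mode, 1 kHz at 16 MHz: 16e6 / 64 / 250 = 1000 Hz */
    TCCR0A = (1 << WGM01);
    TCCR0B = (1 << CS01) | (1 << CS00);   /* prescaler 64 */
    OCR0A  = 249;
    TIMSK0 = (1 << OCIE0A);

    DDRD   &= ~(1 << PD2);                /* switch pin as input... */
    PORTD  |=  (1 << PD2);                /* ...with internal pull-up */
    PCICR  |=  (1 << PCIE2);              /* enable pin-change group 2 */
    PCMSK2 |=  (1 << PCINT18);            /* PD2 = PCINT18 */
    sei();
}
```

Inter-press gaps are measured the same way: a pause longer than three units closes the current letter, and one longer than seven units closes the word.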

Hardware Protections

  • Overcharge protection
  • Deep discharge protection
  • Short-circuit protection
  • Thermal protection

Hardware Stack

  • Raspberry Pi 4 Model B
  • ATmega328P microcontroller
  • HC-05 Bluetooth module
  • Linear Resonant Actuator (haptic motor)
  • Active buzzer
  • SPDT switch
  • Li-ion battery

Software and Control Logic

The system follows an event-driven, state-based architecture (a minimal skeleton follows the list):

  • Input capture → decode → state transition
  • Morse segmentation using timing windows
  • Serial communication handling
  • Output scheduling
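
A minimal skeleton of that loop, with illustrative state and event names (the post does not name the actual states):

```c
/* Illustrative event-driven state machine; names are assumptions. */
typedef enum { ST_IDLE, ST_CAPTURING, ST_DECODING, ST_EMITTING } state_t;
typedef enum { EV_KEY_DOWN, EV_KEY_UP, EV_GAP_TIMEOUT, EV_TX_DONE } event_t;

static state_t state = ST_IDLE;

void dispatch(event_t ev) {
    switch (state) {
    case ST_IDLE:
        if (ev == EV_KEY_DOWN)      state = ST_CAPTURING; /* input capture   */
        break;
    case ST_CAPTURING:
        if (ev == EV_KEY_UP)        state = ST_DECODING;  /* classify symbol */
        break;
    case ST_DECODING:
        if (ev == EV_GAP_TIMEOUT)   state = ST_EMITTING;  /* letter complete */
        else if (ev == EV_KEY_DOWN) state = ST_CAPTURING; /* next symbol     */
        break;
    case ST_EMITTING:
        if (ev == EV_TX_DONE)       state = ST_IDLE;      /* output sent     */
        break;
    }
}
```

Keeping every transition in one dispatch function lets the interrupt handlers stay short, which is exactly what the timing guarantees above depend on.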

Processing Pipelines

  • Speech-to-text
  • Text-to-Morse
  • Morse-to-text
  • Text-to-speech

Cost

Approximately ₹16,700 total.

  • Raspberry Pi is the primary cost driver
  • Remaining components are optimized for affordability

Validation

Tested under real-world conditions for:

  • Low-latency communication
  • Accurate Morse encoding and decoding
  • Stable Bluetooth transfer
  • Consistent haptic and audio feedback

Future Scope

  • Multi-language support
  • Wearable form factors
  • Emergency communication systems
  • IoT-integrated assistive devices

Closing

Assistive systems fail when they optimize isolated features instead of system behavior.

UACS solves this by treating communication as a unified, deterministic pipeline where speech, text, Morse, and haptic signals operate as one cohesive system.

— Anand P S
