A production-grade embedded system enabling communication across speech, text, Morse, and haptic signals within a single unified pipeline.
Project Links
- Official Project Page: https://anandps.in/projects/unified-assistive-communication-system
- GitHub Repository: https://github.com/anand-ps/unified-assistive-communication-system
Problem
Assistive communication systems are fragmented.
Most tools address a single impairment, such as speech, hearing, or vision, forcing users to:
- Switch between multiple interfaces
- Learn different interaction models
- Depend on external assistance
The real failure is not missing features.
It is the absence of a unified system.
Solution
UACS introduces a single communication pipeline:
Speech ⇄ Text ⇄ Morse ⇄ Haptic
Morse code acts as a consistent encoding layer, enabling deterministic translation across all modalities.
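Because standard International Morse is a fixed one-to-one mapping between characters and dot/dash sequences, translation through it is fully deterministic. A minimal sketch of this encoding layer (using the standard ITU Morse table, letters only for brevity):

```python
# Minimal sketch of Morse as a deterministic encoding layer.
# Table is standard International Morse; only A-Z shown here.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..",
}
REVERSE = {code: letter for letter, code in MORSE.items()}

def text_to_morse(text: str) -> str:
    # Letters separated by single spaces, words by " / "
    return " / ".join(
        " ".join(MORSE[ch] for ch in word.upper() if ch in MORSE)
        for word in text.split()
    )

def morse_to_text(morse: str) -> str:
    return " ".join(
        "".join(REVERSE[c] for c in word.split() if c in REVERSE)
        for word in morse.split(" / ")
    )
```

Because the mapping is bijective, encode followed by decode always round-trips, which is what lets every modality in the pipeline agree on the same intermediate representation.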
System Architecture

The system is built using a dual-layer architecture:
Processing Layer (Raspberry Pi)
- Speech capture
- Speech-to-text conversion
- Text processing
- Morse encoding
Interaction Layer (ATmega328P)
- Morse input detection
- Timing-based decoding
- Haptic feedback control
- Audio feedback
- Bluetooth communication
Why this split?
- Ensures real-time deterministic behavior in the interaction layer
- Prevents OS-level variability from affecting timing-sensitive operations
- Uses a dedicated MCU to handle sensor input, Morse timing, and haptic driver control with precise timing guarantees
- Offloads computational tasks (speech processing, encoding) to the SoC (Raspberry Pi), avoiding overload on the interaction layer
- Improves portability by keeping the MCU-based unit as a compact HMI device, while heavy processing remains external
- Enables clear separation of concerns: compute-intensive vs time-critical execution
Operational Flow
Speech → Haptic Output
- Capture speech
- Convert to text
- Encode into Morse
- Transmit via Bluetooth
- Render as vibration patterns
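The final rendering step can be sketched as expanding a Morse string into motor on/off segments. The base unit and the standard Morse timing ratios (dash = 3 units, letter gap = 3, word gap = 7) are assumptions here; the project's actual firmware timings are not specified.

```python
DOT_MS = 100  # assumed base unit; actual firmware timing may differ

def morse_to_vibration(morse: str) -> list[tuple[bool, int]]:
    """Expand a Morse string into (motor_on, duration_ms) segments."""
    morse = morse.replace(" / ", "/")  # normalize word separators
    segments: list[tuple[bool, int]] = []
    for symbol in morse:
        if symbol == ".":
            segments.append((True, DOT_MS))       # dot: 1 unit on
            segments.append((False, DOT_MS))      # intra-letter gap
        elif symbol == "-":
            segments.append((True, 3 * DOT_MS))   # dash: 3 units on
            segments.append((False, DOT_MS))      # intra-letter gap
        elif symbol == " ":
            segments.append((False, 2 * DOT_MS))  # letter gap totals 3 units
        elif symbol == "/":
            segments.append((False, 6 * DOT_MS))  # word gap totals 7 units
    return segments
```

On the device, each `(True, duration)` segment would drive the haptic motor for that many milliseconds; keeping everything as integer multiples of one unit is what makes the pattern reproducible on the MCU side.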
Haptic Input → Speech
- User inputs Morse via SPDT switch
- Decode timing into text
- Convert to speech
- Output audio
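The timing-decode step above can be sketched as classifying each switch press by duration and each gap by length. The thresholds below are illustrative assumptions, not the project's calibrated values:

```python
DOT_MAX_MS = 200     # assumed: presses up to this length count as dots
LETTER_GAP_MS = 450  # assumed: gaps at least this long end a letter
WORD_GAP_MS = 1050   # assumed: gaps at least this long end a word

def presses_to_morse(events: list[tuple[int, int]]) -> str:
    """events: (press_duration_ms, gap_after_ms) pairs from the switch ISR."""
    out = []
    for press, gap in events:
        out.append("." if press <= DOT_MAX_MS else "-")
        if gap >= WORD_GAP_MS:
            out.append(" / ")
        elif gap >= LETTER_GAP_MS:
            out.append(" ")
    return "".join(out).strip(" /")
```

The resulting Morse string then feeds the Morse-to-text and text-to-speech stages. On real hardware the press/gap durations would come from timer captures in the interrupt handler rather than a prebuilt list.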
Result
A full-duplex communication loop across speech and tactile interaction.
Embedded Implementation
Built around the ATmega328P for real-time control:
- Interrupt-driven Morse decoding
- Timing-based dot/dash classification
- PWM-controlled vibration output
- UART-based Bluetooth communication
- Battery-optimized operation
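For the UART link over the HC-05, payloads need some framing so the receiver can detect message boundaries and corruption. The STX/ETX delimiters and XOR checksum below are an illustrative sketch, not the project's actual wire protocol:

```python
START, END = 0x02, 0x03  # assumed STX/ETX framing bytes

def frame(payload: bytes) -> bytes:
    """Wrap a payload as START + payload + xor_checksum + END."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([START]) + payload + bytes([checksum, END])

def unframe(packet: bytes) -> bytes:
    """Validate framing and checksum, then return the payload."""
    assert packet[0] == START and packet[-1] == END, "bad framing"
    payload, checksum = packet[1:-2], packet[-2]
    calc = 0
    for b in payload:
        calc ^= b
    assert calc == checksum, "checksum mismatch"
    return payload
```

A single-byte XOR checksum is cheap enough for an 8-bit MCU while still catching most single-byte corruption on the serial link.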
Hardware protections
- Overcharge protection
- Deep discharge protection
- Short-circuit protection
- Thermal protection
Hardware Stack
- Raspberry Pi 4 Model B
- ATmega328P microcontroller
- HC-05 Bluetooth module
- Linear Resonant Actuator (haptic motor)
- Active buzzer
- SPDT switch
- Li-ion battery
Software and Control Logic
The system follows an event-driven, state-based architecture:
- Input capture → decode → state transition
- Morse segmentation using timing windows
- Serial communication handling
- Output scheduling
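The event-driven, state-based flow above can be sketched as a small transition table. The state and event names here are illustrative assumptions, not the firmware's actual identifiers:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # waiting for input
    RECEIVING = auto()  # accumulating Morse presses
    DECODING = auto()   # timing window elapsed, decode buffer
    OUTPUT = auto()     # scheduling haptic/audio output

# Assumed transition table: (current_state, event) -> next_state
TRANSITIONS = {
    (State.IDLE, "switch_press"): State.RECEIVING,
    (State.RECEIVING, "timeout"): State.DECODING,
    (State.DECODING, "decoded"): State.OUTPUT,
    (State.OUTPUT, "done"): State.IDLE,
}

def step(state: State, event: str) -> State:
    # Unknown events leave the state unchanged
    return TRANSITIONS.get((state, event), state)
```

Keeping transitions in a table rather than nested conditionals makes the control flow auditable, which matters when the interaction layer must behave deterministically.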
Processing pipelines
- Speech-to-text
- Text-to-Morse
- Morse-to-text
- Text-to-speech
Cost
Approximately ₹16,700 total.
- Raspberry Pi is the primary cost driver
- Remaining components are optimized for affordability
Validation
Tested under real-world conditions for:
- Low-latency communication
- Accurate Morse encoding and decoding
- Stable Bluetooth transfer
- Consistent haptic and audio feedback
Future Scope
- Multi-language support
- Wearable form factors
- Emergency communication systems
- IoT-integrated assistive devices
Closing
Assistive systems fail when they optimize isolated features instead of system behavior.
UACS solves this by treating communication as a unified, deterministic pipeline where speech, text, Morse, and haptic signals operate as one cohesive system.


