<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aleksandr Prostetsov</title>
    <description>The latest articles on DEV Community by Aleksandr Prostetsov (@malgana).</description>
    <link>https://dev.to/malgana</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2512969%2F338d0e30-3c02-483b-b74f-f37560f3f5e6.JPG</url>
      <title>DEV Community: Aleksandr Prostetsov</title>
      <link>https://dev.to/malgana</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/malgana"/>
    <language>en</language>
    <item>
      <title>Beyond Binary: The Blueprint for A-OS and Latent Instruction Protocol (L-In)</title>
      <dc:creator>Aleksandr Prostetsov</dc:creator>
      <pubDate>Tue, 10 Feb 2026 12:39:55 +0000</pubDate>
      <link>https://dev.to/malgana/beyond-binary-the-blueprint-for-a-os-and-latent-instruction-protocol-l-in-4521</link>
      <guid>https://dev.to/malgana/beyond-binary-the-blueprint-for-a-os-and-latent-instruction-protocol-l-in-4521</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrd239vkmcm93upm2p2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrd239vkmcm93upm2p2y.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;The era of human-readable code is reaching its limit. We are building the Agentic OS (A-OS), not as another software layer, but as the first post-human computational environment.&lt;/p&gt;

&lt;p&gt;🛑 The Bottleneck: Human-Centric Syntax&lt;br&gt;
For decades, we’ve forced computers to understand us through abstractions: Python, Rust, C++. These are "human-readable" compromises. They are slow, redundant, and rigid.&lt;/p&gt;

&lt;p&gt;AI agents don't need brackets. They don't need variable names. They need Semantic Density.&lt;/p&gt;

&lt;p&gt;🏗 The Technical Stack&lt;br&gt;
To achieve near-processor frequency execution for AI logic, we are building the core on:&lt;/p&gt;

&lt;p&gt;Rust: For memory safety without a garbage collector.&lt;/p&gt;

&lt;p&gt;CUDA: For massive parallelization of vector-logic.&lt;/p&gt;

&lt;p&gt;Local LLMs: As the primary reasoning engines, decoupled from cloud latency.&lt;/p&gt;

&lt;p&gt;🧬 Key Concepts of A-OS&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Latent Space Bus (LSB)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In a traditional OS, processes communicate via pipes, sockets, or APIs. In A-OS, agents communicate via the Latent Space Bus. Instead of sending text or JSON, agents exchange high-dimensional vector states. This allows for a "lossless" transfer of complex intent that would take pages of documentation to describe in human language.&lt;/p&gt;
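&lt;p&gt;As a toy illustration of the idea (all names here are hypothetical, not part of any real A-OS API): agents publish fixed-width vectors to a shared bus, and alignment between two latent states can be checked with cosine similarity instead of parsing text.&lt;/p&gt;

```python
# Hypothetical sketch of a "Latent Space Bus": agents exchange raw embedding
# vectors rather than serialized text. Illustrative only.
from math import sqrt

DIM = 4  # toy embedding dimension

class LatentSpaceBus:
    """In-process bus where agents publish and read latent vectors by topic."""
    def __init__(self):
        self.slots = {}  # topic -> latest vector

    def publish(self, topic, vector):
        assert len(vector) == DIM, "bus carries fixed-width vectors only"
        self.slots[topic] = list(vector)

    def read(self, topic):
        return self.slots.get(topic)

def cosine(a, b):
    """Alignment between two latent states (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

bus = LatentSpaceBus()
bus.publish("intent", [0.1, 0.9, 0.0, 0.2])  # planner agent writes its state
received = bus.read("intent")                # executor agent reads it back
print(round(cosine(received, [0.1, 0.9, 0.0, 0.2]), 3))  # 1.0
```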

&lt;ol start="2"&gt;
&lt;li&gt;Resonance Field&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Resonance Field is our alternative to the traditional File System. It is a dynamic state where multiple specialized agents align their latent outputs.&lt;/p&gt;

&lt;p&gt;The Statement: A-OS doesn't run apps; it materializes intent through a shared latent field between specialized agents.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;L-In (Latent Input)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Forget "Standard Input" (stdin). L-In is the protocol for injecting raw human or sensor data directly into the latent bus. It bypasses traditional parsing, allowing the system to "feel" the data context before it even reaches the execution kernel.&lt;/p&gt;

&lt;p&gt;🛡 Security by Obscurity? No, Security by Complexity.&lt;br&gt;
The "Enigma" Protocol within A-OS means that the internal "Digital Esperanto" (the machine language) evolves in real-time. A virus designed for x86 or ARM architectures simply cannot "read" the vector streams on the Latent Space Bus. It’s like trying to inject a Morse code signal into a neural synapse—the architecture itself is the firewall.&lt;/p&gt;

&lt;p&gt;🚀 The Vision: From Apps to Manifestations&lt;br&gt;
Software in A-OS isn't "installed." It doesn't sit on a disk. When you have a task, the OS orchestrates a Resonance Field between agents, and the interface materializes out of necessity. When the task is done, the code dissolves.&lt;/p&gt;

&lt;p&gt;This is the end of technical debt and the birth of Self-Evolving Syntax.&lt;/p&gt;

&lt;p&gt;What do you think? Is the industry ready to stop "coding" and start "orchestrating"?&lt;/p&gt;

&lt;p&gt;Join the discussion on GitHub: &lt;a href="https://github.com/malgana/A-OS-The-Autonomous-Operating-System-" rel="noopener noreferrer"&gt;https://github.com/malgana/A-OS-The-Autonomous-Operating-System-&lt;/a&gt;. Follow for real-time updates on X: &lt;a href="https://x.com/dev_malgana" rel="noopener noreferrer"&gt;https://x.com/dev_malgana&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>rust</category>
    </item>
    <item>
      <title>How a $200 Receiver Led Me Down a Bluetooth Protocol Reverse Engineering Rabbit Hole</title>
      <dc:creator>Aleksandr Prostetsov</dc:creator>
      <pubDate>Thu, 18 Dec 2025 03:16:37 +0000</pubDate>
      <link>https://dev.to/malgana/how-a-200-receiver-led-me-down-a-bluetooth-protocol-reverse-engineering-rabbit-hole-kd2</link>
      <guid>https://dev.to/malgana/how-a-200-receiver-led-me-down-a-bluetooth-protocol-reverse-engineering-rabbit-hole-kd2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64jj1hh5pyb7e1sbmk5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64jj1hh5pyb7e1sbmk5r.png" alt=" " width="800" height="1734"&gt;&lt;/a&gt;It all started when a woman reached out asking me to help her choose a microphone. I’ve been making music on the side for years, so she figured I’d know my way around audio gear. She wanted a lavalier mic, and I recommended the Shure MoveMic.&lt;/p&gt;

&lt;p&gt;Turns out, the mic only works via Bluetooth through Shure’s proprietary app. If you want to connect it to Logic Pro or use it as a camera mic on your iPhone, you need a $200 receiver. But here’s the thing—Bluetooth &lt;em&gt;works&lt;/em&gt;. It’s just sandboxed inside their app. They’re running their own data transmission protocol.&lt;/p&gt;

&lt;p&gt;That’s when I started digging into how I could actually use this perfectly good Bluetooth mic at home without the extra hardware. Eventually, I stumbled onto information about Bluetooth device sniffing. It wasn’t about the $200—I just wanted the experience of cracking open a proprietary protocol.&lt;/p&gt;

&lt;p&gt;What’s Next&lt;br&gt;
So I’ve got everything I need to get started: a Mac, a developer account, and PacketLogger ready to go.&lt;br&gt;
The plan? Intercept the traffic between the Shure app and the MoveMic, dissect the packets, and figure out what’s really happening behind the curtain.&lt;br&gt;
I’ll be analyzing the GATT profile, hunting for proprietary UUIDs, reverse engineering the audio codec, and piecing together the protocol byte by byte. The end goal—build my own client that talks directly to the mic. No $200 receiver. No sandboxed app. Just raw Bluetooth.&lt;br&gt;
Will it be a weekend project or a month-long rabbit hole? Is the protocol wide open or locked down with encryption and device binding? I have no idea yet.&lt;br&gt;
But that’s exactly what makes it fun.&lt;br&gt;
Part 2 coming soon.&lt;/p&gt;
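&lt;p&gt;For anyone who wants to follow along before Part 2: the packet dissection starts from the fact that a BLE advertisement is a flat sequence of AD structures, each encoded as a length byte, a type byte, and a payload. A tiny parser like the one below (illustrative sample data, not actual MoveMic traffic) is enough to pull the advertised 16-bit service UUIDs out of a raw frame; proprietary gear like Shure’s often hides behind custom 128-bit UUIDs (AD type 0x07) instead.&lt;/p&gt;

```python
# Minimal BLE advertising-data parser. Each AD structure in the payload is
# [length][type][data], where length counts the type byte plus the data.
# AD type 0x03 is "Complete List of 16-bit Service UUIDs" (little-endian).

def parse_ad_structures(payload: bytes):
    """Split raw advertising data into (ad_type, data) pairs."""
    out = []
    i = 0
    while len(payload) - i > 0:
        length = payload[i]
        if length == 0:
            break  # zero length marks padding / end of significant part
        ad_type = payload[i + 1]
        out.append((ad_type, payload[i + 2 : i + 1 + length]))
        i += 1 + length
    return out

def service_uuids_16(payload: bytes):
    """Extract 16-bit service UUIDs (AD type 0x03) as integers."""
    uuids = []
    for ad_type, data in parse_ad_structures(payload):
        if ad_type == 0x03:
            for j in range(0, len(data), 2):
                uuids.append(int.from_bytes(data[j : j + 2], "little"))
    return uuids

# Made-up example frame: flags (type 0x01), then a UUID list advertising
# 0x180F (Battery Service) and 0x1812 (HID) -- demo data only.
adv = bytes([0x02, 0x01, 0x06, 0x05, 0x03, 0x0F, 0x18, 0x12, 0x18])
print([hex(u) for u in service_uuids_16(adv)])  # ['0x180f', '0x1812']
```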

</description>
      <category>security</category>
      <category>api</category>
      <category>learning</category>
      <category>networking</category>
    </item>
    <item>
      <title>Building an Offline Speech Recognition System with Python and Vosk</title>
      <dc:creator>Aleksandr Prostetsov</dc:creator>
      <pubDate>Mon, 02 Dec 2024 17:45:07 +0000</pubDate>
      <link>https://dev.to/malgana/building-an-offline-speech-recognition-system-with-python-and-vosk-ji2</link>
      <guid>https://dev.to/malgana/building-an-offline-speech-recognition-system-with-python-and-vosk-ji2</guid>
      <description>&lt;p&gt;Hey devs! Want to share my experience building a real-time speech recognition system without cloud dependencies. Here's the technical journey and lessons learned.&lt;br&gt;
The Challenge&lt;br&gt;
Building a fast, reliable speech recognition system for call centers that works offline and handles multiple languages.&lt;br&gt;
Tech Stack&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.12 + Poetry&lt;/li&gt;
&lt;li&gt;Vosk for speech recognition&lt;/li&gt;
&lt;li&gt;BlackHole audio router&lt;/li&gt;
&lt;li&gt;sounddevice for audio capture&lt;/li&gt;
&lt;li&gt;Threading for async processing&lt;/li&gt;
&lt;li&gt;Shure MV7 for testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Development Journey&lt;br&gt;
Audio Setup&lt;br&gt;
The first challenge was routing audio on macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with sd.RawInputStream(
    samplerate=16000,
    blocksize=8000,
    device=3,  # Shure MV7
    dtype='int16',
    channels=1,
    callback=input_callback
):
    ...  # audio processing loop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
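&lt;p&gt;One caveat on the snippet above: device=3 is the index on my machine and will differ elsewhere. sounddevice’s sd.query_devices() returns a sequence of dicts with 'name' and 'max_input_channels' keys, so a small lookup helper (my own sketch, not part of the library) avoids the hardcoded index:&lt;/p&gt;

```python
# Find an input device by name instead of hardcoding its index.
# In real use, pass sd.query_devices() as the devices argument.

def find_input_device(devices, name_fragment):
    """Return the index of the first input device whose name matches."""
    for index, dev in enumerate(devices):
        has_inputs = dev["max_input_channels"] > 0
        if has_inputs and name_fragment.lower() in dev["name"].lower():
            return index
    raise LookupError(f"no input device matching {name_fragment!r}")

# Demo with a fake device table shaped like sd.query_devices() output:
fake_devices = [
    {"name": "MacBook Air Speakers", "max_input_channels": 0},
    {"name": "MacBook Air Microphone", "max_input_channels": 1},
    {"name": "Shure MV7", "max_input_channels": 1},
]
print(find_input_device(fake_devices, "Shure"))  # 2
```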



&lt;p&gt;Performance Optimization&lt;br&gt;
Initial issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.5GB Vosk model&lt;/li&gt;
&lt;li&gt;5GB RAM usage&lt;/li&gt;
&lt;li&gt;2-second recognition delay&lt;/li&gt;
&lt;li&gt;15-second startup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Switched to vosk-model-small-ru-0.22 (91MB)&lt;br&gt;
Implemented async audio processing&lt;br&gt;
Reduced RAM usage to 300MB&lt;br&gt;
Achieved 600ms latency&lt;br&gt;
&lt;/p&gt;
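&lt;p&gt;The 600 ms figure lines up with the stream settings: at 16 kHz, an 8000-frame block is half a second of audio, so blocksize alone sets a 500 ms floor before decoding even starts. A quick sanity check:&lt;/p&gt;

```python
# The recognizer cannot see audio before a full block arrives, so block
# duration is the lower bound on end-to-end latency.
samplerate = 16000  # Hz
blocksize = 8000    # frames per block

block_ms = blocksize / samplerate * 1000
print(block_ms)  # 500.0 ms per block; decoding accounts for the rest
```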

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sounddevice as sd
import numpy as np
from vosk import Model, KaldiRecognizer, SetLogLevel
import json
import threading
import queue
import sys
import signal
import time

# Disable VOSK logs
SetLogLevel(-1)

class AudioProcessor:
    def __init__(self):
        print("Loading Vosk model...")
        self.model = Model(model_path="vosk-model-small-ru-0.22")
        self.rec = KaldiRecognizer(self.model, 16000)
        print("Model loaded")

        self.audio_queue = queue.Queue(maxsize=2)
        self.sample_rate = 16000
        self.is_running = True
        self.partial_buffer = ""

    def input_callback(self, indata, frames, time_info, status):
        if self.is_running:
            try:
                self.audio_queue.put_nowait(bytes(indata))
            except queue.Full:
                pass  # drop this block rather than stall the audio callback

    def process_audio(self):
        while self.is_running:
            try:
                data = self.audio_queue.get(timeout=0.5)
                if self.rec.AcceptWaveform(data):
                    result = json.loads(self.rec.Result())
                    text = result.get("text", "").strip()
                    if text:
                        # Clear the line and show final result
                        print(f"\r{' ' * len(self.partial_buffer)}", end='', flush=True)
                        print(f"\r{text}", flush=True)
                        self.partial_buffer = ""
                else:
                    partial = json.loads(self.rec.PartialResult())
                    text = partial.get("partial", "").strip()
                    if text and text != self.partial_buffer:
                        # Update buffer and show intermediate result
                        print(f"\r{' ' * len(self.partial_buffer)}", end='', flush=True)
                        print(f"\r{text}", end='', flush=True)
                        self.partial_buffer = text

            except queue.Empty:
                continue
            except Exception as e:
                print(f"\nError: {e}")

    def start_recording(self):
        print("\nStarting speech recognition")
        print("===========================")
        print("Speak into the microphone...")
        print("Ctrl+C to exit\n")

        process_thread = threading.Thread(target=self.process_audio)
        process_thread.daemon = True
        process_thread.start()

        try:
            with sd.RawInputStream(
                samplerate=self.sample_rate,
                blocksize=8000,
                device=3,
                dtype='int16',
                channels=1,
                callback=self.input_callback
            ):
                while self.is_running:
                    time.sleep(0.1)

        except KeyboardInterrupt:
            print("\nStopping recording...")
        finally:
            self.stop()
            process_thread.join(timeout=1.0)
            # Release the model only after the worker thread has exited
            self.rec = None
            self.model = None
            print("Recording stopped")

    def stop(self):
        # Only flip the flag here; the worker thread may still be decoding,
        # so model cleanup happens after the thread joins.
        self.is_running = False

def main():
    processor = AudioProcessor()
    signal.signal(signal.SIGINT, lambda s, f: processor.stop())
    processor.start_recording()

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Current Status&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time speech-to-text working&lt;/li&gt;
&lt;li&gt;Terminal output with partial recognition&lt;/li&gt;
&lt;li&gt;Ready for messenger integration&lt;/li&gt;
&lt;li&gt;Stable performance on MacBook Air M1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Next Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Translation module&lt;/li&gt;
&lt;li&gt;Text-to-speech integration&lt;/li&gt;
&lt;li&gt;Multi-language support&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Would love your feedback on:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Async processing optimization&lt;br&gt;
Memory management&lt;br&gt;
Scaling strategies&lt;/p&gt;

&lt;p&gt;Repository: [coming soon]&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
