
innermost47

I Built an Open-Source VST Plugin for Real-Time AI Music Generation

Back in May, I started with a simple idea: what if musicians could use AI to generate music in real time during a live performance, not to replace creativity, but as another instrument?

That experiment became OBSIDIAN Neural, an open-source VST3 plugin for AI-powered music generation.

What It Does

Load it in your DAW (Ableton, Bitwig, FL Studio, etc.), type a few keywords like "deep bass loop" or "ambient pad", and it generates audio in real time. You can:

  • Control it with MIDI hardware during live performance
  • Layer AI-generated sounds with your synths and guitars
  • Generate loops on-the-fly and mix them together

The human stays in control - you decide what to generate, when, and how to mix it. AI generates, you orchestrate.

The Tech Stack

Plugin: C++ with JUCE framework (my first major C++ project!)
Backend: Python FastAPI + Stable Audio for generation
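To give a feel for how a plugin-to-backend split like this works: the C++ plugin sends the prompt and generation settings over HTTP to the Python service, which runs the model and streams audio back. Here's a minimal sketch of what such a request payload might look like. The field names (`prompt`, `duration_seconds`, `bpm`, `seed`) are illustrative assumptions, not OBSIDIAN Neural's actual API:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical payload a DAW plugin might POST to a generation
# backend. Field names are made up for illustration only.
@dataclass
class GenerationRequest:
    prompt: str                    # e.g. "deep bass loop"
    duration_seconds: float = 4.0  # loop length to generate
    bpm: int = 120                 # tempo hint for loop-friendly output
    seed: Optional[int] = None     # fix for reproducible generations

    def validate(self) -> None:
        # Reject payloads the backend couldn't act on.
        if not self.prompt.strip():
            raise ValueError("prompt must be non-empty")
        if not 0 < self.duration_seconds <= 30:
            raise ValueError("duration_seconds must be in (0, 30]")

def to_json(req: GenerationRequest) -> str:
    """Serialize the request into the JSON body sent over HTTP."""
    req.validate()
    return json.dumps(asdict(req))

# Example: request a four-second bass loop at 120 BPM.
payload = to_json(GenerationRequest(prompt="deep bass loop"))
```

On the server side, a FastAPI route would accept this body, call the generation model, and return the rendered audio; keeping generation out-of-process like this means the plugin's real-time audio thread never blocks on the model.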

The Journey

What started as a weekend experiment turned into:

  • 135+ stars on GitHub
  • International media coverage (even articles in China and Japan!)
  • A demo presentation at AES AIMLA 2025 in London
  • Live performances mixing the VST with hardware synths

The code has been open source from day one because I believe in tools that empower musicians, not replace them.

Want to Try It?

Check out the project on GitHub: OBSIDIAN Neural

It works on macOS, Windows, and Linux. The macOS version is now properly signed.

Whether you're into live performance, sound design, or just curious about AI audio tools, feel free to give it a try and let me know what you think!


What's your take on AI in music production? Tool or threat? Let's discuss in the comments!
