
Van Huy Pham

Stop Buying Stock Audio: How I Generated Original BGM for My App (0 to 1 Guide)

#ai

As a full-stack developer, I am comfortable with almost every part of the product lifecycle. I can spin up a database, write the backend logic, and design a decent frontend. But when it comes to creating the creative assets—specifically background music (BGM) for demo videos or game prototypes—I hit a wall.
I usually have two choices: pay a monthly subscription for a stock audio library or spend hours scouring the internet for "Copyright Free" tracks that don’t sound terrible.
Recently, I decided to try a third option: generating the audio myself using AI. I have zero music theory knowledge. I don't know what a chord progression is. Yet, after a weekend of tinkering, I managed to create a custom, royalty-free track that perfectly fit my project's vibe.
Disclaimer: The tools and resources mentioned in this article are simply part of my personal experiment stack. I have no commercial cooperation with them; I’m just sharing what worked for my workflow.

What Is Generative Audio? (The Tech Bit)

Before we dive into the "how-to," let's look at the "what." As developers, we are used to LLMs (Large Language Models) predicting the next token in a sequence of text. Generative audio works on a similar principle but with much higher complexity.
Most modern models operate by analyzing raw waveforms or spectrograms (visual representations of the spectrum of frequencies of a signal as it varies with time). The AI learns the mathematical patterns that make "Jazz" sound like Jazz or "Techno" sound like Techno.
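To make the spectrogram idea concrete, here's a quick sketch (my own illustration, assuming NumPy, SciPy, and Matplotlib are installed) of what one actually is: slice the waveform into short windows, run an FFT on each, and plot frequency content over time.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Synthesize 2 seconds of "audio": a tone sweeping upward from 440 Hz.
fs = 22_050  # sample rate in Hz
t = np.linspace(0, 2, 2 * fs, endpoint=False)
audio = np.sin(2 * np.pi * (440 + 220 * t) * t)

# A spectrogram windows the waveform and FFTs each slice,
# giving the frequency content of the signal over time.
f, times, Sxx = signal.spectrogram(audio, fs, nperseg=1024)

plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("What the model 'sees'")
plt.show()
```

Swap the synthetic sweep for a real track and you are looking at roughly the same representation the models are trained on.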

Why Generative Audio for Developers?

Why bother generating it when you can buy it?

  1. Copyright Safety: Most generative audio tools offer clear licensing. You created the track, so you don’t have to worry about a YouTube copyright strike on demo day.
  2. Dynamic Length: Need exactly 14 seconds of audio for a loading screen? You can generate exactly that rather than chopping up a 3-minute song.
  3. Cost: Many tools in this space offer free tiers that are sufficient for indie projects.

My Prompting Strategy: Speaking "Music"

The hardest part of this process wasn't finding a tool; it was learning how to talk to it. If you prompt an AI music generator the way you'd write a coding doc, you'll fail. You need to treat it like an art brief.
Through trial and error, I found a formula that yields the best results. You need to define three layers in your prompt:
  1. The Container (Genre): Lo-fi, Cinematic, Synthwave, EDM.
  2. The Texture (Mood): Melancholic, Uplifting, Aggressive, Deep focus.
  3. The Ingredients (Instruments): Heavy bass, airy pads, acoustic guitar, 808 drums.
For example, if you want a synthwave track, don't just type "Cyberpunk music."
Try this instead: "Synthwave, driving bassline, 120 BPM, futuristic city atmosphere, neon vibes, analog synthesizer, no vocals."
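To force myself to fill in all three layers every time, I started composing prompts with a tiny helper. This is purely illustrative; the function and its defaults are mine, not part of any generator's API:

```python
def build_music_prompt(genre: str, mood: str, instruments: list[str],
                       bpm: int | None = None, vocals: bool = False) -> str:
    """Compose a prompt from the three layers: container, texture, ingredients."""
    parts = [genre, mood, *instruments]
    if bpm:
        parts.append(f"{bpm} BPM")
    if not vocals:
        parts.append("no vocals")
    return ", ".join(parts)

# Roughly reproduces the synthwave example above.
print(build_music_prompt(
    genre="Synthwave",
    mood="futuristic city atmosphere, neon vibes",
    instruments=["driving bassline", "analog synthesizer"],
    bpm=120,
))
# -> Synthwave, futuristic city atmosphere, neon vibes,
#    driving bassline, analog synthesizer, 120 BPM, no vocals
```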

The Experiment: Creating the Track

For my specific use case, I needed a background track for a coding time-lapse video I was posting to Twitter. I wanted something chill but not sleepy.
I didn't want to install local libraries (like AudioLDM) because my GPU was already busy, so I looked for a browser-based solution. I decided to test FreeMusic AI for this specific run because the interface seemed cleaner than the Discord-based alternatives.
Here is how the creation process actually felt:
Instead of fiddling with knobs, I simply focused on the text description. I typed in: "Lo-fi hip hop beat, relaxing, coffee shop vibes, vinyl crackle, soft piano melody, medium tempo."
I hit generate. The system took about as long as a quick npm install.
The first result was okay, but a bit too slow. This is where the "iteration" mindset comes in. I didn't change the tool; I changed the prompt. I added "slightly upbeat" and "jazzy drums" to the text. The second iteration was spot on. It had a nice loopable quality that is essential for background video content.
I didn't have to learn a DAW (Digital Audio Workstation) or understand mixing. I just described what I heard in my head, and the algorithm rendered it.
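For the record, if your GPU is free and you'd rather stay local, the route I skipped looks roughly like this with Hugging Face's diffusers AudioLDM pipeline (a sketch, assuming you have diffusers, transformers, torch, and scipy installed, and that the cvssp/audioldm-s-full-v2 checkpoint is still available):

```python
import torch
import scipy.io.wavfile
from diffusers import AudioLDMPipeline

# Load the text-to-audio pipeline (downloads the weights on first run).
pipe = AudioLDMPipeline.from_pretrained(
    "cvssp/audioldm-s-full-v2", torch_dtype=torch.float16
).to("cuda")

prompt = ("Lo-fi hip hop beat, relaxing, coffee shop vibes, "
          "vinyl crackle, soft piano melody")
audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=10.0).audios[0]

# AudioLDM outputs 16 kHz mono audio as a float array.
scipy.io.wavfile.write("lofi.wav", rate=16_000, data=audio)
```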

Post-Processing and Integration

Once I downloaded the track, I did one final step that I recommend to everyone following this guide.
Raw AI audio can sometimes end abruptly. I used Audacity (an open-source audio editor) to add a 2-second "Fade In" at the start and a "Fade Out" at the end. This makes the audio feel professional rather than cut off.
Pro Tip: If you are building a web app, convert your output WAV file to OGG or MP3 to save bandwidth before embedding it.
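If you'd rather script this step than open Audacity, here's a minimal sketch with pydub (assuming pydub and ffmpeg are installed) that applies the fades and handles the format conversion in one pass:

```python
from pydub import AudioSegment

# Load the raw AI output (pydub shells out to ffmpeg for decoding).
track = AudioSegment.from_wav("generated_track.wav")

# 2-second fade in and fade out so the track doesn't feel cut off.
polished = track.fade_in(2_000).fade_out(2_000)  # durations in ms

# Export compressed copies for the web; pick whichever your player supports.
polished.export("bgm.ogg", format="ogg")
polished.export("bgm.mp3", format="mp3", bitrate="192k")
```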

Practical Use Cases for Developers

Since that first experiment, I’ve started using this workflow for:
  1. Game Jams: Creating distinctive themes for different levels.
  2. App Notifications: Generating short, 2-second "success" chimes.
  3. Focus Mode: I generated a 10-minute "pink noise with rain" track that I use personally while coding (the pink noise part is simple enough to DIY; see the sketch below).
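The rain layer came from the generator, but plain pink noise doesn't need AI at all. Here's a minimal sketch (assuming NumPy and SciPy) that shapes white noise into a 1/f spectrum and writes a 10-minute WAV file:

```python
import numpy as np
from scipy.io import wavfile

fs = 44_100              # sample rate in Hz
n = fs * 60 * 10         # 10 minutes of samples

# Start with white noise, then scale its spectrum by 1/sqrt(f) so the
# power falls off as 1/f -- the definition of pink noise.
white = np.random.default_rng().standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / fs)
freqs[0] = freqs[1]      # avoid dividing by zero at DC
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n=n)

# Normalize to a comfortable 16-bit level and save.
pink = (0.5 * pink / np.max(np.abs(pink)) * 32767).astype(np.int16)
wavfile.write("pink_noise_10min.wav", fs, pink)
```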

Final Thoughts

We are in a golden age of creative tooling. Just as GitHub Copilot helps us write code faster, an AI music generator helps us fill the creative gaps in our projects without needing a budget for a composer.
It’s not about replacing musicians; it’s about unblocking developers. If you have been stuck using generic stock assets, I highly recommend spending 15 minutes trying to generate your own. It’s surprisingly satisfying to say, "Yeah, I made the code and the music."
Have you experimented with audio generation in your projects? Let me know your favorite prompts in the comments!
