In early 2026, I wanted to test AI's ability to build an app in a tech stack unknown to me.
I am going to start with Rust.
I have never coded in Rust before.
It's interesting to see how far AI can go in building something beyond just CRUD. With a good software spec, CRUD operations can be done easily by an AI agent, but what about slightly more complex operations like compression, custom algorithms, or anything that requires heavy math?
In this post, I am going to journal my experiment building an audio noise reducer. It's only a simple noise reduction operation without AI/ML, so don't expect the output to be very clean.
Steps
If you are a domain expert, great: you can define the steps, the flow, and the algorithm to use.
I'm not, so I spent some time researching with Claude and a few papers on audio noise reduction.
Here's what the AI said.
Basic Process
1. Analysis Phase
The algorithm first analyzes the audio to understand what constitutes "noise" versus "signal." This often happens during silent passages or a dedicated noise profile section where only background noise is present. The algorithm builds a spectral fingerprint of the noise frequencies.
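Building the noise fingerprint can be as simple as averaging the magnitude spectra of frames known to contain only background noise. This is my own minimal sketch of that idea (the function name and shape are assumptions, not the project's actual code):

```rust
// Average the magnitude spectra of noise-only frames into a single
// per-bin noise profile. Each inner Vec is one frame's magnitude spectrum.
fn noise_profile(noise_frames: &[Vec<f32>]) -> Vec<f32> {
    let bins = noise_frames[0].len();
    let mut profile = vec![0.0f32; bins];
    for frame in noise_frames {
        for (p, &m) in profile.iter_mut().zip(frame) {
            *p += m;
        }
    }
    let count = noise_frames.len() as f32;
    profile.iter_mut().for_each(|p| *p /= count);
    profile
}

fn main() {
    // Two noise-only frames with two frequency bins each.
    let profile = noise_profile(&[vec![1.0, 2.0], vec![3.0, 4.0]]);
    println!("{:?}", profile); // per-bin averages
}
```

A running (recursive) average during silent passages would work too; the point is that the profile is per frequency bin, not a single number.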
2. Frequency Domain Conversion
The audio is transformed from time domain (waveform) into frequency domain using techniques like Fast Fourier Transform (FFT). This breaks the sound into its component frequencies, making it easier to identify and isolate noise patterns.
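To make the time-to-frequency conversion concrete, here is a naive discrete Fourier transform in plain Rust. It is for illustration only; it is O(n²), and a real project would use an FFT crate such as rustfft or realfft:

```rust
// Naive DFT returning magnitudes for bins 0..=n/2 (the real-signal half).
// Illustration only -- use rustfft/realfft for actual O(n log n) FFTs.
fn dft_magnitudes(signal: &[f32]) -> Vec<f32> {
    let n = signal.len();
    (0..=n / 2)
        .map(|k| {
            let (mut re, mut im) = (0.0f32, 0.0f32);
            for (t, &x) in signal.iter().enumerate() {
                let angle = -2.0 * std::f32::consts::PI * (k * t) as f32 / n as f32;
                re += x * angle.cos();
                im += x * angle.sin();
            }
            (re * re + im * im).sqrt()
        })
        .collect()
}

fn main() {
    // A pure sine that completes exactly 8 cycles in 64 samples
    // should produce a single peak at bin 8.
    let signal: Vec<f32> = (0..64)
        .map(|t| (2.0 * std::f32::consts::PI * 8.0 * t as f32 / 64.0).sin())
        .collect();
    let mags = dft_magnitudes(&signal);
    let peak = mags
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap()
        .0;
    println!("peak bin = {}", peak); // 8
}
```

Each bin's magnitude tells you how much energy the frame has at that frequency, which is exactly what the noise profile and subtraction steps operate on.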
3. Noise Identification
The algorithm compares the current audio spectrum against the noise profile. It identifies which frequencies match the noise characteristics and which contain the desired signal. Common noise patterns include hiss, hum, or environmental sounds with consistent spectral signatures.
4. Spectral Subtraction
The identified noise frequencies are subtracted or attenuated from the overall signal. The algorithm reduces the amplitude of frequencies matching the noise profile while preserving frequencies containing speech or music.
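One common form of this rule (my own sketch; the parameter names `over_sub` and `floor` are assumptions) subtracts a scaled noise estimate per bin but never lets the result drop below a fraction of the original magnitude. The floor is the "safety heuristic": without it, bins that go to zero and back cause musical-noise artifacts.

```rust
// Per-bin spectral subtraction with an over-subtraction factor and a
// spectral floor. `mag` and `noise` are magnitude spectra of equal length.
fn spectral_subtract(mag: &[f32], noise: &[f32], over_sub: f32, floor: f32) -> Vec<f32> {
    mag.iter()
        .zip(noise)
        .map(|(&m, &n)| (m - over_sub * n).max(floor * m))
        .collect()
}

fn main() {
    // Bin 1 would go to zero; the 10% floor keeps it at 0.05 instead.
    let cleaned = spectral_subtract(&[1.0, 0.5, 0.3], &[0.2, 0.5, 0.1], 1.0, 0.1);
    println!("{:?}", cleaned);
}
```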
5. Smoothing and Refinement
To avoid artifacts like musical noise (random twinkling sounds), the algorithm applies smoothing across time and frequency. This might involve gain reduction that varies gradually rather than abruptly.
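A simple way to get gradual gain changes is exponential averaging of each bin's gain across consecutive frames. This is a hypothetical helper of my own, not the project's code:

```rust
// Smooth per-bin gains between frames: alpha near 1.0 means heavier
// smoothing (the previous frame dominates), alpha near 0.0 means none.
fn smooth_gains(prev: &[f32], current: &[f32], alpha: f32) -> Vec<f32> {
    prev.iter()
        .zip(current)
        .map(|(&p, &c)| alpha * p + (1.0 - alpha) * c)
        .collect()
}

fn main() {
    // A gain that jumps 1.0 -> 0.0 only moves partway in one frame.
    let smoothed = smooth_gains(&[1.0, 0.0], &[0.0, 1.0], 0.8);
    println!("{:?}", smoothed);
}
```

The same idea can also be applied across neighboring frequency bins within one frame.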
6. Reconstruction
The processed frequency data is converted back to time domain audio using inverse FFT, producing the cleaned output signal.
Initially, I wanted to build the whole thing from scratch in Rust, but that would have wasted both time and tokens. So here are the libraries Claude suggested.
For FFT operations:
rustfft - Fast Fourier Transform implementation
realfft - Optimized for real-valued signals (typical for audio)
For audio I/O:
hound - Reading/writing WAV files
cpal - Real-time audio input/output
For math/signal processing:
ndarray - Multi-dimensional arrays
num-complex - Complex number operations
High-level concept
Read audio and convert to mono, float.
Frame the signal into overlapping windows.
For each frame:
Apply window (Hann/Hamming).
FFT → magnitude and phase.
Estimate/track noise magnitude spectrum.
Subtract noise spectrum with some safety heuristics.
Recombine modified magnitude with original phase.
iFFT to time domain.
Overlap‑add frames to reconstruct the enhanced signal.
Optionally apply simple post‑processing (clipping, normalization).
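The framing and overlap-add steps above can be sketched in plain Rust. This is a minimal skeleton under my own assumptions (a periodic Hann window with 50% overlap, and a `process` callback standing in for the FFT → denoise → iFFT stage); with that window and hop, the overlapping windows sum to 1, so an identity `process` reconstructs the interior of the signal exactly:

```rust
/// Periodic Hann window of length n.
fn hann(n: usize) -> Vec<f32> {
    (0..n)
        .map(|i| 0.5 * (1.0 - (2.0 * std::f32::consts::PI * i as f32 / n as f32).cos()))
        .collect()
}

/// Split `signal` into overlapping frames, window each frame, run `process`
/// on it (FFT -> denoise -> iFFT would happen there), then overlap-add the
/// frames back into one output buffer.
fn overlap_add(signal: &[f32], frame: usize, hop: usize, process: impl Fn(&mut [f32])) -> Vec<f32> {
    let win = hann(frame);
    let mut out = vec![0.0f32; signal.len()];
    let mut pos = 0;
    while pos + frame <= signal.len() {
        let mut buf: Vec<f32> = signal[pos..pos + frame]
            .iter()
            .zip(&win)
            .map(|(&x, &w)| x * w)
            .collect();
        process(&mut buf);
        for (i, &v) in buf.iter().enumerate() {
            out[pos + i] += v;
        }
        pos += hop;
    }
    out
}

fn main() {
    // With an identity `process`, a constant signal comes back unchanged
    // in the interior (edges lack full window overlap).
    let signal = vec![1.0f32; 32];
    let out = overlap_add(&signal, 8, 4, |_| {});
    println!("{:?}", &out[4..28]);
}
```

In the real pipeline, `process` is where the magnitude/phase split, noise subtraction, and inverse FFT happen; the overlap-add structure stays the same.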
After that, I can feed this in as context and let the AI create a plan and the implementation.
It definitely won't be one-shot; it will involve multiple rounds of feedback and further alignment from the user.
Model and Agent
Everything was built using Claude Code with GLM-4.7 as a custom model.
Total cost to build this app
Total cost: $18.41 (costs may be inaccurate due to usage of unknown models)
Total duration (API): 1h 27m 6s
Total duration (wall): 5h 42m 37s
Total code changes: 4954 lines added, 1987 lines removed
Usage by model:
glm-4.5-air: 27.5k input, 6.0k output, 86.2k cache read, 0 cache write ($0.1981)
glm-4.7: 1.3m input, 166.2k output, 39.1m cache read, 0 cache write ($18.22)
Repo
Check out my experiment here