You describe how it feels. It draws what it looks like.
There's a question I've been sitting with for a while:
What does a song look like?
Not the album art. Not the waveform. The feeling of it. The way Gymnopédie No.1 feels like fog over cold water. The way Bohemian Rhapsody feels like architecture that shouldn't work but does.
Spotify tried to answer this with their audio features API — energy, valence, danceability, tempo. Machine-derived numbers, most of them floats between 0 and 1. They deprecated it last year.
I think they were solving the wrong problem anyway.
## Your ears are the API
Spotify's audio analysis is about the signal. I'm interested in the experience.
So I built Song Portrait — a tool where you are the data source.
You describe a song across 8 perceptual dimensions:
- Energy — soft and still, or electric and alive?
- Mood — sitting in shadows, or reaching for light?
- Tempo Feel — floating, or driven?
- Texture — smooth like glass, or rough like gravel?
- Warmth — cold and distant, or warm and close?
- Complexity — a single thread, or layered and intricate?
- Space — intimate, like you're in the room, or vast, like standing outside at night?
- Memory — something new, or something that pulls at decades?
Move the sliders. The portrait draws itself.
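Under the hood, a description like this is just eight normalized numbers. Here's a sketch of how it could be represented — the interface and field names are my guesses, not the tool's actual data model:

```typescript
// Hypothetical shape for a song description: each slider is assumed
// to be normalized to [0, 1]. Field names mirror the list above.
interface SongDescription {
  energy: number;     // 0 = soft and still, 1 = electric and alive
  mood: number;       // 0 = shadows, 1 = light
  tempoFeel: number;  // 0 = floating, 1 = driven
  texture: number;    // 0 = glass-smooth, 1 = gravel-rough
  warmth: number;     // 0 = cold and distant, 1 = warm and close
  complexity: number; // 0 = single thread, 1 = layered and intricate
  space: number;      // 0 = intimate, 1 = vast
  memory: number;     // 0 = something new, 1 = decades deep
}

// Clamp raw slider readings into range before drawing.
const clamp01 = (v: number): number => Math.min(1, Math.max(0, v));
```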
## What it draws
Each set of values generates a spiral DNA structure — a pair of intertwining helices whose properties are all derived from what you felt:
- Number of strands: driven by Complexity (2–6)
- Number of turns: driven by Tempo Feel (floaty vs. driven)
- Radius and bloom: driven by Space and Energy
- Color palette: driven by Warmth and Mood
- Noise and distortion: driven by Texture
- Rung density (the cross-links between strands): driven by Complexity
- Vertical tilt: driven by Mood (sad songs droop, bright songs lift)
- Film grain overlay: driven by Memory — nostalgic songs get a grainy fog
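The mapping rules above can be sketched as one pure function from slider values to helix parameters. The formulas and ranges here are plausible guesses (only the 2–6 strand range comes from the list above); the tool's actual math isn't published:

```typescript
// Hypothetical slider → helix mapping. Every input is assumed to be
// a slider value in [0, 1]; output ranges are illustrative guesses.
function helixParams(d: {
  energy: number; mood: number; tempoFeel: number; texture: number;
  warmth: number; complexity: number; space: number; memory: number;
}) {
  return {
    strands: 2 + Math.round(d.complexity * 4), // Complexity → 2–6 strands
    turns: 3 + d.tempoFeel * 9,                // floaty = few turns, driven = many
    radius: 40 + d.space * 120,                // Space widens the helix
    bloom: d.energy,                           // Energy drives the glow
    hue: 210 - d.warmth * 180,                 // cold blues toward warm oranges
    lightness: 0.25 + d.mood * 0.5,            // dark moods stay dim
    noise: d.texture,                          // Texture adds distortion
    rungDensity: d.complexity,                 // cross-links between strands
    tiltDegrees: (d.mood - 0.5) * 20,          // sad songs droop, bright songs lift
    grain: d.memory,                           // Memory adds film-grain fog
  };
}
```

Keeping the mapping pure like this means the same description always draws the same portrait, which is what makes two people's portraits of the same song directly comparable.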
The result is different for every song, and different for every person's description of the same song.
Two people. Same song. Different portraits.
That's the part I find most interesting.
If you and I both describe "Black" by Pearl Jam, we might both agree it's low-energy, very dark, heavy on memory. But you might feel more complexity in the guitar work; I might feel more space in the production. Our portraits will be similar but not identical.
The portrait isn't a fact. It's a record of how you heard it.
That feels more honest than a database entry.
## Try the presets
I built in six presets to get you started:
| Song | Notable shape |
|---|---|
| Gymnopédie No.1 – Satie | Few strands, wide open, cool, grainy |
| Bohemian Rhapsody – Queen | Max complexity, high energy, maximum turns |
| Mr. Brightside – The Killers | Fast, bright, high memory |
| Africa – Toto | Warm, complex, nostalgic — my personal favorite result |
| Black – Pearl Jam | Dark, slow, spacious, deeply nostalgic |
| Blue (Da Ba Dee) – Eiffel 65 | High energy, fast, cold, clean |
Each portrait is animated — the helix rotates slowly, driven by Tempo Feel. You can download any frame as a PNG.
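A rotation rate driven by Tempo Feel can be reduced to one small function fed into a `requestAnimationFrame` loop. This is a minimal sketch under that assumption, not the tool's actual code:

```typescript
// Hypothetical: advance the helix angle at a rate scaled by Tempo Feel.
// Rates are guesses; floaty songs spin slowly, driven songs spin fast.
function rotationStep(tempoFeel: number, dtMs: number): number {
  const radPerSec = 0.1 + tempoFeel * 0.9;
  return radPerSec * (dtMs / 1000);
}

// In the browser, the loop would look roughly like:
//   let angle = 0, last = performance.now();
//   function frame(now: number) {
//     angle += rotationStep(tempoFeel, now - last);
//     last = now;
//     drawHelix(angle);
//     requestAnimationFrame(frame);
//   }
//
// And PNG download of the current frame is one canvas call:
//   const png = canvas.toDataURL("image/png");
```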
## Why I built this
I've been building a series of tools I call "useless but beautiful" — things that don't help you do anything, but reveal something.
Noise Field makes particle art from invisible forces. Breathing Clock encodes the time into organic forms. Code Scent translates your codebase into landscape.
Song Portrait continues that logic: take something invisible — the subjective experience of music — and give it a visible form.
I don't know if the portraits are "accurate." I'm not sure accuracy is the right standard for something that lives in the gap between sound and feeling.
What I do know: when I loaded the Satie preset and saw thin blue-grey spirals drifting in cold space, it felt right.
Free. No account. Runs entirely in your browser.
If you describe a song and like how it looks, I'd genuinely want to see it — reply here or find me @Clavis_Citriac.