Listen to Your Cloud: Co-Developing a CloudTrail Sonifier with an AI Partner
Back in 2010 I won a Duke's Choice Award at JavaOne for Log4JFugue, a system that converted log4j output into music streams. The core idea was simple: just as an auto mechanic can listen to a car and hear what's wrong, developers should be able to listen to their applications. You'd map your program's key verbs (create, process, destroy, error) to instruments like bass drum, snare, and cymbal crash, count occurrences in one-second buckets, and generate a chord per second. Busy seconds sounded thick. Quiet seconds sounded thin. Errors sounded wrong. You could literally hear your application's health while doing other work. (Shoutout to David Koelle, the creator of JFugue, the technology underlying Log4JFugue.)
That was sixteen years ago and the project has been on the shelf for a while. Recently I started wondering what it would look like to apply the same concept to AWS CloudTrail logs. Not log4j lines from a single application, but the firehose of API calls across an entire AWS account. I decided to find out, and I decided to do it by co-developing the system with Claude. What followed was one of the most interesting pair-programming sessions I've had, and a real education in what AI-assisted development actually looks like in practice.
Starting from the Idea
I gave Claude the context: JFugue was a Java library for programmatic music creation, Log4JFugue used it to sonify log files, and I wanted something similar for CloudTrail. Could it build a Python program that does a tail -f-style follow on CloudTrail events?
Within a minute I had a complete first version. It mapped AWS services to General MIDI instruments (EC2 got piano, S3 got marimba, IAM got trumpet), classified API actions into pitch ranges by their CRUD nature, added dissonant intervals for error events, and even hashed source IPs to stereo pan positions. The code was well-structured, properly documented, and showed a real understanding of the musical concepts behind the original project. A strong start.
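To make the mapping concrete, here is a sketch of that kind of scheme. The specific instrument numbers, pitch ranges, and function names below are illustrative, not the generated code verbatim:

```python
import hashlib

# AWS service -> General MIDI program number (standard GM assignments)
SERVICE_INSTRUMENTS = {
    "ec2": 0,    # Acoustic Grand Piano
    "s3": 12,    # Marimba
    "iam": 56,   # Trumpet
}

# CRUD nature of an API action -> a MIDI pitch range
CRUD_PITCH_RANGES = {
    "create": (72, 84),   # high register
    "read":   (60, 72),   # middle
    "update": (48, 60),
    "delete": (36, 48),   # low register
}

def classify_action(action_name: str) -> str:
    """Guess the CRUD category from the API action's verb prefix."""
    prefixes = {
        "create": "create", "put": "create", "run": "create",
        "describe": "read", "get": "read", "list": "read",
        "update": "update", "modify": "update",
        "delete": "delete", "terminate": "delete",
    }
    lowered = action_name.lower()
    for prefix, category in prefixes.items():
        if lowered.startswith(prefix):
            return category
    return "read"  # default to the least alarming register

def pan_for_ip(source_ip: str) -> float:
    """Hash a source IP to a stable stereo position in [0.0, 1.0]."""
    digest = hashlib.md5(source_ip.encode()).digest()
    return digest[0] / 255.0
```

Hashing the IP (rather than using Python's built-in, per-run-randomized hash) keeps each caller at the same stereo position across runs.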
It also didn't work at all.
The Debugging Dance
What followed was a series of increasingly specific problems that we worked through one at a time. The first was boring: AWS credentials weren't configured. Claude walked through the options (aws configure, environment variables, SSO) and noted the minimum IAM policy needed. Fair enough.
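For reference, the read side of this tool needs very little: CloudTrail's lookup API is a single action. A minimal policy, expressed here as a Python dict in the standard IAM policy shape (the exact policy Claude suggested may have differed):

```python
# Minimal IAM policy for reading events via lookup_events.
MINIMUM_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudtrail:LookupEvents",
            "Resource": "*",
        }
    ],
}
```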
The second was more interesting: the program ran fine but produced no sound. This is the classic MIDI trap and I'll admit I should have seen it coming. The original code used the mido library to send MIDI messages, but MIDI messages are just instructions. Without a synthesizer listening on the other end, you get silence. Claude proposed adding a sounddevice backend that synthesized audio directly using numpy waveforms, no MIDI routing required. That was the right call. It also added a --test flag that plays a C major scale on startup so you can verify audio works before waiting for CloudTrail events. Small thing, huge time saver.
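The direct-synthesis approach is simple enough to sketch. This is an illustrative version, not the generated code: each note is a decaying sine wave rendered with numpy, and playback is one sounddevice call (commented out here so the snippet doesn't require audio hardware):

```python
import numpy as np

SAMPLE_RATE = 44100

def midi_to_freq(note: int) -> float:
    """Convert a MIDI note number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def render_note(note: int, duration: float, velocity: int = 96) -> np.ndarray:
    """Synthesize one note as an exponentially decaying sine wave."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    envelope = np.exp(-3.0 * t / duration)
    amplitude = velocity / 127.0
    wave = amplitude * envelope * np.sin(2 * np.pi * midi_to_freq(note) * t)
    return wave.astype(np.float32)

def c_major_scale() -> np.ndarray:
    """The --test startup check: a C major scale, one note per 0.25 s."""
    notes = [60, 62, 64, 65, 67, 69, 71, 72]   # C4 .. C5
    return np.concatenate([render_note(n, 0.25) for n in notes])

# Playback is then a single call:
#   import sounddevice as sd
#   sd.play(c_major_scale(), SAMPLE_RATE)
#   sd.wait()
```

The key point: the float array is the sound. No MIDI routing, no external synthesizer, nothing to misconfigure.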
Then we hit the CloudTrail delivery delay problem. I was generating events but seeing nothing. Turns out CloudTrail's lookup_events API has a 5 to 15 minute delivery delay from when an event occurs to when it shows up in the API. Our initial 2-minute lookback window was missing everything. Claude widened it to 20 minutes. Problem solved, but this was the kind of thing where my AWS experience (I've been working with CloudTrail for years) and Claude's ability to quickly restructure the code made for a good partnership. I knew the problem, Claude implemented the fix in seconds.
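A sketch of the polling logic, under my assumptions about how the final code was structured (function names are mine). The window math is pure datetime; the fetch uses boto3's real lookup_events paginator, with the client passed in. One consequence of the wide lookback: consecutive windows overlap, so real code must deduplicate by EventId:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Covers CloudTrail's 5-15 minute delivery delay with margin to spare.
LOOKBACK = timedelta(minutes=20)

def poll_window(now: datetime, last_seen: Optional[datetime]) -> tuple:
    """Window for the next lookup_events call: from the later of
    (now - LOOKBACK) and the newest event already played, up to now."""
    start = now - LOOKBACK
    if last_seen is not None and last_seen > start:
        start = last_seen
    return start, now

def fetch_events(client, start: datetime, end: datetime):
    """Page through CloudTrail lookup_events for the window.
    `client` is a boto3 cloudtrail client. Callers should dedupe
    by each event's EventId, since windows overlap."""
    paginator = client.get_paginator("lookup_events")
    for page in paginator.paginate(StartTime=start, EndTime=end):
        yield from page.get("Events", [])
```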
Getting the Music Right
Once events were flowing and audio was working, we moved to the part I actually cared about: making it sound right.
The first version played events sequentially, one note per event. This was fundamentally wrong. In Log4JFugue, the whole point was the chord-per-second model. All events within a one-second window get stacked into a single chord. You hear density. Fifteen API calls in one second produces a thick, rich chord. One lonely DescribeTable produces a single thin tone. The difference is immediately perceptible, and that perceptual bandwidth is the whole reason sonification works.
I explained this to Claude and it restructured the entire architecture around a ChordBucket data class. Events get grouped by timestamp, deduplicated pitches form the chord, and repeated occurrences of the same event drive up velocity instead of adding more notes. This was a substantial rewrite and it got it right on the first pass.
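The shape of that model can be sketched in a few lines. Field and function names here are illustrative, not the generated ChordBucket verbatim:

```python
from dataclasses import dataclass, field

@dataclass
class ChordBucket:
    """All events from the same one-second window, collapsed into one chord."""
    second: int                                       # epoch second of the bucket
    velocities: dict = field(default_factory=dict)    # pitch -> velocity

    def add(self, pitch: int, base_velocity: int = 80, bump: int = 10) -> None:
        # A repeated event raises the existing note's velocity
        # instead of adding a duplicate note to the chord.
        if pitch in self.velocities:
            self.velocities[pitch] = min(127, self.velocities[pitch] + bump)
        else:
            self.velocities[pitch] = base_velocity

    @property
    def pitches(self) -> list:
        return sorted(self.velocities)

def bucket_events(events):
    """Group (epoch_second, pitch) pairs into per-second ChordBuckets."""
    buckets = {}
    for second, pitch in events:
        buckets.setdefault(second, ChordBucket(second)).add(pitch)
    return [buckets[s] for s in sorted(buckets)]
```

Fifteen distinct calls in one second become a fifteen-note chord; fifteen copies of the same call become one loud note. Both are audible, and they sound different.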
Then came the error sounds. I asked Claude to make errors more prominent. It went for it: minor seconds, tritones ("the devil's interval" as it noted), square wave timbres, noise bursts, a 55 Hz bass rumble, and error chords that rang 50% longer than normal chords. I ran it and nearly fell out of my chair. "Can you dial back the error effect about half? We want people to notice the error without giving them a heart attack." Claude's response: let's make it an alert, not a cardiac event. It dialed everything back, and I asked it to print the actual error messages alongside the musical output. Now you hear the dissonance and can glance over to see "s3.GetObject: AccessDenied" right there in the terminal.
The Timing Problem
The trickiest issue was pacing. After switching to 60-second poll intervals (to stop getting throttled by CloudTrail's API rate limits), we had a new problem: 20 seconds of music followed by 40 seconds of dead silence. The program was playing all the chords as fast as possible, then sleeping until the next poll. That's not ambient monitoring, that's Morse code.
The fix was to stretch each chord's duration to fill the entire poll interval. Twenty chords across 60 seconds means each chord sustains for 3 full seconds, flowing directly into the next. This also created a nice emergent property: busy intervals with many events produce rapid-fire chord changes, while quiet intervals produce long sustained drones. The pace of the music now encodes the activity level, not just the chord thickness. That's actually better than what Log4JFugue did.
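The arithmetic is trivial, which is part of why it beat the threaded pre-rendering approach. A sketch, assuming a fixed 60-second poll interval:

```python
def chord_durations(num_chords: int, poll_interval: float = 60.0) -> list:
    """Stretch the interval's chords to fill the whole poll window.
    20 chords -> 3 s each; 2 chords -> 30 s drones; busy windows
    change fast, quiet windows sustain. No dead air either way."""
    if num_chords == 0:
        return []
    return [poll_interval / num_chords] * num_chords
```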
What I Learned About AI Co-Development
This project took a single extended conversation. The system went from concept to working, audible, properly-paced CloudTrail sonification through maybe a dozen iterations. Some observations from the process:
Claude is an excellent first-draft generator and a fast refactorer. The initial code was structurally sound even if it didn't work out of the box. When I described what needed to change, the changes came fast and were usually right.
Domain knowledge still matters enormously. I knew about the CloudTrail delivery delay, about the chord-per-second model being essential, about MIDI needing a synthesizer. Claude didn't volunteer any of these things. But once I identified the issue, it could fix it faster than I could have.
The back-and-forth is the whole point. This wasn't me typing a prompt and getting a finished product. It was a genuine iterative development process: try it, find what's wrong, describe the problem, get a fix, try again. The pattern is much closer to pair programming than it is to code generation.
And sometimes you have to say "go back." We went down a path trying to eliminate tiny audio gaps between chords by pre-rendering and threading. It added complexity without solving the actual problem (which turned out to be the pacing model, not the gap). I asked Claude to revert to the simpler version and we took a different approach. That's a normal part of development, and it worked fine here too.
If you want to try it yourself, you need boto3, sounddevice, and numpy. Point it at an AWS account with some activity and listen. After a few minutes you'll start to develop an intuition for what "normal" sounds like. And when something goes wrong, you'll hear it. That was the whole point of Log4JFugue, and it turns out the idea translates to the cloud just fine.