
Matt Hogg

Originally published at matthogg.fyi

Wherein I Find Myself Concerned About Sparkles

This trend of using sparkles (✨) as the visual metaphor when slapping “AI” onto our products has been bothering me lately. It’s not just sketchy to pass off large language models (LLMs) as something magical. There are also deeper implications here about how design plays into Big Tech’s desired narratives.

You’ve seen it, I’m sure. Every company under the sun has aligned on the sparkles to represent their “AI” features: OpenAI, Apple, Google, Notion, Miro, Atlassian, Spotify, Adobe, Grammarly, Zoom, Wix, and more. All the cool kids are doing it.

The emoji itself is decades old and is basically a cultural export of Japanese mobile provider NTT Docomo. Ever since, it’s proven to be surprisingly flexible and adaptable—used to convey a wide variety of sentiments like sarcasm, emphasis, positivity, delight, wonder, and magic. Somewhere between 2016 and 2020, this “magic” symbolism was first applied to tech products, and that’s where we find ourselves today.

“Dad, why do we use sparkles as the icon for AI?”
“Because ChatGPT came out and designers were amazed, so they chose an icon to show how amazed they were.”
“So the icon isn’t about the AI, but people’s reaction to it?”
“Yep.”
“What about when the next amazing thing comes out?”

Anthony Hobday

Behind this visual decoration there are still computers doing things for us, albeit inside a non-deterministic black box. It’s fine to be excited about new technology, but not at the expense of fully understanding what it can and can’t do. It’s a shame, because “AI” is quite good at some things—but saving the world is not one of them. Unfortunately, capitalism requires outsized and unending growth rather than modest gains over time. As Jordan Rothe puts it:

In short, people expect AI to be magic. While it’s a truly remarkable technology that I anticipate will impact our world in huge ways we haven’t even imagined, it has its own limitations and biases and requires tremendous work and data to function; it ain’t magic.

I use quotes around “AI” because these products are in fact LLMs and not actually “intelligent” in the sense Big Tech would have you believe. They’re stochastic parrots—an achievement in autocomplete, perhaps, but not an artificial brain by any means.

I’ve had to repeatedly explain to more than one excited layperson that they’re projecting an ability onto a system that literally cannot do what they’re excited about (e.g., “Can you believe what ChatGPT can do?!?”).

It’s a profound powerful valuable array of serious tools and we’re determined to embrace it with the same rush and juvenility as “web3”

Tom Goodwin

There are numerous indications that current progress in “AI” may already be peaking. Also, rumors of “model collapse” are brewing—a shocking but entirely predictable knock-on effect of how LLMs work. This essay isn’t specifically about whether “AI” is a scam, but it kinda is (if we keep selling it this way).

With a better understanding of how “AI” works and its true utility, there are far better and more accurate symbols out there. We could’ve chosen robots (🤖), dice (🎲), or even slot machines (🎰).

Better yet, why not—and stop me if you’ve heard this before—depict the actions these fancy buttons actually do for the user? From the article:

But since hitting the vague sparkles leads to a different action on each service, Saffar wonders whether it could prevent users from creating a mental model of how the product works, set unrealistic expectations, and leave them confused and annoyed.

I’d argue that nobody has that mental model, not even the very people selling this technology. We’re not using a visual metaphor for what the “AI” can do but for what we want everyone to hope it will do.

For most traditional UI icons, the visual metaphor is predetermined and easy to interpret. For instance, the “save” icon used to be a floppy disk (god, I’m old) because that’s where the software would put your file. The path from symbol to action was a very straight line. For sparkles that path is a labyrinth.

I know I’m ranting about this cute little icon but that’s because it’s more than just a user experience (UX) challenge.

When is an icon just an icon? Never! Design is political. There’s always an underlying agenda that puts forth a certain ideology or bias, whether we realize it or not. It’s propaganda, but for what? As Ruben Pater writes in his book:

The political system in which the designer works and lives cannot be disconnected from the design she/he creates. A political ideology is continuously being produced and communicated through design. Acknowledging this can give designers more agency in their practice to either serve or subvert the status quo.

Technology is political, too. Much has been said for years about the biases baked into algorithms or models by way of the programmers who created them. The energy consumption required to mine Bitcoin or summarize an article into bullet points rivals that of small countries. Big Tech is downright giddy about the jobs that can be replaced by “AI”.

I have to wonder what narrative is being given to us, and how. At best, it’s misguided techno-optimism in the name of progress. At worst, it’s a deliberate grift. But either way, the purpose of a system is what it does.

These trade-offs are apparently worth it for enough people because it’s happening as I write this. But let’s not dwell on the consequences, right? We can’t neatly summarize all of that nuance in a cute little icon anyway.

Technical illiteracy, marketing hype, copyright infringement, content theft, disinformation, rampant capitalism, environmental damage, societal upheaval, ethics in tech, the future of work—those fun little sparkles are doing a lot of heavy lifting!
