The photo graveyard — why your camera roll is full of things you’ll never look at again
You take a photo of a parking spot so you can find your car later. You take a photo of a wine label you liked. You take a photo of a whiteboard at a conference, a receipt for an expense you’ll submit, a book recommendation a friend wrote on a napkin, the tag inside a sweater because you might want to buy another one in a different color, the back of a Wi-Fi router because the password is on it and you’re going to need it again in this house in a month. You take these photos at a rate of maybe ten or twelve a week. You almost never look at any of them again. They are now, statistically, the majority of your camera roll, and they will remain there until you die or change phones.
This is the friction this post is about. I’m going to argue it’s worth taking seriously, that the existing solutions are all wrong in instructive ways, and that the reason nobody has solved it is interesting enough on its own to be worth a post even if the friction itself turns out not to be a product.
The thing that’s distinctive about photo-graveyard photos is that they’re not memories. The camera, as a device, was originally designed for memory — birthdays, weddings, vacations, your kid’s face at three. The shape of every photo product, from the album to iCloud Memories to the share-with-grandma flow, is built around that assumption. Photos are precious. Photos accrue emotional value. Photos are the artifact you want to preserve and return to. Software that touches photos has been built, almost without exception, on top of this premise.
But the photos I described in the opening paragraph aren’t precious. They have no emotional value. They were captured for retrieval, not for preservation, and the moment of retrieval almost always fails to come — either because you forgot the photo existed, or because by the time you remembered it you couldn’t find it among the thousands of others, or because the act of searching for it took longer than just solving the problem a different way (re-typing the Wi-Fi password by squinting at the router, asking the friend for the recommendation again, walking around the parking garage hoping). The photo-as-retrieval-object is a fundamentally different artifact than the photo-as-memory, and almost no software treats it differently.
The existing solutions, when you look closely, are all variations on better search. Apple Photos lets you search by content — “wine,” “receipt,” “whiteboard” — and it works astonishingly well as raw text recognition and image classification. Google Photos does the same, sometimes better. The problem is that better search assumes the user remembers they took the photo and is actively trying to find it. The actual failure mode is that the user has forgotten the photo exists, and search doesn’t help with forgetting. You can’t search for something you don’t remember having.
A few apps have tried to attack this from a different angle. Apple’s Live Text lets you tap into a photo and pull text out of it as if it were a document, which is genuinely useful for receipts and Wi-Fi passwords if you remember to look in the photo. Notion-style scrap apps like Captur, Mem, and others let you take a photo and immediately tag it with intent (“car parked,” “wine to remember”), which is the right idea but requires the user to do the tagging work in the moment, which is exactly the moment the user is least willing to do work; they’re already in a hurry, which is why they’re taking a photo instead of writing a note. Reminders-with-photo is a category Apple has been quietly building toward, where you take a photo and the system offers to set a reminder around it, which is closer but still requires deliberate intent at capture time.
The deeper problem, and the part that makes this friction philosophically interesting rather than just annoying, is that the retrieval prompt is the missing piece. You don’t need better photo search; you need the photo to find you at the moment you need it. The parking-spot photo should resurface when you walk back into the parking garage, not when you remember to search for “parking.” The wine label should resurface when you’re standing in a wine shop, not when you’re trying to remember which wine you liked three months ago. The Wi-Fi password should resurface when your phone connects to a new network in that house. None of these surfacings require fancy AI — they require spatial and temporal context awareness that the device already has and chooses not to use for this purpose.
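To make the no-fancy-AI claim concrete, here is a minimal sketch of the parking-garage case: the device already stamps each photo with a capture location (EXIF GPS), so resurfacing is just "which retrieval photos were taken near where I am right now." Everything here is hypothetical illustration, not any shipping API; the `Photo` record and its intent label stand in for metadata a real system would capture automatically:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, meters

@dataclass
class Photo:
    label: str   # hypothetical intent label, e.g. "parking spot"
    lat: float   # capture latitude  (EXIF already records this)
    lon: float   # capture longitude

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def photos_to_resurface(photos, here_lat, here_lon, radius_m=150):
    """Return retrieval photos captured within radius_m of the current spot."""
    return [p for p in photos
            if haversine_m(p.lat, p.lon, here_lat, here_lon) <= radius_m]

# A parking-spot photo taken in a garage, checked as you walk back in;
# the wine-label photo, taken a kilometer away, stays quiet.
roll = [
    Photo("parking spot", 47.6097, -122.3331),
    Photo("wine label",   47.6205, -122.3493),
]
nearby = photos_to_resurface(roll, 47.6098, -122.3330)
```

The hard part, as the paragraph above says, is not this arithmetic; it is that only the platform vendor can run the check passively and surface the result on the lock screen.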
Which raises the question of why nobody has shipped this. The technical pieces are all available. iOS exposes geofencing, location triggers, time-of-day awareness, beacon detection, and on-device image classification. Android exposes more. The hardware is willing. The problem, I think, is that the category is awkwardly positioned: it’s too small for Apple or Google to prioritize as a major feature, but too platform-dependent for an indie developer to build well, because the most important capabilities (deep integration with the camera roll, system-level location triggers, lock-screen surfacing) are restricted to the platform vendors. An indie developer can build most of this, but the version they can build is meaningfully worse than the version Apple could build in a weekend, which Apple shows no sign of doing.
So is it a product? My honest answer, after thinking about it more carefully than I expected to when I started writing this post, is probably not in the form most builders would attempt it. A standalone “photo memory” app would face the cold-start problem that ruins all secondary photo apps — users won’t switch their primary camera roll, which means your app only sees photos they deliberately route to it, which means the retrieval problem you’re solving is a fraction of the real problem. The version that would work is a feature inside a primary system the user already lives in — a camera roll, a notes app, a maps app, a wallet — where the retrieval prompt becomes a natural extension of an existing surface. That’s a feature, not a product, and it’s a feature most likely to be built by Apple or Google when they decide to.
The Tiny Frictions point of view, which I’m working out as I write this series, is that not every friction is worth being a product. Some frictions are worth naming clearly so that the platforms eventually solve them, and some frictions are worth solving in a deeply embedded way inside an adjacent product, and some are worth living with because the cost of solving them exceeds the cost of tolerating them. The photo-graveyard friction is, I think, in the second category — it’s a feature inside a product I haven’t built, and the product I haven’t built is something like a contextual scratchpad that lives in the small gap between the camera, the notes app, and the reminders app. Whether that’s a thing I or anyone else should build is a question for a different post.
What I want from you in the comments is the friction in your own life that has this same shape — the thing you do constantly, the thing you’ve worked around with a half-solution, the thing nobody seems to have solved properly. The next post in this series will probably be one of yours.
quackbuilds.com — @itsevilduck 🦆