Working on AI platform development these days, I find myself thinking about many things. I often encounter expectations that adopting AI will bring about some revolutionary change, but working in the field, I sometimes see a different picture.
I often hear that AI isn't a magic wand. I agree to some extent. When you layer AI on top of inefficient processes, it only speeds things up without solving the fundamental problems. Sometimes I think AI is closer to a tool that accelerates what you're already doing.
Of course, AI's strengths are clear. It excels at handling unstructured data like emails, messages, and documents. But ironically, the tasks dealing with this kind of data are often not clearly defined. They exist only in the heads of experienced people, or are passed down as "that's just how we've always done it."
That's why I think it's worth organizing a few things before applying AI. Where does the data come from? What needs to be extracted and interpreted? What systems will the results be reflected in? AI seems to work better once this structure is somewhat clear.
Seeing the Essence of Problems
This makes me think about something else. Organizing processes is ultimately about looking at the essence of problems. And this role usually falls to the PO (Product Owner).
Being a PO is a job anyone can do, but not one just anyone should do. That may sound old-fashioned, but it's what I've observed in the field over the past few years. When problems arise, I often see a tendency to focus only on the symptoms and patch just those.
In fact, quickly grasping symptoms and presenting solutions is an area where AI does quite well. Gathering data, finding patterns, referencing similar cases to provide answers. But looking at the essence of problems seems to be a different dimension entirely. Sensing what lies beneath the surface, reading context between things that don't seem connected—I think this requires deep experience and insight as a foundation.
When I discuss these things, I often think of SpaceX. When Elon Musk started his space venture, everyone said "space development is just expensive by nature." At the time, it cost about $50,000 to send 1kg of cargo on a space shuttle. No one seemed to think that was strange.
But Musk kept asking "Why is it so expensive?" And the answer turned out to be surprisingly simple. Rockets costing billions of dollars were being used once and dumped in the ocean, so of course it was expensive. It was the same structure as flying a plane once and scrapping it.
So they built reusable rockets, and now the cost has dropped to about $2,200 per kg—more than a 20-fold reduction. The problem wasn't "space development is expensive" but "rockets are disposable."
What made the difference wasn't fixing a symptom; it was changing the structure. That, I realized, is what it means to see the essence.
Where to Deploy People
Looking back at the past year, Plab definitely has a good culture and good processes. But there are also areas for improvement. No individual can do everything well. Ultimately, what matters is how you position and make use of people who are good at different things.
If there's something I want to solve and someone holds the key to it, I need to have more conversations with that person and build a relationship. Helping each other and working together—perhaps that's the essence of working in an organization.
Why Can't We Do the Same with People?
On the other hand, I also think about this. When I'm vibe-coding with AI, I give feedback very frequently. Dozens or hundreds of revisions, corrections, and pushbacks in a day. This doesn't seem right, try again, let's go this direction—I speak without hesitation.
But I can't do that with people. I'm probably doing less than 1/100th of what I do with AI. Worried about making the relationship awkward, afraid of hurting them, or just because it's bothersome—for various reasons, I delay feedback.
Yet we expect people to grow and change. We expect results without giving feedback. Perhaps being honest, frequent, and specific with people—like we are with AI—might actually be better for them.
Where Technology Belongs
As I build an AI platform, this is what I keep coming back to. Making good technology is important, but we also need to ask whether the place for that technology is ready, and whether the right people are positioned to prepare it.
I keep learning like this every day. Since growing people is such an important job as a leader, I write this page in my diary before leaving work, hoping my learning won't end with just me.