
Praveen Kumar

No code for a week. That is why we could measure the AI ROI.

A medical equipment marketplace came to us last year. Their job was connecting hospitals and procurement teams to manufacturers and distributors. Every match was taking four to eight months. Not because anyone was slow or careless. Just emails, spreadsheets, and trade show conversations that somehow turned into a months-long back and forth every single time. They wanted to bring AI in and fix it.
Before anyone opened a code editor, I asked for a week. No proposals, no architecture diagrams. Just sitting with the team, watching how work actually got done, then getting on calls with the people using the platform.

There is a moment in those conversations I always wait for. Someone starts by walking you through the process, and then at some point they stop doing that and just tell you what it actually costs them. Not as a complaint. Just plainly.

That deal did not close because we could not find the right supplier in time. That supplier went with someone else because the response took too long. That quarter came in slower than it should have.
That is what you are listening for. Not "where does this feel inefficient" but "where does slow actually hurt the business."

For this team it kept coming back to the matching. A buyer would come in with a requirement, someone on the team would go through the supplier database by hand, factor in location, certifications, availability, past transactions, and build a shortlist. Same logic, done from scratch, every single time.
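To make that concrete, here is a minimal sketch of the kind of scoring pass that replaces the manual shortlist. Every name, field, and weight below is an assumption made up for illustration, not the marketplace's actual system; the point is only that the checklist the team ran by hand can be written down once and reused.

```python
from dataclasses import dataclass, field

# Hypothetical data shapes; field names and weights are illustrative only.
@dataclass
class Requirement:
    region: str
    certifications: set
    needed_by_days: int

@dataclass
class Supplier:
    name: str
    region: str
    certifications: set = field(default_factory=set)
    lead_time_days: int = 30
    past_deals: int = 0

def score(req: Requirement, s: Supplier) -> float:
    # The same checklist the team ran by hand: location, certifications,
    # availability, past transactions.
    location = 1.0 if s.region == req.region else 0.0
    certs = len(req.certifications & s.certifications) / max(len(req.certifications), 1)
    availability = 1.0 if s.lead_time_days <= req.needed_by_days else 0.0
    history = min(s.past_deals, 10) / 10  # cap so one large supplier cannot dominate
    # Placeholder weights; a real system would tune these against outcomes.
    return 0.3 * location + 0.35 * certs + 0.25 * availability + 0.1 * history

def shortlist(req: Requirement, suppliers: list, k: int = 5) -> list:
    # Rank every supplier against the requirement and keep the top k.
    return sorted(suppliers, key=lambda s: score(req, s), reverse=True)[:k]
```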

So that is where the AI went. But before we wrote a single line, we sat down and agreed on the exact numbers we were trying to move. Time to first qualified match. Deals reaching the next stage within the first month. Revenue closing in the quarter it came in instead of slipping to the next one. We named those things, wrote them down, and put someone's name next to each one.
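As a sketch of what writing those numbers down can look like in code, here is one way to compute two of them from deal records. The record fields and dates are invented for illustration; real ones would come from whatever CRM or database the team already uses.

```python
from datetime import date
from statistics import median

# Invented deal records, purely for illustration.
deals = [
    {"created": date(2024, 1, 3), "first_qualified_match": date(2024, 1, 5),
     "advanced_stage": date(2024, 1, 20)},
    {"created": date(2024, 1, 10), "first_qualified_match": date(2024, 2, 2),
     "advanced_stage": None},
]

def days_to_first_match(deal):
    match = deal["first_qualified_match"]
    return (match - deal["created"]).days if match else None

def advanced_within_30_days(deal):
    stage = deal["advanced_stage"]
    return stage is not None and (stage - deal["created"]).days <= 30

# Metric 1: time to first qualified match, in days.
match_times = [t for t in (days_to_first_match(d) for d in deals) if t is not None]
print("median days to first qualified match:", median(match_times))

# Metric 2: share of deals reaching the next stage within the first month.
rate = sum(advanced_within_30_days(d) for d in deals) / len(deals)
print("advanced within 30 days:", f"{rate:.0%}")
```

The tooling matters less than the name written next to each number: someone owns it before the build starts.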

When we looked at the numbers afterward, time to first qualified match had come down from around three weeks to under two days. Deals moving to the next stage in the first month went from 30 percent to close to 60 percent. Revenue that had been slipping started closing in the same quarter it came in.

The ROI was measurable because we had decided what to track before we started. That is the only reason we could say with any confidence that the AI had done anything at all.

Most teams never have that conversation. They ship whatever looked promising in the demo, and the question of what the AI actually delivered does not come up until a year in, when the answer is harder to sit with than it needed to be.

Here, the ROI was there to see because the question got asked before the build started, not after.

What did you actually measure? Or is that conversation still waiting to happen?
