One thing I've seen more than once in integration work is this: the vendor API becomes the problem.
At the start, it always looks straightforward. There's an API, the docs exist, and the plan seems clear enough.
Then the real work starts.
A field comes back in a different format than expected. An endpoint works in one environment and then acts differently in another. Something that looked fine in testing starts failing once real data or real usage hits it. I've also had cases where the docs looked fine, but the actual response shape coming back from the API was not something I could safely trust.
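One cheap defense against that last problem is to validate the response at the boundary instead of trusting the documented shape. Here's a minimal sketch in TypeScript; the `Invoice` type, its fields, and `fetchInvoice` are invented for illustration, not taken from any real vendor:

```typescript
// Hypothetical shape we *want* from the vendor (names invented for this example).
interface Invoice {
  id: string;
  amount: number;
}

// Runtime type guard: checks the untyped payload field by field,
// instead of trusting the vendor docs via a blind type assertion.
function isInvoice(value: unknown): value is Invoice {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "string" && typeof v.amount === "number";
}

async function fetchInvoice(url: string): Promise<Invoice> {
  const res = await fetch(url);
  const body: unknown = await res.json();
  if (!isInvoice(body)) {
    // Fail loudly at the boundary rather than deep inside the system.
    throw new Error(`Unexpected vendor response shape: ${JSON.stringify(body)}`);
  }
  return body;
}
```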
Or the API works, but not well enough to build the rest of the project around it with any confidence.
That is usually the point where you have to make a choice.
You can wait for the vendor to sort it out, which usually means your timeline starts depending on theirs.
Or you can accept that the API is not stable enough and design around that.
Most of the time, I would rather take the second option.
If an API is shaky, I do not want the rest of the system tightly tied to it. I would rather isolate it and reduce the damage. Sometimes that means putting a small layer in front of it. Sometimes it means normalizing the response before the rest of the system touches it. Sometimes it means retries, queues, or some temporary workaround so one bad dependency does not slow everything else down.
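To make the "small layer" idea concrete, here's a rough sketch of what I mean: a thin adapter that owns every call to the vendor and maps the raw response into an internal shape before anything else sees it. All the names here (`VendorCustomerAdapter`, the field names, the endpoint path) are made up for the example:

```typescript
// Internal shape the rest of the system is written against.
interface Customer {
  id: string;
  email: string;
}

// Raw shape the vendor actually returns today (invented for this example);
// note the different field names and the number-typed id.
interface VendorCustomerResponse {
  customer_id: number;
  contact_email: string;
}

// The only place in the codebase that knows about the vendor's shape.
// If the vendor changes a field, the fix stays inside this adapter.
class VendorCustomerAdapter {
  constructor(private baseUrl: string) {}

  async getCustomer(id: string): Promise<Customer> {
    const res = await fetch(`${this.baseUrl}/customers/${id}`);
    if (!res.ok) {
      throw new Error(`Vendor returned ${res.status} for customer ${id}`);
    }
    // In practice you'd validate this like the guard above rather than cast.
    const raw = (await res.json()) as VendorCustomerResponse;
    // Normalize before anything downstream touches it.
    return {
      id: String(raw.customer_id),
      email: raw.contact_email,
    };
  }
}
```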
Nothing fancy. Just enough to stop one unreliable piece from controlling the whole project.
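Same goes for retries: a small generic wrapper is usually enough. A sketch, with arbitrary attempt counts and delays rather than tuned values:

```typescript
// Generic retry with exponential backoff. Attempt count and base delay
// are arbitrary starting points, not tuned values.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off: 200ms, then 400ms, then 800ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage: wrap the flaky call without changing anything at the call site.
// const customer = await withRetry(() => adapter.getCustomer("42"));
```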
I think this is where a lot of teams lose time. Everyone keeps waiting for the vendor's "real fix," even when it is obvious that waiting is just blocking the rest of the work.
I would rather keep things moving and clean it up later if the vendor side improves.
For me, the point is simple: if a vendor API breaks, the project should not break with it.