The Problem
If you're building a full-stack app with JavaScript on both ends, you've got options. tRPC, Next.js server actions, Remix loaders—these let you share types directly between frontend and backend. Nice.
But if your backend is Kotlin, Go, Rust, or anything that isn't JavaScript? You're back to the old problem. The backend team nests user.email inside a new user.profile object. The frontend still reads user.email. Nobody catches it until production.
Or this one: the backend changes a field from required to optional, the frontend still assumes it's always there, and now you've got undefined is not an object in production.
I've been building web apps for a while. These problems keep showing up:
Contract drift. The backend and frontend have their own ideas about what the API looks like. They slowly diverge until something breaks.
Boilerplate everywhere. Every endpoint means writing route definitions on the server, then matching types in TypeScript, then fetch calls. If you're using OpenAPI for docs, you're writing the same information again in annotations. Same thing, four or five different ways.
Runtime surprises. TypeScript's type safety is great—until your types don't match reality. Then it's worse than no types at all, because you trusted them.
Documentation rot. You write API docs, they're accurate for about a week, then they slowly become fiction.
It's all the same problem: no single source of truth. The "real" API ends up existing in the backend handlers, the frontend types, someone's head—and they're constantly out of sync.
When I started building Subflag, a feature flag service, I ran into this immediately. Kotlin/Ktor backend, React/TypeScript frontend. Organizations, projects, flags, targeting rules, environments—70 endpoints across 16 API domains. Plenty of surface area for drift.
I wanted one source that defined the API. Everything else gets generated from it.
Here's how that went.
The OpenAPI-First Approach
Two ways to use OpenAPI:
Code-first: Write your backend, add annotations, generate the spec from your code. The code is the source of truth. The spec is a byproduct.
Spec-first: Write the OpenAPI spec first, generate code from it. The spec is the source of truth. The code has to match.
I went with spec-first. Here's why.
Code-first doesn't actually fix the drift problem. Sure, the spec describes what your backend does right now. But your frontend types are still written by hand. Change the backend, regenerate the spec, and... the frontend still has stale types. Unless someone remembers to update them.
And the spec itself? It's always playing catch-up. When you're rushing to ship, nobody's updating annotations. The documentation drifts too.
Spec-first flips this around. You edit one YAML file, run the generators, and both your Kotlin server interfaces and TypeScript client types update. Change the spec, everything follows.
The spec becomes the contract both sides compile against. Frontend expects a field the spec doesn't define? TypeScript catches it. Backend handler doesn't match the interface? Kotlin catches it. Drift becomes a compile error, not a production bug.
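To make that concrete, here's a minimal TypeScript sketch of the `user.profile` example from earlier. The `User` type stands in for what a generator would emit from the spec after the backend change; all names are illustrative:

```typescript
// Illustrative stand-in for a generated type after the backend nests
// email under a profile object and the spec is regenerated
interface User {
  profile: { email: string };
}

function renderEmail(user: User): string {
  // The old read no longer compiles once the type changes:
  // return user.email;       // type error after regeneration
  return user.profile.email; // the compiler forces the new shape
}
```

The same mismatch that once surfaced as a production `undefined` is now a build failure.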
It also means documentation can't be an afterthought. With code-first, the spec comes after you write the code—so good descriptions and examples are extra work you do later. Or never. With spec-first, you're writing docs as you design the API.
The tradeoff: you have to write YAML before you write code. Some teams hate that. For me, it's become the natural way to think about an API—figure out the contract, then implement it.
The Architecture
Here's the setup:
     ┌──────────────────────────────────────────┐
     │                 api.yml                  │
     │         (Single Source of Truth)         │
     └──────┬────────────────────────────┬──────┘
            │                            │
            ▼                            ▼
┌───────────────────────┐    ┌───────────────────────┐
│   Kotlin Generator    │    │ TypeScript Generator  │
│   (./gradlew build)   │    │(pnpm run generate-api)│
└───────────┬───────────┘    └───────────┬───────────┘
            │                            │
            ▼                            ▼
┌───────────────────────┐    ┌───────────────────────┐
│ Handler interfaces    │    │  API client classes   │
│ Request/Response      │    │   TypeScript types    │
│ DTOs                  │    │                       │
└───────────┬───────────┘    └───────────┬───────────┘
            │                            │
            ▼                            ▼
┌───────────────────────┐    ┌───────────────────────┐
│   My handler impls    │    │   React components    │
│   (business logic)    │    │  (uses typed client)  │
└───────────────────────┘    └───────────────────────┘
One api.yml feeds two generators. Kotlin side gets server interfaces and DTOs. TypeScript side gets a typed API client. I just write business logic against the generated interfaces.
I'm using OpenAPI Generator—an open source project with generators for 50+ languages and frameworks. It can produce server stubs, client SDKs, or both. You pick the generator that matches your stack.
For scale: my spec is about 3,300 lines of YAML. 70 endpoints across 16 API domains (auth, flags, environments, targeting, etc.).
The Kotlin Side
I use the OpenAPI Generator Gradle plugin with the kotlin-server generator targeting Ktor:
tasks.named<GenerateTask>("openApiGenerate") {
    generatorName.set("kotlin-server")
    inputSpec.set("${rootDir}/api.yml")
    outputDir.set(layout.buildDirectory.dir("generated").get().asFile.absolutePath)
    templateDir.set("$rootDir/template") // Custom templates
    packageName.set("com.subflag.generated")
    modelNameSuffix.set("Dto") // Generated types get Dto suffix
    additionalProperties.set(mapOf(
        "library" to "ktor2",
    ))
}
The modelNameSuffix keeps generated DTOs separate from my domain models. CreateFlagRequestDto comes from the generator; Flag is my domain type.
The TypeScript Side
Frontend generation is simpler—just a shell script that runs before dev/build:
openapi-generator-cli generate \
  -i ../server/api.yml \
  -g typescript-axios \
  -o ./generated \
  --additional-properties=useSingleRequestParameter=true
This gives me typed API classes wrapping Axios. Full autocomplete, full type checking on every API call.
Wiring Up Authentication
The generated classes take a custom Axios instance. So I set up one instance with auth interceptors:
export const apiAxios = axios.create({
  baseURL: API_BASE_URL,
  withCredentials: true, // for refresh token cookie
});

// Attach JWT to every request
apiAxios.interceptors.request.use((config) => {
  const token = getAccessToken();
  if (token && config.headers) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

// Handle auth errors globally
apiAxios.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response?.status === 401) {
      // session expired, redirect to login
    }
    return Promise.reject(error);
  }
);
Then all generated API classes use this instance:
export const flagsApi = new FlagsApi(apiConfig, API_BASE_URL, apiAxios);
export const authApi = new AuthApi(apiConfig, API_BASE_URL, apiAxios);
export const projectsApi = new ProjectsApi(apiConfig, API_BASE_URL, apiAxios);
// ... 14 more
Auth lives in one place. The generated code just uses whatever Axios instance you hand it.
The Developer Workflow
Here's what adding an endpoint actually looks like. I'll use "create flag" as an example.
Step 1: Define the Endpoint in the Spec
Add the endpoint to api.yml:
/api/flags:
  post:
    tags:
      - Flags
    security:
      - auth-jwt: []
    parameters:
      - $ref: '#/components/parameters/OrganizationParamReq'
      - $ref: '#/components/parameters/ProjectParamReq'
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/CreateFlagRequest'
    responses:
      '201':
        description: Created - Flag created successfully
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Flag'
      '409':
        description: Conflict - Flag with this name already exists
    operationId: createFlag
The operationId becomes the function name in generated code. The tags field determines the interface name. For example, all endpoints tagged with Flags get defined in FlagsApiHandler, Auth endpoints in AuthApiHandler, etc. That's how 70 endpoints stay organized into 16 interfaces.
security: auth-jwt means the generated route gets wrapped in authentication.
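As a rough mental model of that grouping, the generator effectively buckets operations by tag and names one interface per bucket. A toy sketch of the idea, not the generator's actual internals — `Operation` and the naming scheme here are illustrative:

```typescript
// Toy model of tag-based grouping: each OpenAPI tag becomes one handler
// interface holding that tag's operationIds.
interface Operation {
  operationId: string;
  tag: string;
}

function interfacesByTag(ops: Operation[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const op of ops) {
    const interfaceName = `${op.tag}ApiHandler`;
    const members = groups.get(interfaceName) ?? [];
    members.push(op.operationId);
    groups.set(interfaceName, members);
  }
  return groups;
}
```

With 16 tags in the spec, this kind of grouping is what yields the 16 handler interfaces.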
Step 2: Run the Kotlin Generator
./gradlew build regenerates everything. The generator produces an interface:
interface FlagsApiHandler {
    val createFlag: suspend RoutingContext.() -> Unit
    val deleteFlag: suspend RoutingContext.() -> Unit
    val listFlags: suspend RoutingContext.() -> Unit
    // ... one property per operationId
}
And a routing function that wires it up:
fun Route.FlagsApi(handlers: FlagsApiHandler) {
    authenticate("auth-jwt") {
        route("/api/flags") {
            post(handlers.createFlag)
        }
    }
    // ... rest of the routes
}
I write /api/flags once in the spec. The generator handles the rest.
Step 3: Implement the Handler
I create an object that implements the generated interface:
object FlagsHandlers : FlagsApiHandler {
    // `createFlag` is the operationId from the spec
    override val createFlag: suspend RoutingContext.() -> Unit = {
        // Auth checks, get project context...

        // Receive request using GENERATED DTO type
        val request = call.receive<CreateFlagRequestDto>()

        // Validation and business logic...

        call.respond(HttpStatusCode.Created, FlagDto.fromDomain(flag))
    }
}
CreateFlagRequestDto and FlagDto come from the generator. Mistype a field? Compiler catches it. Spec changes? DTOs change, and my code won't compile until I fix it.
Step 4: Wire It Up
In Routing.kt, I add one line:
routing {
    FlagsApi(FlagsHandlers) // That's it
}
All 16 of my APIs are wired up the same way—each a single line.
Step 5: Use the TypeScript Client
On the frontend, pnpm run generate-api regenerates the client. In React:
import { FlagsApi } from "@generated";
const flagsApi = new FlagsApi(apiConfig, API_BASE_URL, apiAxios);
Then use it:
// The operationId `createFlag` becomes a method
const response = await flagsApi.createFlag({
organization: organizationName,
project: projectName,
createFlagRequest: {
name: flagName,
valueType: "BOOLEAN",
defaultValue: false,
}
});
// response.data is typed as Flag
const newFlag = response.data;
Add a required field to the spec? TypeScript errors until I provide it.
The Feedback Loop
The workflow:
- Edit api.yml
- ./gradlew build → Kotlin interfaces update; the compiler tells me what to implement
- pnpm run generate-api → TypeScript types update; TypeScript tells me what to fix in the frontend
No manual syncing. No "did I remember to update the types?" The spec drives everything.
Custom Kotlin Templates: Making It Actually Work
Here's the thing: the default OpenAPI Generator templates are built for the common case. My case (Ktor 2 with kotlinx.serialization) wasn't common enough.
The generator uses Mustache templates to produce code. You can override any template by dropping a modified version in a template/ directory. I started with the ktor2 templates and modified them.
The Handler Interface Pattern
This is the big one. Default templates generate inline route handlers—your business logic lives inside the generated code. Problem: you can't edit generated code. It gets overwritten on the next regeneration.
Default template approach (simplified):
This is what the generated route looked like before my changes:
fun Route.FlagsApi() {
    route("/api/flags") {
        post {
            // Your business logic goes HERE, in generated code
            val request = call.receive<CreateFlagRequest>()
            // ...
        }
    }
}
I needed the generated code to define an interface and delegate to my implementation. So I can write business logic in my own files that don't get overwritten.
My template approach:
This is what the generated route looks like after my changes:
// Define an interface for handlers. My code implements this.
interface FlagsApiHandler {
    val createFlag: suspend RoutingContext.() -> Unit
}

// Call this from my routing setup
fun Route.FlagsApi(handlers: FlagsApiHandler) {
    route("/api/flags") {
        post(handlers.createFlag) // Delegates to MY code
    }
}
The template that generates this (api.mustache):
interface {{classname}}Handler {
{{#operations}}
{{#operation}}
    val {{operationId}}: suspend RoutingContext.() -> Unit
{{/operation}}
{{/operations}}
}

fun Route.{{classname}}(handlers: {{classname}}Handler) {
{{#operations}}
{{#operation}}
    {{#hasAuthMethods}}
    authenticate("{{{name}}}") {
    {{/hasAuthMethods}}
    route("{{path}}") {
        {{#lambda.lowercase}}{{httpMethod}}{{/lambda.lowercase}}(handlers.{{operationId}})
    }
    {{#hasAuthMethods}}
    }
    {{/hasAuthMethods}}
{{/operation}}
{{/operations}}
}
Mustache loops over operations and operation (provided by the generator). I template out the interface properties and route wiring. {{#hasAuthMethods}} wraps routes in authenticate() when the spec has security requirements.
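If Mustache's section expansion is unfamiliar, here's a toy TypeScript renderer for just the variable-substitution part — enough to show how one template line fans out across operations. This is an illustration, not the real Mustache engine:

```typescript
// Toy Mustache-style section expansion: repeat a template fragment once per
// item, substituting {{name}} variables. Not the real Mustache implementation.
type Ctx = Record<string, unknown>;

function renderLoop(template: string, items: Ctx[]): string {
  return items
    .map((item) =>
      template.replace(/\{\{(\w+)\}\}/g, (_m, key: string) => String(item[key] ?? ""))
    )
    .join("");
}

// One line of api.mustache, expanded over two operations
const line = "    val {{operationId}}: suspend RoutingContext.() -> Unit\n";
const ops = [{ operationId: "createFlag" }, { operationId: "deleteFlag" }];
const body = renderLoop(line, ops);
```

Each `{{#operation}}…{{/operation}}` section in the real template works the same way: the block between the tags is rendered once per operation in the generator-supplied context.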
The @Serializable Fix
Ktor 2 uses kotlinx.serialization. Default templates assume Gson or Jackson. Generated DTOs wouldn't serialize.
Fix in data_class.mustache:
import kotlinx.serialization.Serializable

@Serializable
{{#hasVars}}data {{/hasVars}}class {{classname}}(
    // ... fields
)
That's it—unconditionally add the annotation. Every generated DTO now serializes correctly.
Vendor Extensions for Edge Cases
Some fields need special handling that the spec can't express natively. OpenAPI supports vendor extensions (custom properties prefixed with x-) that templates can read.
My feature flag values can be any type: boolean, string, number, or object. In OpenAPI, that's type: object, which generates kotlin.Any. But kotlinx.serialization can't handle Any without an explicit @Contextual annotation.
In the spec:
value:
  type: object
  description: Flag value (can be boolean, string, number, or object)
  x-is-any-type: true # My custom extension
In the template (data_class_req_var.mustache):
{{#vendorExtensions.x-is-any-type}}@kotlinx.serialization.Contextual {{/vendorExtensions.x-is-any-type}}{{>modelMutable}} {{{name}}}: {{{dataType}}}
Now fields marked with x-is-any-type: true get the annotation they need.
Post-Generation Patching
Some fixes can't be done in templates at all. Mustache is intentionally logic-less—it can output variables and loop over collections, but it can't do string manipulation or conditional type replacement.
For example, Map<String, Any> doesn't serialize with kotlinx.serialization. I need to replace it with JsonObject. But the template just outputs {{{dataType}}}—whatever the generator provides. There's no way to say "if dataType contains Map, output JsonObject instead."
The solution: a Gradle task that patches generated files after generation but before compilation:
tasks.register("patchGeneratedDtos") {
    dependsOn("openApiGenerate")
    doLast {
        generatedDir.listFiles { _, name -> name.endsWith("Dto.kt") }?.forEach { file ->
            var content = file.readText()

            // OffsetDateTime needs @Contextual for custom serializer
            content = content.replace(
                Regex("""(?<!@Contextual )(val \w+: java\.time\.OffsetDateTime)"""),
                "@kotlinx.serialization.Contextual $1"
            )

            // Map<String, Any>? can't serialize—replace with JsonObject?
            content = content.replace(
                Regex("""(val \w+): kotlin\.collections\.Map<kotlin\.String, kotlin\.Any>\?"""),
                "$1: kotlinx.serialization.json.JsonObject?"
            )

            file.writeText(content)
        }
    }
}

tasks.named("compileKotlin") {
    dependsOn("patchGeneratedDtos")
}
This runs after generation but before compilation. It's not elegant, but it works reliably.
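To sanity-check those regexes outside Gradle, the same two replacements can be ported to a standalone function — here in TypeScript, with sample input lines that are illustrative rather than real generator output:

```typescript
// The Gradle task's two replacements as a standalone function, so the
// patterns can be exercised in isolation.
function patchDtoSource(content: string): string {
  return content
    // Add @Contextual to OffsetDateTime fields (skip already-annotated ones)
    .replace(
      /(?<!@Contextual )(val \w+: java\.time\.OffsetDateTime)/g,
      "@kotlinx.serialization.Contextual $1"
    )
    // Swap nullable Map<String, Any> for JsonObject
    .replace(
      /(val \w+): kotlin\.collections\.Map<kotlin\.String, kotlin\.Any>\?/g,
      "$1: kotlinx.serialization.json.JsonObject?"
    );
}
```

A date field gains the annotation, a nullable map field changes type, and lines that already carry `@Contextual` pass through untouched.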
Could I fix this upstream in OpenAPI Generator? It's open source. But it's a big codebase, the fix would need to handle all Kotlin serialization options, and I'd be signing up to maintain it. For two regex replacements, a post-processing script is fine.
Keeping It All in Sync
This only works if both generators actually run. I automated it so I can't forget.
Frontend generator runs on pnpm start and pnpm build via pre-scripts:
{
  "scripts": {
    "prestart": "pnpm run generate-api",
    "prebuild": "pnpm run generate-api",
    "generate-api": "./scripts/generate-api.sh"
  }
}
Update api.yml and forget to regenerate? Starting the dev server catches it. TypeScript flags any breaking changes.
Kotlin side runs generation as part of ./gradlew build, so it's always current.
On Teams
PRs become about the spec, not the implementation. You're reviewing api.yml changes—"is this the right shape?"—and once that's agreed on, the implementation is just implementation.
Merge conflicts in YAML will happen. Thousands of lines, multiple people. You'd want conventions around who owns what, or split the spec into multiple files ($ref supports this).
I don't commit generated code—it regenerates on every build. That means the build itself enforces consistency. Change the spec, the generated interfaces change, your code won't compile until it matches. If you do commit generated code, you'd want CI to regenerate and fail on any diff.
When This Pattern Might Not Fit
Spec-first has an upfront cost: learning Mustache templates, configuring generators, debugging serialization issues. Once it's working, the day-to-day is actually faster—less boilerplate than writing everything twice.
But it might not be worth the setup if:
- Same language everywhere: TypeScript on both ends? Tools like tRPC give you type safety without the YAML.
- Tiny project: Five endpoints, one developer? The generator setup might take longer than just writing the code.
- Team won't adopt it: The benefits require everyone to follow the workflow. If people bypass the spec, you're back to drift.
It pays off most when you have multiple consumers, different languages, or a team that benefits from the spec as documentation.
What I Got Out of It
After 70 endpoints:
Zero manual route definitions. Every route comes from the spec. I write business logic; the generator handles wiring.
Compile-time safety across languages. Kotlin handler doesn't match the spec? Won't compile. TypeScript client uses a nonexistent field? Won't compile. Drift is impossible.
Documentation that can't rot. The spec is the documentation. Not a separate artifact that falls out of sync—it's the source that generates the code.
Onboarding is easier. New devs read the spec to understand the API. No reverse-engineering handler code scattered across files.
LLMs work better. I point AI assistants at api.yml and they immediately understand every endpoint, request shape, and response type. No ambiguity, no hunting through files. The spec is a perfect context document—structured, complete, always current. Single source of truth works for humans and machines.
The upfront investment was real—custom templates, post-processing patches, learning generator quirks. But that was one-time. Now adding an endpoint is: update the YAML, implement the handler, use the client. The spec keeps everything honest.
If you're building a multi-language stack and tired of keeping types in sync manually, worth a look.
What's Next
Things I haven't done yet but probably should:
Request validation. The spec already defines constraints—required fields, string patterns, min/max values—but I'm not using them. I still validate manually in handlers. There are libraries that can validate requests against the spec automatically. I should probably use one.
Split the spec into multiple files. 3,300 lines in one file is a lot. OpenAPI lets you $ref external files, so I could break it up by domain: auth.yml, flags.yml, targeting.yml. Just haven't gotten around to it.
Generated API docs. Swagger UI, Redoc, that kind of thing. Once you have a spec, these are basically free. I just haven't set it up.
Spec-driven testing. I'm not using the spec for testing yet, but I'd like to. Prism can mock the API from the spec—run frontend tests without the real backend. Contract testing can verify the server actually matches the spec. Could generate test fixtures from example responses. Lots of options I haven't explored.
Better error schemas. My errors are just mapOf("error" to message). Kind of sloppy. Proper error schemas in the spec would make the frontend error handling cleaner.
What Do You Think?
Is this overkill? Writing YAML before code, customizing Mustache templates, patching generated files—it's a lot of machinery. Maybe you look at this and think "I'd rather just write the types twice."
Fair. It's not for every project.
But if you've felt the pain of API drift, or you're tired of keeping frontend types in sync with backend changes, maybe it's worth the setup. I'd be curious whether anyone else is doing something similar—or if you've tried it and decided it wasn't worth it.
Would you use this? Let me know.