I hate writing client/server code. Why? Look at this example:
```typescript
export interface User {
  name: string
}

export async function greet(user: User) {
  await doSomeStuff(user)
  return `Hello ${user.name}`
}
```
And this client code using it:
```typescript
const jack: User = { name: 'Jack' }
greet(jack).then(console.log)
```
If my `greet()` function is executed in the client environment, then I just need to import it and I'm done. Everything works perfectly, I get proper type-checking, etc. But if it has to be executed in the server environment, then I need to add this boilerplate network layer to the server to make it work:
```typescript
import express from 'express'
import cors from 'cors'
import { greet } from './my-func'

const app = express()
app.use(cors())

app.post('/greet', async (req, res) => {
  const user = JSON.parse(req.query.user)
  const response = await greet(user)
  res.status(200).send(response)
})

app.listen(4000)
```
And this boilerplate to the client:
```typescript
export interface User {
  name: string
}

export async function greet(user: User) {
  const stringified = JSON.stringify(user)
  const encoded = encodeURIComponent(stringified)
  const response = await fetch(
    `https://my-server:4000/greet?user=${encoded}`,
    { method: 'POST' }
  )
  return await response.text()
}
```
The problem is not just that this is a lot of boilerplate, but also:
- It is boilerplate that introduces (and depends on) tons of arbitrary decisions. The HTTP method is one example (is it / was it / should it be POST, GET, or PUT?); the URL is another, as is the place where the parameters go (the body, the query parameters, the URL itself, etc.).
- I've lost any meaningful type-checking. I am now maintaining two versions of the `User` interface and two definitions of the `greet()` function that I need to keep in sync manually.
Sharing Types
If we look at the main server code (where `greet()` and `User` are defined) and the frontend boilerplate code, we can see that these two files have identical types. The body of `greet()` differs between them, but all type declarations are exactly the same (TypeScript would generate the same `.d.ts` file for both):
```typescript
export interface User {
  name: string
}

export declare function greet(user: User): Promise<string>
```
What if we could share this type definition between server and client? This way, we would have a single source of truth for our types, while the client would get seamless type-checking.
How could we share type definitions? Well, people coding in TypeScript need to use lots of pure JavaScript libraries without losing type checking, so TypeScript allows adding independent type definitions alongside JavaScript code. This means we can have different versions of our functions (e.g. `greet()`) share their type definitions, which is exactly what we need, since our functions, though identical in type, need to behave differently on the server and on the client.
This would mean that we would need to write the frontend network layer code in JavaScript, then extract type definition files from the backend code and set them alongside each other. It would resolve the type-checking issue, but introduce the problem of manually maintaining JavaScript code that needs to be kept in sync with those type definitions.
Auto-Generating Boilerplates
Well, what if we could auto-generate the JavaScript code of the frontend boilerplate as well? Written in pure JavaScript, this boilerplate would look like this:
```javascript
async function greet(user) {
  const stringified = JSON.stringify(user)
  const encoded = encodeURIComponent(stringified)
  const response = await fetch(
    `https://my-server:4000/greet?user=${encoded}`,
    { method: 'POST' }
  )
  return await response.text()
}

module.exports = { greet }
```
To write this code, we would need to know the following (and nothing more):
- The name of the function (`greet()`)
- The URL of the corresponding endpoint
- The HTTP method of the corresponding endpoint
- Where parameters should be injected (request body, headers, URL, query parameters, etc.)
Note that the last three are exactly the problematic arbitrary choices we encountered in problem #1. Since the choices are (mostly) arbitrary, we could simply decide on them based on the only non-arbitrary parameter here, i.e. the function name. For example, we could follow this convention:
- If the function name is `getX()`:
  - the URL would be `/x`
  - the method would be GET
  - parameters would go in the query params
- If the function name is `updateX()`:
  - the URL would be `/x`
  - the method would be PUT
  - parameters would go in the request body
- If the function name is `createX()`:
  - the URL would be `/x`
  - the method would be POST
  - parameters would go in the request body
- If the function name is `x()`:
  - the URL would be `/x`
  - the method would be POST
  - parameters would go in the request body
This means that knowing only the names of the functions and assuming they follow this convention, we could fully auto-generate the client-side boilerplate.
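The convention above can be sketched as a small pure function. This is only an illustration of the idea, not TyFON's actual implementation; the `routeFor` name and the returned shape are mine:

```typescript
interface Route {
  url: string
  method: 'GET' | 'POST' | 'PUT'
  params: 'query' | 'body'
}

// Map a function name to its endpoint, following the convention above.
function routeFor(fnName: string): Route {
  // Match a "get"/"update"/"create" prefix followed by a capitalized rest.
  const match = /^(get|update|create)([A-Z].*)$/.exec(fnName)
  if (match) {
    const [, prefix, rest] = match
    const url = '/' + rest[0].toLowerCase() + rest.slice(1)
    if (prefix === 'get') return { url, method: 'GET', params: 'query' }
    if (prefix === 'update') return { url, method: 'PUT', params: 'body' }
    return { url, method: 'POST', params: 'body' }
  }
  // Any other name: POST to /name, parameters in the body.
  return { url: '/' + fnName, method: 'POST', params: 'body' }
}
```

For instance, `routeFor('getUser')` yields a GET to `/user` with query parameters, while `routeFor('greet')` falls through to a POST to `/greet` with body parameters.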
The backend boilerplate would also need to strictly follow this convention. Fortunately, that code can also be fully auto-generated knowing the names of the functions and following the same convention:
```typescript
import express from 'express'
import cors from 'cors'
import { greet } from './my-func'

const app = express()
app.use(cors())
app.use(express.json()) // parse JSON request bodies

app.post(    // --> from the convention
  '/greet',  // --> from the convention
  async (req, res) => {
    const user = req.body.user          // --> from the convention
    const response = await greet(user)  // --> from the function name
    res.status(200).send(response)
  }
)

app.listen(4000)
```
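The only place the convention matters at runtime on the server is where the arguments are read from. A framework-free sketch of that decision, assuming query values arrive as strings and the body is already-parsed JSON (the helper and type names here are illustrative, not TyFON's):

```typescript
// Minimal stand-in for the parts of a request this decision needs.
interface IncomingRequest {
  query: Record<string, string>
  body: Record<string, unknown>
}

// Per the convention: get* functions read arguments from the query string
// (which only holds strings, hence JSON.parse); everything else reads from
// the parsed request body.
function extractArg(fnName: string, req: IncomingRequest, key: string): unknown {
  if (/^get[A-Z]/.test(fnName)) {
    return JSON.parse(req.query[key])
  }
  return req.body[key]
}
```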
Putting Everything Together
Let's recap a bit:
- Typical client/server code is problematic because:
  - It has lots of boilerplate code with arbitrary decisions in it
  - It takes type-checking away
- To fix that:
  - We can share type definitions
  - We can auto-generate the client network layer boilerplate, knowing the function names and following some convention
  - We can auto-generate the server network layer boilerplate, knowing the function names and following some convention
All of these fixes rely on knowing the names of the server functions we want to use on the client side. To pin that down, let's add another rule: we will export all such functions from `index.ts` in our server-side code. Our client/server code is then reduced to the following:
```typescript
// server/index.ts
export interface User {
  name: string
}

export async function greet(user: User) {
  await doSomeStuff(user)
  return `Hello ${user.name}`
}
```

```typescript
// client code
import { User, greet } from '<auto-generated-code>'

const jack: User = { name: 'Jack' }
greet(jack).then(console.log)
```
Will this really work? Well, I have actually built a CLI tool that does exactly what I've described here to find out. You can try it out for yourself:
- Install the CLI tool:

```shell
npm i -g tyfon
```
- Create a folder for the server-side code:

```shell
mkdir test-server
cd test-server
npm init
```
- Add the server code:

```typescript
// test-server/index.ts
export interface User {
  name: string
}

export async function greet(user: User) {
  return `Hello ${user.name}`
}
```
- Run the server:

```shell
tyfon serve
```
- You can already try out your server:

```shell
curl -d '{"0":{"name":"Jack"}}' -H "Content-Type: application/json" -X POST localhost:8000/greet
```
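The request body in that curl command hints at the wire format: arguments are keyed by their position. A hedged sketch of that serialization step, as the body `{"0":{"name":"Jack"}}` suggests (the helper name is mine, not TyFON's):

```typescript
// Hypothetical sketch: positional arguments become JSON keys "0", "1", ...
function serializeArgs(args: unknown[]): string {
  const body: Record<string, unknown> = {}
  args.forEach((arg, index) => {
    body[String(index)] = arg
  })
  return JSON.stringify(body)
}
```

Calling `serializeArgs([{ name: 'Jack' }])` produces exactly the body sent by the curl command above.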
- Create a folder for the client-side code (in another terminal; keep the server running):

```shell
mkdir test-client
cd test-client
npm init
npm i -g ts-node # --> if you don't have ts-node
```
- Auto-generate the network boilerplate:

```shell
tyfon i localhost:8000
```
- Add the client code:

```typescript
// test-client/index.ts
import { User, greet } from '@api/test-server'

const jack: User = { name: 'Jack' }
greet(jack).then(console.log)
```
- Try it out:

```shell
ts-node .
```
Observations
Although the TyFON CLI tool is pretty young and the concept of using type definitions as an API specification is new (at least to me), I've been using it in real-life projects for some time now, and, like everything else, there are pros and cons to this approach:
Pros
- Cleaner Code: I write simple functions on the server, and I call them on the client-side. The network layer (and all its boilerplate and hassle) completely vanishes.
- Strong Type Checking: When I make changes to server code that require changes in the client, my IDE tells me; and when I want to call some server function, I don't need to go check a list of API URLs, since the IDE suggests all the functions at my disposal.
- Single Source of Truth: All my data types are now defined in the server and seamlessly used in the client.
Cons / Oddities
- No Network Layer Access: The downside of completely masking the network layer is that you won't have access to the network layer. This means that right now I cannot put stuff in request headers or handle file uploads, though I've got some ideas for tackling that issue.
- No Middleware: I was used to Express middlewares that worked in tandem with Angular interceptors to, for example, make authentication happen behind the scenes. Without any access to the network layer, all of that is gone as well, which means I have to explicitly pass auth tokens around now.
- New Security Concepts: I now need to consider whether a server function is only to be used internally by other functions or whether it can safely be exposed over the network as well.
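Passing auth tokens explicitly, as the middleware point above describes, might look like the following sketch. Everything here is illustrative (the `verifyToken` helper, the hard-coded token, and the function names are assumptions, not TyFON's API):

```typescript
export interface Session {
  userId: string
}

// Assumed helper: a real application would validate a JWT or session token
// here instead of comparing against a hard-coded value.
function verifyToken(token: string): Session {
  if (token !== 'valid-token') throw new Error('unauthorized')
  return { userId: 'jack' }
}

// Without middleware or interceptors, the token travels as an ordinary
// argument, and each network-exposed function checks it itself.
export async function getProfile(token: string) {
  const session = verifyToken(token) // explicit auth check on every call
  return `profile of ${session.userId}`
}
```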
All in all, I am pretty happy with the early results of this approach. Of course, as with anything new, there are downsides and things I need to get used to, but the increase in development speed (and my confidence in the generated code) is so great that I will happily make that exchange for all possible future projects.
Top comments (2)
Hi, thanks for raising this point. I personally solved the issue by writing my server controllers in TypeScript with tsoa annotations that generate a Swagger spec. From it I can generate the UI stub, so the server and client stay in sync.

Yes, I've also seen tsoa and it seems pretty close in concept, though it still bears some overhead compared to TyFON (and, of course, in exchange it is more flexible and versatile).