Darian Vance

Posted on • Originally published at wp.me

Solved: Who’s hiring TypeScript developers December

🚀 Executive Summary

TL;DR: Node.js only executes JavaScript, so TypeScript applications must be transpiled before they can run in production, and mishandling that step is a common cause of deployment failures. The industry-standard multi-stage Docker build compiles TypeScript in a disposable build stage and ships only the resulting JavaScript and production dependencies, producing reproducible, secure, and optimized images.

🎯 Key Takeaways

  • Node.js environments execute JavaScript, necessitating a transpilation step for TypeScript applications before deployment.
  • Multi-stage Docker builds are the recommended approach for deploying TypeScript to production, ensuring reproducibility, security, and minimal image size by separating build and runtime environments.
  • Compiling TypeScript locally and copying the dist folder into a Docker image leads to irreproducible builds and should be avoided for shared development or production environments.
  • Using tools like ts-node or tsx for on-the-fly transpilation in production is a dangerous practice, causing significant performance overhead, larger image sizes, and potential runtime type errors.
  • Proper use of a .dockerignore file is critical in multi-stage builds to prevent local node_modules and dist folders from being copied into the build context, ensuring clean dependency installation within the container.

Struggling with complex TypeScript build steps in your Dockerfiles? Let’s cut through the noise and look at three real-world methods for getting your TS app from your IDE to production, from the quick and dirty to the rock-solid.

Forget “Who’s Hiring”—Let’s Talk “How’s it Running?” Taming TypeScript in Production

I remember a 2 AM PagerDuty alert like it was yesterday. The prod-api-gateway-01 deployment was failing, and the on-call junior dev was panicking. The error? tsc: command not found. A simple, well-intentioned update to tsconfig.json had been pushed, but the CI/CD pipeline, which we thought was bulletproof, choked on it. We rolled back, but the incident stuck with me. We spend so much time writing beautiful, type-safe code, but we often treat the process of actually *running* it in production as an afterthought. It’s not. That translation from TypeScript to runnable JavaScript is where robust systems are made or broken.

The “Why”: You Write TS, But Node Runs JS

Let’s get one thing straight: Node.js does not run TypeScript. V8, the engine under the hood, understands JavaScript and only JavaScript. TypeScript is a developer-time tool: its compiler, tsc, reads your .ts files and emits plain .js files. The core problem everyone faces is deciding when and where this translation happens. Do you do it on your machine? In your CI pipeline? Inside the container itself? Each choice has massive implications for build speed, security, and stability. Getting this wrong is how you end up with 2 AM alerts.
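To make that translation concrete, here’s a minimal sketch (file names and compiler settings are illustrative): tsc simply erases the type annotations and emits plain JavaScript.

```typescript
// greet.ts -- the TypeScript you write
function greet(name: string): string {
  return `Hello, ${name}`;
}

// After running `npx tsc`, the emitted greet.js is just JavaScript.
// The annotations are erased; nothing type-related survives to runtime:
//
//   function greet(name) {
//       return `Hello, ${name}`;
//   }

console.log(greet("production")); // prints "Hello, production"
```

This is why the build step matters: the types exist only at compile time, and production only ever sees the JavaScript on the right.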

So let’s walk through the options, from the one that’ll get you fired to the one that’ll get you promoted.

Solution 1: The ‘It Works On My Machine’ Band-Aid

This is the first thing everyone tries. You’re in a hurry, you just want to see it run. The logic is simple: compile the code on your own laptop, and then just copy the finished JavaScript into the Docker image.
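The build step here is just an npm script wired to the compiler. Assuming a conventional setup (script names and output paths are illustrative, not prescriptive):

```json
{
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "start": "node dist/main.js"
  }
}
```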

You’d run npm run build on your terminal, which creates a /dist directory. Then your Dockerfile looks deceptively simple:

```dockerfile
# Dockerfile - THE BAD WAY
FROM node:18-alpine

WORKDIR /app

# Copy the pre-built application code
COPY ./dist ./dist
# Copy production dependency manifests
COPY package.json package-lock.json ./

# Install ONLY production dependencies
# (--omit=dev replaces the deprecated --only=production flag on modern npm)
RUN npm ci --omit=dev

CMD ["node", "dist/main.js"]
```

Why it’s a trap: This is fine for a five-minute test, but it’s a nightmare for a team. The build isn’t reproducible—it depends entirely on the environment of the machine that ran npm run build. Did your co-worker have a different Node version? A slightly different dependency? You’ll be debugging phantom errors for days. It also bloats your Docker build context, sending your entire source tree over to the daemon when it only needs the dist folder.

Darian’s Take: I call this the “hope-and-pray” deployment. You’re hoping that your local environment perfectly matches what every other developer has and what the CI server expects. Hope is not a strategy. Avoid this for anything that touches a shared branch.

Solution 2: The Professional’s Choice – The Multi-Stage Docker Build

This is the industry standard for a reason. It’s clean, reproducible, and secure. The idea is to use one Docker “stage” as a temporary, disposable build environment, and a second, final stage that contains *only* the lean, optimized production code.

Here’s what a proper multi-stage Dockerfile looks like:

```dockerfile
# STAGE 1: The Builder
# We use a full Node image here because we need the TypeScript compiler and dev dependencies.
FROM node:18 AS builder

WORKDIR /app

# Copy dependency manifests
COPY package.json package-lock.json ./

# Install all dependencies (including dev)
RUN npm ci

# Copy the rest of your app's source code
COPY . .

# Build the TypeScript project into JavaScript
RUN npm run build

# ---

# STAGE 2: The Production Runner
# We use a slim 'alpine' image for a tiny footprint.
FROM node:18-alpine

WORKDIR /app

# Copy ONLY the dependency manifests from the builder
COPY --from=builder /app/package.json /app/package-lock.json ./

# Install ONLY production dependencies
RUN npm ci --omit=dev

# Copy the compiled JavaScript output from the builder
COPY --from=builder /app/dist ./dist

# Run the app
CMD ["node", "dist/main.js"]
```

Why it’s the right way: The first stage has the full TypeScript compiler and all your devDependencies, and does the heavy lifting. The second stage starts fresh from a tiny base image, installs only the production dependencies, and copies just the compiled dist folder from the builder stage. The final image is small, secure (no build tools included!), and the build is 100% self-contained and reproducible on any machine.

Pro Tip: Don’t forget your .dockerignore file! You need to explicitly ignore your local node_modules and dist folders. Otherwise, you’ll copy them into the build context, defeating the purpose of a clean install within the container.
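As a sketch, a minimal .dockerignore for this layout (adjust to your own repo) would be:

```
node_modules
dist
.git
npm-debug.log
.env
```

With this in place, COPY . . in the builder stage sends only your source tree and manifests to the daemon, and npm ci inside the container does a genuinely clean install.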

Solution 3: The ‘Break Glass In Case of Dev’ Option – On-the-Fly Transpiling

Then there’s the “just run the TypeScript directly” approach using tools like ts-node or tsx. These tools transpile your TS code in memory at runtime. It’s incredibly convenient for local development.

Your package.json might have a script like "start:dev": "tsx watch src/index.ts". Some folks get tempted to use this in production:

```dockerfile
# Dockerfile - THE DANGEROUS WAY
FROM node:18

WORKDIR /app

COPY package.json package-lock.json ./

# Installs ALL dependencies, including the heavy tsx/ts-node
RUN npm ci

COPY . .

# Run the app via the transpiler
CMD ["npx", "tsx", "src/main.ts"]
```

Why it’s a ‘Nuclear’ Option for Production: This is a performance disaster. Your application’s startup time will be significantly slower because it has to transpile everything before it can serve a single request. Worse, tsx (and ts-node in transpile-only mode) skips type-checking entirely, so mistakes that a tsc build step would have flagged surface as runtime bugs when a specific code path is hit. You’re also shipping your entire toolchain and source code into production, creating a massive, insecure image. It’s the equivalent of driving a car while the mechanic is still working on the engine.
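A hypothetical illustration of that failure mode: because the transpiler strips types without ever checking them against runtime data, a type that lies (the config shape here is invented for the example) sails straight into production misbehavior instead of being questioned during a build.

```typescript
// tsx erases the types below without verifying anything. The cast claims
// `port` is a number, but the parsed JSON actually holds a string.
const config = JSON.parse('{"port": "8080"}') as { port: number };

// TypeScript believes this is number arithmetic; at runtime it's
// string concatenation, so we get "80801" instead of 8081.
const nextPort = config.port + 1;
console.log(typeof nextPort, nextPort); // prints: string 80801
```

With a compiled pipeline, this kind of mismatch at least gets a chance to be caught by tests against the real build output; with on-the-fly transpilation there is no build step at all standing between the lie and your users.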

WARNING: Never, ever do this in production. I’ve personally cleaned up the mess after a team deployed an API using ts-node. A memory leak in the transpiler under heavy load brought down their entire service. Use these tools for what they’re great at: local development. Keep them out of your production Dockerfile.

So, Which One Should You Use?

Picking the right strategy isn’t about preference; it’s about engineering discipline. While a quick-and-dirty method might seem to save you five minutes today, it’ll cost you hours of debugging under pressure tomorrow. Here’s a quick cheat sheet:

| Method | Prod Ready? | Image Size | Build Speed | Complexity |
| --- | --- | --- | --- | --- |
| 1. Local Build & Copy | No | Small | Fast (locally) | Very Low |
| 2. Multi-Stage Docker Build | Yes, absolutely | Smallest | Slower (but correct) | Medium |
| 3. On-the-Fly Transpiling | DANGEROUSLY NO | Largest | Fastest (deceptive) | Low |

Invest the time to learn and implement multi-stage builds. Your future self—and your on-call team—will thank you for it.


👉 Read the original article on TechResolve.blog


Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance