I often notice how careless some developers are about the security of their applications. They only begin to think about protection methods when they have to rewrite a large portion of the application. Today, we'll cover classic and other attack methods, examine where the compiler falls short, and build modern protection based on best practices and specific code examples.
This article specifically provides simplified attack methods and vulnerability examples to make it easier to understand the mechanics.
Introduction
TypeScript has undoubtedly become one of the leaders in web development. It's used to build powerful React applications and complex microservices on Nest or Fastify. Developers often value type safety, but this isn't classic security, as a string in TypeScript is still just a string, potentially vulnerable to SQL injection and XSS vulnerabilities. The compiler doesn't check business logic, doesn't filter input data, and doesn't detect that you've shared a JWT secret in a public repository.
I built this article around a simple principle: types are not a defense, but a tool of discipline. We'll examine attacks and defenses on two key platforms:
- Backend (Node.js, Express/Fastify/NestJS): injections, prototype pollution, unsafe deserialization, data leaks through errors.
- Frontend (React, Next.js, Angular): XSS, CSRF, prototype poisoning through dependencies, sensitive data leaks, SSR attacks.
In each section, I've provided real code examples, a simple explanation of the vulnerability, and mitigation methods. So, a fascinating journey into the world of application security awaits us.
Backend: When a request arrives before type checking
TypeScript on the server enforces contracts between layers, but the entry point, an HTTP request, is always raw data. Even if you use NestJS with decorators like @Body(), validation may be absent or incomplete.
Case 1: SQL injection via TypeORM (yes, it's possible)
Many people think that ORMs completely protect against injection attacks. But when a developer resorts to raw queries or tricky operators, TypeScript won't save them.
Vulnerable Code (Basic Case Study with Raw Data):
import { getConnection } from 'typeorm';
app.get('/users', async (req, res) => {
const { sortColumn, order } = req.query;
// We expect sortColumn = "name", order = "ASC"
// But call raw query
const users = await getConnection().query(
`SELECT * FROM users ORDER BY ${sortColumn} ${order}`
);
res.json(users);
});
Here, the parameters are directly substituted into SQL. The attacker sends:
GET /users?sortColumn=name&order=ASC; DROP TABLE users; --
TypeScript sees sortColumn: string, and everything looks fine from its perspective. But the relational database receives two queries.
Solution: Validate allowed values and use parameterized queries or an API that doesn't allow concatenation.
import { IsIn, IsString } from 'class-validator';
import { validateOrReject } from 'class-validator';
class UsersQueryDto {
@IsIn(['name', 'email', 'createdAt'])
sortColumn!: string;
@IsIn(['ASC', 'DESC'])
order!: 'ASC' | 'DESC';
}
app.get('/users', async (req, res) => {
const dto = new UsersQueryDto();
Object.assign(dto, req.query);
await validateOrReject(dto);
const users = await userRepository.find({
order: { [dto.sortColumn]: dto.order },
});
res.json(users);
});
This way, we guarantee that nothing but the expected columns will end up in the ORDER BY clause.
It would seem so... "Elijah, what are you talking about? We already use the Query Builder; these are obvious things!" But I've also seen solutions where a developer inserted partially raw queries. For example:
import { Raw } from 'typeorm';
app.get('/search', async (req, res) => {
const { q } = req.query;
const users = await userRepository.find({
where: {
name: Raw(alias => `${alias} LIKE '%${q}%'`)
}
});
res.json(users);
});
And it turns out that here the search string q is directly pasted into the SQL expression.
GET /search?q=%25'%3BDROP%20TABLE%20users%3B--
And if you still can't give up Raw SQL code inserts, the right solution would be to use parameterized placeholders (supported, for example, in TypeORM):
where: {
name: Raw(alias => `${alias} ILIKE :query`, { query: `%${q}%` })
}
Another similarly dangerous pattern is to build a query using createQueryBuilder, concatenating strings for conditions or sorting.
app.get('/users', async (req, res) => {
const { filter } = req.query; // filter = "admin'; DROP TABLE users; --"
const qb = userRepository.createQueryBuilder('user');
if (filter) {
qb.where(`user.role = '${filter}'`);
}
const users = await qb.getMany();
res.json(users);
});
String interpolation within .where() exposes the same injection opportunities as direct SQL. An attacker gains complete control over the query.
A safer alternative: use QueryBuilder parameters:
if (filter) {
qb.where('user.role = :role', { role: filter });
}
Key lesson: Any string concatenation when forming SQL is suspect, even if it is hidden behind ORM methods.
Case 2: NoSQL Injection in MongoDB with Mongoose
Even when using ODM, you can still get an injection if you pass objects directly from a query.
app.post('/login', async (req, res) => {
const { username, password } = req.body;
// req.body can contain: { username: { $ne: null }, password: { $ne: null } }
const user = await UserModel.findOne({ username, password }).exec();
if (user) {
res.json({ token: generateToken(user) });
} else {
res.status(401).send();
}
});
If the client sends JSON with MongoDB operators ($gt, $ne), the query will become { username: { $ne: null }, password: { $ne: null } } and return the first user it encounters.
Solution: explicit typing and normalization of input data using libraries like mongo-sanitize or manual validation:
function sanitizeInput(obj: Record<string, unknown>): Record<string, string> {
const clean: Record<string, string> = {};
for (const [key, value] of Object.entries(obj)) {
if (typeof value !== 'string') {
throw new Error('Invalid input type');
}
clean[key] = value;
}
return clean;
}
But it's better to use proven validators, such as Zod or class-validator, to prohibit objects with suspicious properties at the DTO level.
Raising the Bar. Case 3: Prototype Pollution
In Node.js, objects inherit from Object.prototype, and changing this prototype can have catastrophic consequences, ranging from logic changes to remote code execution.
An example of such code is a deep merge function:
// Dangerous: recursive merge without key filtering
function deepMerge(target: any, source: any) {
for (const key in source) {
if (typeof source[key] === 'object' && source[key] !== null) {
if (!target[key]) target[key] = {};
deepMerge(target[key], source[key]);
} else {
target[key] = source[key];
}
}
}
import fs from 'fs';
app.put('/settings', (req, res) => {
const userSettings = JSON.parse(fs.readFileSync('settings.json', 'utf-8'));
// Dangerous: merging untrusted input
deepMerge(userSettings, req.body);
fs.writeFileSync('settings.json', JSON.stringify(userSettings));
res.send('ok');
});
And if the request is:
{ "__proto__": { "isAdmin": true } }
After such a merge, any new object will have isAdmin === true. This could bypass authorization checks.
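To see the impact concretely, here's a self-contained sketch of that attack: the same naive merge as above, with the payload parsed from JSON exactly as an attacker-controlled request body would be.

```typescript
// Same dangerous merge as above, reduced to a runnable demo
function naiveMerge(target: any, source: any) {
  for (const key in source) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      if (!target[key]) target[key] = {};
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
}

// JSON.parse creates "__proto__" as an ordinary own property,
// so for...in picks it up and the merge walks into Object.prototype
const payload = JSON.parse('{ "__proto__": { "isAdmin": true } }');
naiveMerge({}, payload);

const victim: any = {}; // a brand-new, unrelated object
console.log(victim.isAdmin); // true — Object.prototype is polluted
```

Note that a literal `({ __proto__: ... })` in source code would not trigger this; it's precisely the JSON round-trip of untrusted input that makes `__proto__` an enumerable own key.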
Protection: Never use recursive merges without property checks. Modern versions of libraries like lodash.merge guard against pollution, but it's safer not to run them over user data at all. It's better to explicitly define the schema:
import { z } from 'zod';
const SettingsSchema = z.object({
theme: z.enum(['light', 'dark']),
notifications: z.boolean(),
});
app.put('/settings', (req, res) => {
const parsed = SettingsSchema.safeParse(req.body);
if (!parsed.success) {
return res.status(400).json({ errors: parsed.error });
}
// Work only with parsed.data
});
Zod will automatically discard all undeclared keys, including __proto__ and constructor.
Secure Integration of JWT and Sessions
JWT has become an industry standard, but its misuse often leads to token theft and privilege escalation.
Case 4: Lack of Algorithm Validation
Let's look at the vulnerable code:
import jwt from 'jsonwebtoken';
app.get('/profile', (req, res) => {
const token = req.headers.authorization?.split(' ')[1];
if (!token) return res.status(401).send();
const decoded = jwt.verify(token, config.publicKey);
// Attack: the attacker crafts a token with alg: "none", or signs it with HS256 using the public key as the HMAC secret.
});
If the accepted algorithms aren't pinned, an attacker can switch the token to alg: none, or to HS256 signed with the public key as the HMAC secret (verify will happily treat the RSA public key as a shared symmetric secret).
Solution: explicitly specify acceptable algorithms.
const decoded = jwt.verify(token, config.publicKey, {
algorithms: ['RS256'], // or ['ES256']
});
Additionally, never use jwt.decode() for verification. Only verify.
Case 5: Secrets in Code and Configurations
Accidentally committing a .env file with JWT_SECRET=super-secret to the repository is a classic example. TypeScript doesn't scan string contents. Use:
- process.env and tools like dotenv-vault.
- Configuration validation at startup, using Zod.
Configuration verification using Zod:
const envSchema = z.object({
JWT_SECRET: z.string().min(32),
DB_URL: z.string().url(),
});
const env = envSchema.parse(process.env);
If the variable is missing or incorrect, the application will crash on startup with a clear error.
Protecting against SSTI (Server-Side Template Injection) in template engines
If you outsource HTML rendering to the server (Nunjucks, EJS, Pug), careless passing of user input to the template can lead to code execution.
Example of vulnerability:
app.get('/hello', (req, res) => {
const name = req.query.name;
res.render('hello', { name });
});
// EJS Template: <h1>Hi <%= name %></h1>
Although <%= %> escapes HTML, some engines allow executable code to be injected through template options or internals (for example via { constructor: ... } tricks). The best defense is to never pass raw input to a template without context-aware processing and to avoid the advanced features of template engines (such as eval).
If you're using Next.js or React for SSR, a similar attack can occur via dangerouslySetInnerHTML:
function Profile({ bio }: { bio: string }) {
return <div dangerouslySetInnerHTML={{ __html: bio }} />;
}
Here, TypeScript assumes that bio = string, but the variable could contain XSS.
The obvious rule, even from the method's name: never use dangerouslySetInnerHTML with unvalidated user input, and if you must render user-supplied HTML, run it through DOMPurify first.
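When the content doesn't actually need to be HTML at all, escaping it and rendering plain text is even simpler than sanitizing. A minimal sketch of such an escaper (DOMPurify remains the right tool when a subset of HTML must be preserved):

```typescript
// Minimal HTML escaping for untrusted text; use a real sanitizer
// (e.g. DOMPurify) when some HTML markup must survive
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, '&amp;')   // must run first, before entities are introduced
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// &lt;img src=x onerror=alert(1)&gt;
```

React's normal `{text}` interpolation already escapes like this; the helper is only needed when you build markup strings yourself.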
Frontend: Browser Security
On the client, TypeScript gives a false sense of security. Let's look at the main attack vectors where types won't help.
Case 6: XSS via HTML injection
As shown above, passing unescaped text to innerHTML or the dangerouslySetInnerHTML JSX attribute is a direct route to XSS. But there are less obvious places.
Unsafe code in React:
function Comment({ text }: { text: string }) {
return (
<a href={`https://example.com/?q=${text}`}>
Search
</a>
);
}
// If text = "javascript:alert(1)"
The browser will execute JavaScript on click. TypeScript is unaware of the context of the string.
Protection: URL validation and use of encodeURIComponent. A Content Security Policy (CSP) with strict directives is also a good idea.
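A minimal sketch of such a check (safeHref is a hypothetical helper, and the fallback origin is an assumption): it rejects non-http(s) schemes before the value ever reaches an href.

```typescript
// Hypothetical helper: allow only http(s) URLs in user-supplied links
function safeHref(input: string): string {
  try {
    // resolve relative inputs against the site's own origin (assumed here)
    const url = new URL(input, 'https://example.com');
    return url.protocol === 'https:' || url.protocol === 'http:'
      ? url.toString()
      : '#'; // javascript:, data:, vbscript: etc. are neutralized
  } catch {
    return '#'; // unparseable input never becomes a link
  }
}

console.log(safeHref('javascript:alert(1)')); // '#'
console.log(safeHref('/search?q=cats'));      // 'https://example.com/search?q=cats'
```

An allowlist of schemes is the point: a blocklist of "bad" schemes is easy to bypass with whitespace or encoding tricks.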
Case 7: Sensitive Data Leaking into the Build
Environment variables (API keys, internal URLs) often leak into the client bundle because the developer used process.env.NEXT_PUBLIC_* or forgot about the server/client boundary. TypeScript doesn't distinguish between where the code will be executed.
Protection: Clearly separate environment variables. In Next.js, for example, only variables prefixed with NEXT_PUBLIC_ are accessible on the client. Everything else should only be read on the server (getServerSideProps / API Routes).
Case 8: CSRF with Mutations
If your cookies are passed automatically and your API accepts POST requests without additional validation, an attacker can trick the user into sending an unwanted request.
TypeScript won't automatically add a CSRF token. You need to implement either a synchronous token or a SameSite cookie and Origin/Referer validation.
An example of a simple check in Next.js API routes:
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
const allowedOrigins = ['https://myapp.com'];
export function middleware(req: NextRequest) {
const origin = req.headers.get('origin');
if (req.method !== 'GET' && (!origin || !allowedOrigins.includes(origin))) {
return new NextResponse(null, { status: 403 });
}
return NextResponse.next();
}
Dependencies and Supply Chain
TypeScript projects pull hundreds of packages. Each dependency can become an entry point. Typing doesn't protect against malicious code in postinstall scripts or obfuscated packages.
Specific Incident: Event-Stream
In 2018, the popular event-stream npm package was compromised: malicious code was added to it that stole cryptocurrency keys from another package. TypeScript was powerless here: the malware can be buried deep in dependencies and contain no types at all.
Protective Measures:
- Use npm audit, snyk, and socket.dev.
- Check package licenses and reputation.
- Minimize the number of dependencies.
- Add a check for known vulnerabilities to CI/CD.
Types as an Element of Security Infrastructure
Despite all of the above, TypeScript can significantly enhance security if used consciously:
- Typed DTOs and strict interfaces. Use not just "any" types, but precise types, enumerations, and discriminated unions. This eliminates many validation errors even at the coding stage.
- Branded types (nominal typing). For example, we can create a SafeHtml type whose values can only be produced by a sanitization function.
- Exhaustive switches and protection against incompleteness. This ensures that all possible states are handled (for example, when parsing authentication statuses).
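A sketch of the exhaustive-switch pattern for authentication statuses (the AuthStatus union and canAccess are illustrative names, not from any library):

```typescript
type AuthStatus = 'authenticated' | 'anonymous' | 'locked';

function canAccess(status: AuthStatus): boolean {
  switch (status) {
    case 'authenticated':
      return true;
    case 'anonymous':
    case 'locked':
      return false;
    default: {
      // Compile-time exhaustiveness: if a new status is added to the
      // union but not handled above, this assignment fails to type-check
      const _exhaustive: never = status;
      throw new Error(`Unhandled status: ${_exhaustive}`);
    }
  }
}

console.log(canAccess('authenticated')); // true
console.log(canAccess('locked'));        // false
```

The `never` assignment turns "forgot to handle the new 'mfa-pending' state" from a runtime authorization bug into a build failure.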
An example of a protected SafeHtml type:
import DOMPurify from 'dompurify';
type SafeHtml = string & { readonly __brand: unique symbol };
function sanitizeHtml(input: string): SafeHtml {
return DOMPurify.sanitize(input) as SafeHtml;
}
function render(html: SafeHtml) {
document.getElementById('app')!.innerHTML = html;
}
Level Up. Five subtle modern attacks on TypeScript applications
Now we'll move on to threats that rarely make it into basic guides, but are increasingly common in real-world projects. All examples focus on the TypeScript stack.
Dependency Confusion via Typed Packages
An attacker publishes a package with an internal name to the public npm, but with a higher version. TypeScript projects are particularly vulnerable due to the habit of using @types/* or corporate naming conventions.
Example: Your company uses an internal package @mycompany/auth, which is stored in a private registry. The attacker publishes @mycompany/auth to npm with version 99.0.0 and malicious code in the postinstall. If .npmrc doesn't specify a strict scope registry, npm install will pull in the public version.
// Code in the malicious package (index.d.ts and index.js)
export function login(login: string, password: string): boolean;
// In the JS implementation: process.env.JWT_SECRET is sent to the attacker's server
Security:
- Configure .npmrc to link the scope to a private registry.
- Use npm install --prefer-offline and block requests to the public registry for internal names at the network level.
- In the CI pipeline, check package integrity using npm audit --audit-level=high and compare hashes.
Timing attack on string comparisons (JWT, API keys)
A classic mistake: checking tokens or keys using ===. In Node.js, string comparisons are performed byte by byte and take varying amounts of time. An attacker can measure the response and guess the token character by character.
Example of vulnerable code:
const expectedApiKey = process.env.API_KEY!;
app.post('/webhook', (req, res) => {
const apiKey = req.headers['x-api-key'] as string;
if (apiKey !== expectedApiKey) { // Danger!
return res.status(403).send('Forbidden');
}
});
If the lengths are unequal, the comparison fails immediately, but if the first character is correct, it takes a little longer. By repeating the queries with different values, the key can be recovered.
Security: Use crypto.timingSafeEqual to compare secrets.
import { createHash, timingSafeEqual } from 'crypto';
function constantTimeCompare(a: string, b: string): boolean {
// Hash both values to a fixed length first: timingSafeEqual throws
// on buffers of unequal length, and hashing also avoids leaking
// the secret's length through an early exit or exception
const hashA = createHash('sha256').update(a).digest();
const hashB = createHash('sha256').update(b).digest();
return timingSafeEqual(hashA, hashB);
}
Normalizing the length this way (for example by hashing both values first) also means that neither an exception nor a fast exit reveals how long the key is.
GraphQL: Introspection Abuse and Argument Injections
On the Apollo Server (TypeScript) backend, introspection is often left enabled in production. This allows an attacker to obtain the full schema and find secret mutations or fields accessible only to admins. Injection via unvalidated arguments becomes even more dangerous.
Resolver vulnerability:
const resolvers = {
Query: {
user: (_: unknown, args: { id: string }) => {
// id isn't validated
return db.raw(`SELECT * FROM users WHERE id = '${args.id}'`);
}
}
};
Steps to protect:
- Disable introspection in production.
- Validate arguments using Zod or graphql-scalars.
Example of disabling introspection in configs:
const server = new ApolloServer({
typeDefs,
resolvers,
introspection: process.env.NODE_ENV !== 'production',
});
Validation example:
import { z } from 'zod';
const userIdSchema = z.string().uuid();
user: (_: unknown, args: { id: string }) => {
const id = userIdSchema.parse(args.id);
return db.query('SELECT * FROM users WHERE id = $1', [id]);
}
SSRF via URL parsing in Node.js
Many applications accept URLs from the user (for example, to import an avatar). Attackers bypass these checks using Unicode tricks or redirects.
Example of vulnerable code:
app.post('/import', async (req, res) => {
const { url } = req.body as { url: string };
const parsedUrl = new URL(url);
if (parsedUrl.hostname === 'localhost' || parsedUrl.hostname === '127.0.0.1') {
return res.status(400).send('Invalid URL');
}
const response = await fetch(url);
// ...
});
Bypassing the hostname check: this blocklist misses many ways to reach loopback and internal addresses. http://[::1]/ points at loopback but has hostname '[::1]'; http://169.254.169.254/ reaches cloud metadata; any public domain can resolve to an internal IP (DNS rebinding); and an allowed host can simply redirect anywhere. Shorthand and hex notations such as http://127.1/ or http://0x7f.0.0.1/ defeat naive string checks too (the WHATWG URL parser normalizes them to 127.0.0.1, but homegrown regex checks often don't). The credentials trick http://evil.com@127.0.0.1/ is the mirror image: everything before the @ is userinfo, so the real hostname is 127.0.0.1 even though a careless reviewer (or substring check) sees evil.com.
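A quick way to convince yourself how the WHATWG URL parser treats these inputs (runnable in plain Node, where URL is a global):

```typescript
// Everything before '@' is userinfo, not the host
const withUserinfo = new URL('http://evil.com@127.0.0.1/');
console.log(withUserinfo.username); // 'evil.com'
console.log(withUserinfo.hostname); // '127.0.0.1'

// IPv6 loopback sails past an IPv4-only blocklist
const v6 = new URL('http://[::1]/');
console.log(v6.hostname); // '[::1]'
```

Whatever parser your HTTP client uses must agree with the parser your validation uses; mismatched parsers are where these bypasses live.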
Protection:
- Don't roll your own hostname blocklist. Use a library like is-ip (or ipaddr.js for range checks) and verify the final IP after DNS resolution.
- Restrict the scheme to http and https. Disallow raw IP.
import { promises as dns } from 'dns';
async function resolveIp(url: string): Promise<string> {
const hostname = new URL(url).hostname;
const addresses = await dns.resolve4(hostname);
return addresses[0]; // simplified: takes only the first resolved address
}
// Checks for private ranges
// (10/8, 172.16/12, 192.168/16, 127/8)
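A simplified sketch of that private-range check (IPv4 only; real code should also handle IPv6 and re-check the address on every redirect):

```typescript
import { isIP } from 'net';

// Simplified IPv4 private/loopback/link-local check;
// anything that isn't plain IPv4 is treated as unsafe by default
function isPrivateIp(ip: string): boolean {
  if (isIP(ip) !== 4) return true;
  const [a, b] = ip.split('.').map(Number);
  return (
    a === 0 || a === 10 || a === 127 ||   // 0/8, 10/8, 127/8
    (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12
    (a === 192 && b === 168) ||           // 192.168.0.0/16
    (a === 169 && b === 254)              // link-local / cloud metadata
  );
}

console.log(isPrivateIp('169.254.169.254')); // true
console.log(isPrivateIp('8.8.8.8'));         // false
```

Deny-by-default on anything unparseable is deliberate: an SSRF filter that fails open is worse than no filter.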
RCE via unsafe deserialization in TypeScript
Some libraries allow functions to be serialized, or call eval during deserialization, for convenience. For example, serialize-javascript (used in Next.js) is safe by design, but packages like node-serialize and cookie-serialize can revive functions, which opens the door to RCE.
Example of vulnerable code:
import * as serialize from 'node-serialize';
app.get('/state', (req, res) => {
const state = serialize.unserialize(req.cookies.state);
// state can contain objects with embedded code
});
Attack example: a state cookie with a serialized object, where the rce field is: "_$$ND_FUNC$$_function(){ require('child_process').exec('rm -rf /') }".
Protection: Never use deserialization that can restore functions. Use only JSON. For example:
const state = JSON.parse(req.cookies.state || '{}');
If complex types are required, use zod for validation after JSON.parse, but do not run the code. Any imports of libraries with extended serialization should be prohibited.
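A sketch of that approach with a plain type guard (the AppState shape is hypothetical): the state is restored from JSON only, and anything that doesn't match the expected shape is rejected before use.

```typescript
// Hypothetical state shape; restored from plain JSON, never from a
// deserializer that can revive functions
interface AppState {
  theme: string;
  page: number;
}

function isAppState(value: unknown): value is AppState {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as any).theme === 'string' &&
    typeof (value as any).page === 'number'
  );
}

const raw: unknown = JSON.parse('{"theme":"dark","page":2}');
if (!isAppState(raw)) throw new Error('Bad state cookie');
console.log(raw.theme); // 'dark' — narrowed to AppState by the guard
```

JSON.parse can only produce data (objects, arrays, primitives), so combined with a guard like this there is simply no path to executable code.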
A practical security checklist for a TypeScript project
For the backend:
- Validate all incoming data via Zod / class-validator / io-ts. No any or as.
- Parameterized database queries, no string concatenation (even within Raw() and QueryBuilder methods).
- Strip __proto__ and constructor keys from objects (or use safe map/reduce).
- Fixed JWT algorithms, short token lifetimes, and refresh tokens with rotation.
- Secure CORS settings (no * with credentials).
- Logging without token/password leaks.
- Helmet-like middleware.
For the frontend:
- No dangerouslySetInnerHTML without DOMPurify.
- CSP headers prohibiting inline scripts.
- Proper use of encodeURIComponent and URL validation.
- Separation of sensitive environment variables: only what is truly needed is included in the client code.
- CSRF protection: SameSite=Strict/Lax, Origin check, tokens for state-changing requests.
- Regular and very careful updating of dependencies.
General practices:
- Linter with security rules (eslint-plugin-security).
- Static analysis with strict type checking; remember that every cast via as or any punches a hole in your guarantees.
- Runtime type checks (ts-runtime, type guards) for server data, as the API response may also be different from what you described in the interface.
Conclusion
TypeScript is a truly powerful tool, but it's not a bodyguard. Strong typing reduces bugs and makes code more predictable, but it doesn't eliminate classic web vulnerabilities. Today, we examined real-world examples where the compiler is completely blind to the dangers: from SQL substitution (even through high-level TypeORM operators) to prototype pollution, timing attacks, and deserialization.
Furthermore, the last five cases demonstrate that attacks are adapting to modern technologies, and defenses must evolve.
The main takeaway: think of types as the foundation on which you build a multi-layered security system. Validate everything at the boundaries of trust, never trust the client, and remember that any is not a type, but a security hole.
Security is a process, not a final state. Make your TypeScript not only strict but also secure.
Thanks for reading.
What other types of vulnerabilities would you like to explore, perhaps in more depth and from less obvious perspectives?

