As part of my ongoing rebuild of fundamentals, Day 5 moved away from solving CTF machines and into something far more educational:
I created my own vulnerable machine.
For the past few days I had been identifying bugs in other people’s systems. Today I wanted to understand something deeper:
Not just how to exploit vulnerabilities
but how developers accidentally create them.
The focus was Server-Side Template Injection (SSTI) using Flask and Jinja2.
Instead of immediately making a vulnerable application, I deliberately built two versions:
- A secure implementation
- A vulnerable implementation
This made the lesson much clearer than reading theory. I wasn’t memorizing payloads anymore. I was watching security break in real time.
What is SSTI?
Normally, a web server renders HTML templates and inserts user data into them.
Example:

```
Hello {{ username }}
```

The server replaces `{{ username }}` with the variable's value.
But if user input becomes part of the template itself, the server stops rendering a page and starts executing instructions.
Instead of:
User → sends text
Server → prints text
You get:
User → sends template code
Server → executes it
That is Server-Side Template Injection.
In Jinja2, expressions inside `{{ }}` are evaluated against real Python objects.
So if an attacker controls what appears inside those braces, they may reach the Python runtime.
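The difference between data and template code can be shown with Jinja2 directly, outside Flask. This is a minimal sketch of the two paths:

```python
from jinja2 import Template

user_input = "{{ 7 * 7 }}"

# Safe: the template source is fixed; user input arrives as a variable.
as_data = Template("Hello {{ username }}").render(username=user_input)
print(as_data)  # Hello {{ 7 * 7 }} -- just text, never evaluated

# Unsafe: user input is spliced into the template source itself.
as_code = Template("Hello " + user_input).render()
print(as_code)  # Hello 49 -- the engine evaluated the expression
```

Same input, same engine; the only difference is whether the input becomes part of the template source.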
Goal of the Lab
I created a minimal “blog message page”:
A user submits:
- name
- message
The server displays it back.
Simple enough to understand… but powerful enough to demonstrate a full compromise chain.
This small design let me practice:
- Flask routing
- request handling
- Jinja templating
- and most importantly — secure vs unsafe patterns
Part 1 — The Secure Version
I first built the correct implementation.
app_secure.py

```python
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return """
    <form action="/post" method="GET">
        Name: <input type="text" name="name"><br>
        Message: <input type="text" name="msg"><br>
        <input type="submit">
    </form>
    """

@app.route('/post')
def post():
    name = request.args.get('name', 'Anonymous')
    msg = request.args.get('msg', '')
    return render_template("message.html", name=name, msg=msg)

if __name__ == '__main__':
    app.run(debug=True)
```
templates/message.html

```html
<!DOCTYPE html>
<html>
<head><title>Message</title></head>
<body>
    <h1>New Message</h1>
    <b>{{ name }}</b> says:
    <p>{{ msg }}</p>
</body>
</html>
```
Why This Version is Secure
`render_template()` loads a static template file and passes the variables separately.
Jinja2 treats `name` and `msg` as data, not code.
It also auto-escapes HTML.
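The auto-escaping can be reproduced outside Flask by enabling it on a Jinja2 environment. A small sketch (Flask switches this on automatically for `.html` templates; here it is set by hand):

```python
from jinja2 import Environment

# Recreate Flask's autoescape setting on a bare Jinja2 environment.
env = Environment(autoescape=True)
page = env.from_string("<p>{{ msg }}</p>").render(msg="<script>alert(1)</script>")
print(page)  # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The script tag comes back as harmless entities, which is why passing variables this way also blocks XSS, not just SSTI.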
Test Payload
```
/post?name=test&msg={{7*7}}
```

Output:

```
{{7*7}}
```
No evaluation.
No injection.
Jinja never interpreted the payload, because the template structure came from a trusted file while the input was handled purely as data.
This was my first important realization:
Flask is secure by default.
I didn’t need to add filters, sanitizers, or regex.
The framework’s design already prevented the vulnerability.
Part 2 — Intentionally Introducing the Vulnerability
Now I broke it.
I replaced `render_template()` with `render_template_string()` and constructed the template using an f-string.
app_vulnerable.py

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/')
def index():
    return """
    <form action="/post" method="GET">
        Name: <input type="text" name="name"><br>
        Message: <input type="text" name="msg"><br>
        <input type="submit">
    </form>
    """

@app.route('/post')
def post():
    name = request.args.get('name', 'Anonymous')
    msg = request.args.get('msg', '')
    template = f"""
    <!DOCTYPE html>
    <html>
    <head><title>Message</title></head>
    <body>
        <h1>New Message</h1>
        <b>{name}</b> says:
        <p>{msg}</p>
    </body>
    </html>
    """
    return render_template_string(template)

if __name__ == '__main__':
    app.run(debug=True)
```
What Changed?
Instead of:
Template + Variables
I created:
User Input → Python String → Jinja Template
This small change is the entire vulnerability.
The application now compiles a template at runtime using user data.
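Note that `render_template_string()` itself is not the bug; the f-string is. A sketch of both calls side by side, using a Flask test request context so it runs without a server (the hard-coded `msg` stands in for `request.args`):

```python
from flask import Flask, render_template_string

app = Flask(__name__)
msg = "{{ 7 * 7 }}"  # pretend this came from request.args

with app.test_request_context():
    # Unsafe: user data is baked into the template source.
    unsafe = render_template_string(f"<p>{msg}</p>")
    # Safe: same function, but the source is constant and msg is a variable.
    safe = render_template_string("<p>{{ msg }}</p>", msg=msg)

print(unsafe)  # <p>49</p>
print(safe)    # <p>{{ 7 * 7 }}</p>
```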
Verification
Test again:

```
/post?name=test&msg={{7*7}}
```

Output:

```
49
```
The server evaluated the expression.
The application was no longer a webpage.
It had become a remote Python expression interpreter.
Proof of Server Access
Next payload:
```
/post?name=test&msg={{config}}
```
The page printed Flask configuration values.
That means the attacker can access server-side objects, not just HTML output.
This is the critical difference between:
XSS → affects users
SSTI → affects the server
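Reading `config` is only the first step. From any string literal, standard SSTI payloads walk Python's object graph up to `object` and enumerate every loaded class, which is the usual route to file reads or command execution. A hedged sketch that stops at counting (the exact subclass list varies by interpreter version, so no index into it is assumed):

```python
from jinja2 import Template

# From a bare string literal, the payload climbs to `object` and lists
# every loaded class -- the usual first step toward deeper access.
payload = "{{ ''.__class__.__mro__[1].__subclasses__() | length }}"
reachable = int(Template(payload).render())
print(reachable)  # hundreds of classes; exact count varies by interpreter
```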
Understanding the Vulnerability Chain
Here is what actually happened internally:
- User input enters Python
- Python f-string embeds it
- Jinja parses the string
- Jinja evaluates expressions
- Expressions access Python objects
Flow:

```
HTTP Request
    ↓
Python String
    ↓
Jinja Template Engine
    ↓
Python Runtime
    ↓
Operating System
```
SSTI is dangerous because the attack does not stay in the web layer.
It crosses directly into the application’s execution environment.
Why the Bug Exists
Individually, these are safe:
- Python f-strings
- Jinja templates
Combined incorrectly:
They form a code execution pipeline.
The vulnerability was not caused by a complex algorithm or memory corruption.
It was caused by a design decision.
Key Lessons Learned
1. Framework Defaults Matter
Flask encourages safe patterns. Deviating from them introduces risk quickly.
2. Never Build Templates from User Input
Templates must be static. Only variables should be dynamic.
3. SSTI is More Than a Web Bug
It is often a full system compromise entry point.
4. Security is About Context
A tool is not insecure by itself.
It becomes insecure when used outside its intended model.
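Lesson 2 condenses into one safe pattern: compile the template once from a constant, and only ever feed user input in as variables. A sketch using Jinja2 directly (the names are illustrative):

```python
from jinja2 import Environment

env = Environment(autoescape=True)

# Compiled once from a constant -- the template source is static.
MESSAGE_TEMPLATE = env.from_string("<b>{{ name }}</b> says: <p>{{ msg }}</p>")

def render_message(name: str, msg: str) -> str:
    # User input only ever enters as variables, never as template source.
    return MESSAGE_TEMPLATE.render(name=name, msg=msg)

print(render_message("test", "{{ 7 * 7 }}"))  # the braces stay literal text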