Most security engineering interviews have a technical challenge. Sometimes it’s a take-home CTF, sometimes they spin up a VM and hand you an IP. For this one, a company sent me a GitHub Gist with a handful of Python files and said: “A team wants to deploy this. Tell us if it’s safe to launch.”
So I read the code. Then I dockerized it. Then I broke it nine different ways.
This is what that process looked like, start to finish. If you’re prepping for AppSec interviews or you write Flask apps, there’s probably something in here for you.
The app was a Flask employee directory: users sign up with a name, bio, email, and profile picture URL, then browse other employees. Flask backend, SQLite database, Jinja2 templates, bcrypt for passwords. Six Python/template files and a CSS stylesheet.
- `app.py` - routes, User model, middleware
- `auth.py` - password hashing, sessions, employee lookup
- `config.py` - environment-based configuration (local, test, production)
- `pictures.py` - profile picture download
- `*.jinja2` - templates (signin, signup, directory, base)
- `style.css` and `database.db`

No running instance was provided. Just source code.
I opened config.py first. That file tells you everything about how the app is wired: what secrets exist, what modes are available, how the environment gets selected. Then app.py for routes and input handling, auth.py for the authentication logic, and pictures.py for anything touching the filesystem or network.
Most of what I found came from those four files. After the code review I dockerized the app on port 5001 and verified every finding with a working exploit.
| Severity | Count | Findings |
|---|---|---|
| CRITICAL | 4 | RCE, Path Traversal, SSRF, XSS |
| HIGH | 2 | Hardcoded secrets, Static bcrypt salt |
| MEDIUM | 3 | Missing CSRF, Account enumeration, Outdated jQuery |
| CVSS 3.1 | CWE |
|---|---|
| 9.9 | CWE-489 (Active Debug Code), CWE-1188 (Insecure Default Initialization) |
The first thing I read in config.py was this:
```python
# config.py
class BaseConfig:
    DEBUG = True  # Insecure default - inherited by all configs

config_map = {
    "default": BaseConfig,  # Falls back here when FLASK_ENV unset/invalid
    "local": LocalConfig,
    "test": LocalConfig,
    "production": ProductionConfig,  # Only this has DEBUG=False
}

def configure(app: Flask):
    app.config.from_object(
        config_map.get(os.getenv("FLASK_ENV"), config_map["default"])  # Defaults to insecure
    )
```
ProductionConfig sets DEBUG=False, but it’s the only config that does. BaseConfig has DEBUG=True, and that’s what everything inherits from. It’s also the fallback default. So:
- FLASK_ENV unset (common in rushed deployments)? Debug mode.
- FLASK_ENV has a typo, like "prod" instead of "production"? Debug mode.
- FLASK_ENV set to literally anything not in the map? Debug mode.

The only way to get secure behavior is to set the variable to exactly "production". Every other possibility, including forgetting to set it at all, results in the Werkzeug interactive debugger being exposed to the network.
Obviously, debug being on in my local Docker container isn’t a finding. That’s expected for development. The problem is architectural: the insecure path is the default. If this ships to production and someone forgets one env var, or makes one typo, the app serves an interactive Python console to the internet.
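Stripped of Flask, the selection logic reduces to a dict lookup with an insecure fallback. A toy sketch (the strings stand in for the config classes):

```python
# Toy model of the config selection: a dict lookup whose fallback is insecure.
config_map = {"production": "DEBUG=False"}
insecure_default = "DEBUG=True"  # stands in for BaseConfig

for env in (None, "", "prod", "Production", "production"):
    print(repr(env), "->", config_map.get(env, insecure_default))
# Only the exact string "production" avoids the insecure default.
```

Everything except the exact key, including `None` and case variants, lands on the default.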
The Werkzeug debugger PIN gets printed to stdout on startup. Anyone with access to logs (Docker, CloudWatch, a compromised log aggregator, or even the path traversal bug I found later) can grab it:

Punch in the PIN at /console and you get a full Python REPL. In this container the process ran as root:
```python
import os
os.popen('whoami').read()   # Returns: 'root\n'
os.popen('id').read()       # Returns: 'uid=0(root) gid=0(root) groups=0(root)\n'
os.listdir('/app')          # List application files
open('/etc/passwd').read()  # Read system files
```

Flip the default. Make production the baseline and require developers to explicitly opt into debug mode:
```python
class BaseConfig:
    DEBUG = False  # Secure default - fail closed

class LocalConfig(BaseConfig):
    DEBUG = True  # Explicit opt-in for local development only

class ProductionConfig(BaseConfig):
    DEBUG = False

config_map = {
    "local": LocalConfig,
    "test": LocalConfig,
    "production": ProductionConfig,
}

def configure(app: Flask):
    env = os.getenv("FLASK_ENV", "production")  # Default to production
    if env not in config_map:
        raise ValueError(f"Invalid FLASK_ENV: {env}")  # Fail on typos
    app.config.from_object(config_map[env])
```
Now a missing FLASK_ENV defaults to production. A typo raises an error. Debug mode only turns on when you ask for it.
| CVSS 3.1 | CWE |
|---|---|
| 9.1 | CWE-22 (Path Traversal) |
The catch-all route has no authentication check and no path validation:
```python
@app.route("/<path:path>")
def catch_all(path: str):  # NO AUTHENTICATION CHECK
    """Serve static assets like favicon.ico"""
    return send_file(path)  # NO PATH VALIDATION
```
Anyone on the network can request any file the application process can read. No login required.
Downloading the database is one curl command:
```shell
curl http://localhost:5001/database.db -o database.db
```

URL-encoded dots get you outside the web root:
```shell
curl "http://localhost:5001/%2e%2e/%2e%2e/%2e%2e/etc/passwd"
```
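Why the encoded form works: `%2e` decodes back to a plain dot before any path handling runs, so naive filters that look for the literal string `../` never see it. A quick illustration with the standard library:

```python
from urllib.parse import unquote

# The percent-encoded dots decode to ".." server-side, before send_file()
# (or any string-matching filter) ever looks at the path.
print(unquote("%2e%2e/%2e%2e/%2e%2e/etc/passwd"))  # ../../../etc/passwd
```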

The best option is to delete the catch-all entirely and let Flask serve static files the way it was designed to:
```python
# Option 1 (Best): Use Flask's static folder, delete the catch-all route
app = Flask(__name__, static_folder='static', static_url_path='/static')

# Option 2: Whitelist + authentication
from werkzeug.security import safe_join

ALLOWED_STATIC_FILES = {'favicon.ico', 'style.css'}

@app.route("/<path:path>")
@login_required
def catch_all(path: str):
    if path not in ALLOWED_STATIC_FILES:
        abort(404)
    safe_path = safe_join(app.static_folder, path)
    if safe_path is None:
        abort(404)
    return send_file(safe_path)
```
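If you're curious what `safe_join` actually buys you, here is a stdlib approximation of the idea (a sketch, not werkzeug's exact implementation): normalize the joined path, then reject anything that escapes the base directory.

```python
import os.path

def safe_join_sketch(base: str, untrusted: str):
    # Approximation of werkzeug's safe_join: normalize the joined path and
    # return None if the result escapes the base directory.
    candidate = os.path.normpath(os.path.join(base, untrusted))
    if candidate != base and not candidate.startswith(base.rstrip("/") + os.sep):
        return None
    return candidate

print(safe_join_sketch("/app/static", "style.css"))         # /app/static/style.css
print(safe_join_sketch("/app/static", "../../etc/passwd"))  # None
```

Normalization happens before the prefix check, which is what defeats `..` sequences, encoded or not.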
| CVSS 3.1 | CWE |
|---|---|
| 9.3 | CWE-918 (SSRF) |
During signup, users provide a URL for their profile picture. The server fetches whatever you give it. No scheme validation, no hostname check, no content-type verification:
```python
def download_picture(url: Optional[str], pictures_dir: str):
    Path(pictures_dir).mkdir(exist_ok=True)
    res = requests.get(url)  # User controls ENTIRE URL - no validation
    filename = Path(pictures_dir) / f"{uuid4()}.jpg"
    with open(filename, "wb") as fp:
        fp.write(res.content)  # Arbitrary content saved as .jpg
    return str(filename)
```
You control the scheme, the host, the port, the path. The server will make the request and save whatever comes back.
I created an SVG file with a JavaScript payload, hosted it on a Python HTTP server on my machine, and submitted the URL as my profile picture:

The HTTP server logs confirmed the app fetched it:

Then I pulled the stored file back out (using the path traversal bug from the previous finding) and confirmed the raw SVG was saved as-is. No conversion, no validation. The .jpg extension is cosmetic:

This feeds directly into the XSS finding below. And in a cloud environment, the same mechanism reaches metadata endpoints:
```shell
# SSRF to internal endpoints
curl -X POST http://localhost:5001/signup \
  -d "[email protected]&name=Test&bio=Bio&password=pass&picture=http://127.0.0.1:5001/database.db"

# In AWS/GCP/Azure:
# picture=http://169.254.169.254/latest/meta-data/
# picture=http://metadata.google.internal/computeMetadata/v1/
```
Validate the scheme, resolve the hostname, block RFC 1918 ranges, check content-type, and verify the response is actually an image:
```python
import ipaddress
import socket
from io import BytesIO
from urllib.parse import urlparse

from PIL import Image

BLOCKED_IP_RANGES = [
    ipaddress.ip_network('10.0.0.0/8'),
    ipaddress.ip_network('172.16.0.0/12'),
    ipaddress.ip_network('192.168.0.0/16'),
    ipaddress.ip_network('127.0.0.0/8'),
    ipaddress.ip_network('169.254.0.0/16'),
]

def download_picture(url: Optional[str], pictures_dir: str) -> Optional[str]:
    if not url:
        return None
    parsed = urlparse(url)
    if parsed.scheme not in ('http', 'https'):
        raise ValueError("Invalid URL scheme")

    # Resolve the hostname, then check the IP against blocked ranges
    try:
        ip = socket.gethostbyname(parsed.hostname)
    except (TypeError, socket.gaierror):
        raise ValueError("Invalid hostname")
    ip_obj = ipaddress.ip_address(ip)
    for blocked_range in BLOCKED_IP_RANGES:
        if ip_obj in blocked_range:
            raise ValueError("Blocked IP address")

    response = requests.get(url, timeout=5, allow_redirects=False, stream=True)

    # Verify content type
    content_type = response.headers.get('Content-Type', '')
    if not content_type.startswith('image/'):
        raise ValueError("Not an image")

    # Validate it's actually an image
    content = response.content
    img = Image.open(BytesIO(content))
    img.verify()

    Path(pictures_dir).mkdir(exist_ok=True)
    filename = Path(pictures_dir) / f"{uuid4()}.jpg"
    with open(filename, "wb") as fp:
        fp.write(content)
    return str(filename)
```
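The blocklist check is easy to sanity-check on its own, with no network I/O. A standalone sketch of just that piece:

```python
import ipaddress

BLOCKED_IP_RANGES = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
              "127.0.0.0/8", "169.254.0.0/16")
]

def is_blocked(ip: str) -> bool:
    # True if the resolved address falls in any private/loopback/link-local range
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_IP_RANGES)

print(is_blocked("169.254.169.254"))  # True - the cloud metadata endpoint
print(is_blocked("127.0.0.1"))        # True - loopback
print(is_blocked("93.184.216.34"))    # False - a public address
```

Note this still leaves DNS-rebinding style TOCTOU gaps (resolve once, connect later); pinning the connection to the checked IP closes that, but it's beyond a quick fix.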
| CVSS 3.1 | CWE |
|---|---|
| 9.3 | CWE-79 (XSS), CWE-116 (Improper Encoding) |
This one has a root cause that’s easy to miss. Flask’s select_jinja_autoescape() function only enables auto-escaping for .html, .htm, .xml, and .xhtml files. Every template in this application uses the .jinja2 extension. That extension isn’t in the list. Auto-escaping is off across the board.
```python
# Flask 0.12.2 behavior
>>> app.select_jinja_autoescape('signin.jinja2')
False  # Auto-escape DISABLED for .jinja2 files
>>> app.select_jinja_autoescape('signin.html')
True   # Auto-escape enabled for .html files
```
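What auto-escaping would have done is easy to see with the standard library's `html.escape`, which applies essentially the same transformation Jinja2's escaping does:

```python
from html import escape

payload = '<script>alert(document.cookie)</script>'

# With auto-escaping on, the payload is neutralized into inert text:
print(escape(payload))  # &lt;script&gt;alert(document.cookie)&lt;/script&gt;

# With auto-escaping off (the .jinja2 case), it's emitted verbatim
# and the browser executes it:
print(payload)
```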
Every user-controlled variable renders as raw HTML:
| Template | Variable | XSS Type |
|---|---|---|
| `signin.jinja2` | error flash message | Reflected |
| `signup.jinja2` | error flash message | Reflected |
| `directory.jinja2` | employee name | Stored |
| `directory.jinja2` | employee bio | Stored |
| `directory.jinja2` | picture URL (`src` attribute) | Stored |
| `directory.jinja2` | employee email | Stored |
A script tag placed in the bio field during signup fires on every page load for every user who views the directory:

The reflected variant goes through error messages:
```shell
curl -X POST http://localhost:5001/signin \
  -d "email=<script>alert(document.cookie)</script>&password=test"

# Response HTML:
# <div class="alert alert-danger" role="alert">
#   User with <script>alert(document.cookie)</script> does not exist
# </div>
```
And remember the SSRF finding? An attacker can store an SVG with embedded JavaScript as a “profile picture.” The browser renders SVG based on content, not file extension, so the .jpg extension doesn’t matter.
Rename the templates:
```shell
mv signin.jinja2 signin.html
mv signup.jinja2 signup.html
mv directory.jinja2 directory.html
mv base.jinja2 base.html
# Update render_template() calls in app.py accordingly
```
Or force auto-escaping for .jinja2 files:
```python
from jinja2 import select_autoescape

app.jinja_env.autoescape = select_autoescape(
    enabled_extensions=("html", "xml", "jinja2"),
    default_for_string=True,
)
```
| CVSS 3.1 | CWE |
|---|---|
| 8.2 | CWE-798 (Hardcoded Credentials) |
Both secret keys are right there in the source:
```python
class BaseConfig:
    SECRET_KEY = "0000000000000000000000000000000000000000000="  # Trivially weak

class ProductionConfig(BaseConfig):
    SECRET_KEY = "s337eH1wD9rb42dIb4QsfcTghAWLE5c2DIt3ROpVjv4="  # Committed to source
```
Pair this with the path traversal bug and an unauthenticated user can just pull config.py off disk:
```shell
curl http://localhost:5001/config.py | grep SECRET_KEY
```

With the secret key an attacker can forge Flask session cookies, impersonate any user, and bypass all authentication.
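To make the impact concrete, here is a conceptual sketch of session forgery. This mimics, but is not byte-for-byte, Flask's real format (itsdangerous derives a salted key and appends a timestamp); the point it demonstrates is that the session is signed, not encrypted, so the key is everything:

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"s337eH1wD9rb42dIb4QsfcTghAWLE5c2DIt3ROpVjv4="  # leaked via /config.py

def sign_session(payload: dict) -> str:
    # Conceptual only: HMAC over a base64 payload, the core of how Flask
    # session cookies are protected. Anyone holding SECRET_KEY can produce
    # a cookie the server will accept.
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET_KEY, body, hashlib.sha1).digest())
    return (body + b"." + sig).decode()

forged = sign_session({"user_id": 1})  # impersonate user id 1
print(forged)
```

In practice you'd use flask-unsign or itsdangerous with the leaked key rather than hand-rolling this.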
```python
import os

class BaseConfig:
    SECRET_KEY = os.environ.get('SECRET_KEY')

    @classmethod
    def validate(cls):
        if not cls.SECRET_KEY:
            raise ValueError("SECRET_KEY environment variable required")

class ProductionConfig(BaseConfig):
    SECRET_KEY = os.environ['SECRET_KEY']  # Raises KeyError if missing

# Generate a secure key:
# python -c "import secrets; print(secrets.token_hex(32))"
```
| CVSS 3.1 | CWE |
|---|---|
| 8.5 | CWE-760 (Predictable Salt) |
Every password in the database is hashed with the same salt:
```python
# config.py
BCRYPT_SALT = b"$2b$12$BuJdiyOo2JcUDFgYFhU6Lu"  # Same for ALL users

# auth.py
def hash_password(plaintext: str, salt: bytes) -> str:
    return bcrypt.hashpw(bytes(plaintext, "ascii"), salt).decode("ascii")
```
Same salt for everyone means identical passwords produce identical hashes. You can spot password reuse across accounts just by eyeballing the database. It also means you can pre-compute a rainbow table for this specific salt and crack every password in parallel instead of one at a time.
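The property is easy to demonstrate with any salted KDF; here PBKDF2 from the standard library stands in for bcrypt, since the salt behavior is the same:

```python
import hashlib
import os

def kdf(password: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for bcrypt here; the salt property is identical.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

STATIC_SALT = b"same-salt-for-everyone"

# Static salt: two users with the same password get identical hashes.
print(kdf("hunter2", STATIC_SALT) == kdf("hunter2", STATIC_SALT))  # True

# Unique random salts: same password, unlinkable hashes.
print(kdf("hunter2", os.urandom(16)) == kdf("hunter2", os.urandom(16)))  # False
```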
Bonus bug: bytes(plaintext, "ascii") throws a UnicodeEncodeError on any password with non-ASCII characters. Accented letters, CJK characters, emoji, all broken.
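The encoding bug triggers in one line:

```python
# Any non-ASCII character in the password crashes hash_password()
try:
    bytes("pässword", "ascii")
except UnicodeEncodeError as exc:
    print(exc)  # 'ascii' codec can't encode character '\xe4' ...
```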

```shell
sqlite3 database.db "SELECT email, password_hash FROM user;"
# [email protected]|$2b$12$BuJdiyOo2JcUDFgYFhU6Lu...
# [email protected]|$2b$12$BuJdiyOo2JcUDFgYFhU6Lu...
#                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Identical
```

Let bcrypt do what bcrypt was designed to do. Generate a unique random salt per password:
```python
import bcrypt

def hash_password(plaintext: str) -> str:
    return bcrypt.hashpw(
        plaintext.encode('utf-8'),     # UTF-8 for full Unicode support
        bcrypt.gensalt(rounds=12)      # Unique salt per password
    ).decode('utf-8')

def check_password(plaintext: str, password_hash: str) -> bool:
    return bcrypt.checkpw(
        plaintext.encode('utf-8'),
        password_hash.encode('utf-8')
    )

# After deploying: force password resets for all users
```
| CVSS 3.1 | CWE |
|---|---|
| 5.4 | CWE-352 (CSRF) |
Every form in the application submits without a CSRF token:
```html
<form action="" method="post">
  <!-- No CSRF token -->
  <input type="email" name="email" ...>
  <input type="password" name="password" ...>
</form>
```
Fix: Add Flask-WTF:
```python
from flask_wtf.csrf import CSRFProtect

csrf = CSRFProtect(app)
```

```html
<form method="post">
  <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
  ...
</form>
```
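Under the hood, CSRF protection is just a secret bound to the session that a cross-site form can't produce. A minimal sketch of the idea (Flask-WTF handles this, plus token expiry, for you; the names here are illustrative):

```python
import hashlib
import hmac
import secrets

CSRF_SECRET = secrets.token_bytes(32)  # server-side secret, illustrative

def csrf_token(session_id: str) -> str:
    # Bind the token to the session so an attacker's form can't guess it.
    return hmac.new(CSRF_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf(session_id: str, submitted: str) -> bool:
    # Constant-time comparison to avoid leaking the token byte by byte.
    return hmac.compare_digest(csrf_token(session_id), submitted)

token = csrf_token("session-abc")
print(check_csrf("session-abc", token))     # True
print(check_csrf("session-abc", "a" * 64))  # False
```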
| CVSS 3.1 | CWE |
|---|---|
| 5.3 | CWE-204 (Observable Response Discrepancy) |
The login form tells you whether the email exists:
```python
if user:
    if auth.check_password(password, user.password_hash, ...):
        auth.set_user(user.id)
    else:
        flash("Incorrect password")  # Reveals: user EXISTS
else:
    flash(f"User with {email} does not exist")  # Reveals: user DOES NOT exist
```
The {email} interpolation also feeds directly into the XSS bug.
Fix:
```python
if not user or not auth.check_password(password, user.password_hash, ...):
    flash("Invalid email or password")
```
| CVSS 3.1 | CWE |
|---|---|
| 6.1 | CWE-1104 (Unmaintained Third Party Components) |
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
```
Three known CVEs:
| CVE | Description |
|---|---|
| CVE-2020-11022 | XSS via HTML passed to DOM manipulation methods |
| CVE-2020-11023 | XSS via HTML containing `<option>` elements |
| CVE-2019-11358 | Prototype pollution via jQuery.extend(true, ...) |
Fix:
```html
<script src="https://code.jquery.com/jquery-3.7.1.min.js"
        integrity="sha256-/JqT3SQfawRcv/BIHPThkBvs0OEvtFFmqPF/lYI/Cxo="
        crossorigin="anonymous"></script>
```
Not everything was broken. The app used bcrypt (even if the salt was static), had a logical separation between config environments, and followed a clean enough structure that the issues were all fixable without a rewrite. Most of the fixes above are one-line to ten-line changes.
In an interview, calling out what works is part of the assessment. It shows you’re evaluating the whole picture, not just fishing for things to flag.
This is the part interviewers actually care about. Individual bugs are table stakes. The question is whether you can see how they chain together.
Path traversal lets an unauthenticated user download config.py, which contains the SECRET_KEY. With the secret key you can forge session cookies and impersonate any user. SSRF lets you store a malicious SVG as a profile picture. Because auto-escaping is disabled on .jinja2 templates, that SVG’s JavaScript executes in every user’s browser when they view the directory. The static bcrypt salt means that if you pull the database (also via path traversal), cracking one password cracks every account that shares it.
Nine findings. But really one attack chain.
A few things I’ve picked up from going through this type of challenge at multiple companies.
They care about how you work, not just what you find. Opening config.py before touching a browser tells the interviewer you have a methodology. Explaining why send_file() with user input is dangerous (not just pointing at it) shows depth.
Dockerize it yourself. When they hand you source without a running instance, spinning it up on your own shows initiative. Screenshots of working PoCs land differently than theoretical descriptions.
Write remediation, not just findings. Anyone can grep for DEBUG=True. Showing the secure-by-default pattern, or knowing that Flask’s auto-escape depends on file extension, is what separates a vulnerability list from an actual code review.
Think in chains. Path traversal alone is bad. Path traversal that leaks the secret key, which enables session forgery, which gives access to a directory full of stored XSS? That’s a story. Interviewers want to see you connect the dots.
None of these bugs are novel. They’re all on the OWASP Top 10 and they show up in CTFs constantly. That’s the point. These challenges test whether you can find common issues systematically, explain them clearly, and recommend fixes that actually address the root cause.