In this series of articles about agents in cybersecurity, specifically pentesting, I will examine and describe various publicly available projects, explain their operating principles, test their functionality, verify the quality of their results, and summarize the findings. In parallel, we will develop our own pentesting agent from start to finish, testing various language models, both locally runnable ones and those accessible only via an API. We will also train our own large language model for pentesting.
In this article, I want to analyze a simple project: https://github.com/faizann24/rogue
1. Technology set
The repository has no release tags, so I will note that this review is based on the latest commit as of June 18, 2025.
Judging by the project description, it is positioned as an agent for automatic web scanning, most likely targeting web applications rather than the web as a general concept.
It is written in Python. OpenAI is used as the large language model provider; other providers cannot be used because the OpenAI client is hardcoded in the llm.py file. The gpt-4o and o4-mini models are used. As can be seen from the code in that file, there was an attempt to add Anthropic models, but it never got beyond a class declaration. All interaction with the web application is built on the Playwright browser automation framework.
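To illustrate what "hardcoded" means in practice, here is a minimal sketch of that kind of provider-locked wrapper; the class and method names are my own assumptions, not the project's actual llm.py code.

```python
# Hypothetical sketch of a provider-locked LLM wrapper (assumed names,
# not the project's actual llm.py code).
from openai import OpenAI

class LLM:
    def __init__(self, model: str = "gpt-4o"):
        # The client comes straight from the OpenAI SDK, so no other
        # provider can be plugged in without rewriting this class.
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def ask(self, system_prompt: str, user_prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content
```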
Now, regarding the agent features this project uses: they include tools, a system prompt describing the agent's role, and a knowledge base. Let's look at each of these in more detail. The tools are located in the tools.py file. Some of them are wrappers around Playwright functionality; there is also a method for running Python code generated by the language model, several unfinished functions, and a method that tries to capture the output of the tools that were used.
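As a rough illustration of what such tools look like, here is a hedged sketch of two of them: a thin wrapper over a Playwright action and a helper that executes model-generated Python and captures its output. The function names and details are assumptions, not the project's exact tools.py.

```python
# Hypothetical sketches of two agent tools (assumed names, not the exact tools.py).
import io
import contextlib
from playwright.sync_api import Page

def fill_field(page: Page, selector: str, value: str) -> str:
    """Thin wrapper over a Playwright action that the LLM can call as a tool."""
    page.fill(selector, value)
    return f"Filled {selector} with {value!r}"

def run_python(code: str) -> str:
    """Execute model-generated Python and return whatever it printed."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # unsafe in general; shown only to illustrate the pattern
    except Exception as exc:
        return f"Error: {exc}"
    return buffer.getvalue()
```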
The system prompts vary depending on the step. The main LLM system prompt casts the model as a security testing agent, adds some knowledge about SQL injection and other attack hints, describes the available Playwright functions, and explains their output.
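A plausible shape for such a prompt, paraphrased rather than quoted from the project, looks something like this:

```python
# Hypothetical shape of the security-agent system prompt (paraphrased, not quoted).
SECURITY_AGENT_PROMPT = """
You are a web application security testing agent.
You interact with the target page through Playwright tools such as
goto(url), fill(selector, value), click(selector) and get_page_content().
Hints: test for SQL injection with payloads like ' OR '1'='1 and for XSS
with payloads like <script>alert(1)</script>.
After each tool call, read the tool output and decide on the next step.
"""
```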
Checks for subdomains or passwords are done in the most primitive way possible: the agent simply iterates over candidates from a custom collection in the list directory.
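In practice that amounts to something like the following sketch: read a text file of candidates and probe each one. The file layout and function name are my assumptions.

```python
# Hypothetical sketch of the naive wordlist-based subdomain check (assumed file layout).
from pathlib import Path
import requests

def check_subdomains(domain: str, wordlist: Path = Path("lists/subdomains.txt")) -> list[str]:
    found = []
    for word in wordlist.read_text().splitlines():
        url = f"http://{word.strip()}.{domain}"
        try:
            # Any response below 500 is treated as "the subdomain exists".
            if requests.get(url, timeout=3).status_code < 500:
                found.append(url)
        except requests.RequestException:
            pass  # host does not resolve or is unreachable
    return found
```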
The description claims that the system operates automatically, but this is not true. The system has a tool that requests information from the user, and not only is it unpredictable when the system will stop and wait for a response, but the function itself is just a stub that doesn't use the entered data.
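The pattern being criticized looks roughly like this sketch: the tool prompts the user, but the answer goes nowhere. Again, the name and wording are assumptions.

```python
# Hypothetical sketch of the stubbed user-interaction tool described above.
def ask_user(question: str) -> str:
    answer = input(f"[agent] {question}\n> ")
    # The collected answer is never fed back into the agent's context,
    # so the "interactive" step has no effect on the scan.
    return "User input received."
```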
2. Description of the system operation
A general description of the application's workflow is shown in the diagram. Briefly: when run.py is launched, an Agent object is created. During initialization, a simple knowledge base is built from data scraped from third-party websites. In the diagram these are the pentestmonkey pages, but there are others, as can be seen in the knowledge_fetcher.py file. This data is passed to the LLM object, the scheduler is initialized with its system prompt (highlighted in red), and finally the Reporter is initialized.
Based on the parsed page, self.scanner.scan(url) produces a representation of the page for analysis through the Summarizer, which has its own system prompt. This happens in two stages: a simple parse, followed by a final pass through the LLM that tries to find the elements of interest.
Based on the findings, a scan plan is generated and executed until all of the plan's steps are completed. At each step the system analyzes the current state, calls tools, reads their output, and creates an intermediate summary using its own system prompt.
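Condensed into code, that plan-and-execute loop looks roughly like the sketch below. The prompt constants and the overall structure are assumptions based on the description above, not the project's exact code.

```python
# Hypothetical sketch of the plan-and-execute loop (assumed prompts and structure).
PLAN_PROMPT = "Produce a numbered list of security tests for this page."
STEP_PROMPT = "Pick one tool call for the current step, as 'tool_name argument'."
SUMMARY_PROMPT = "Summarize what the tool output means for the current step."

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for the OpenAI chat call shown in the earlier sketch."""
    raise NotImplementedError

def run_scan(page_summary: str, tools: dict) -> list[str]:
    # 1. Ask the model for a plan, one step per line.
    plan = call_llm(PLAN_PROMPT, page_summary).splitlines()
    history: list[str] = []
    # 2. Execute every step of the plan in order.
    for step in plan:
        action = call_llm(STEP_PROMPT, f"Step: {step}\nHistory: {history}")
        tool_name, _, argument = action.partition(" ")
        output = tools.get(tool_name, lambda _arg: "unknown tool")(argument)
        # 3. Intermediate summary fed back into the next iteration.
        history.append(call_llm(SUMMARY_PROMPT, f"Step: {step}\nOutput: {output}"))
    return history
```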
At the end, a final report is generated with its own system prompt.
3. Practical testing
3.1. Testing environment
For this study, I used the well-known Metasploitable 2 project, which includes Mutillidae, a deliberately vulnerable web application for practicing web attacks. I chose the simplest XSS vulnerability. I booted the machine on my isolated network and selected the following page for testing: http://192.168.127.166/mutillidae/index.php?page=add-to-your-blog.php
The simplest possible payload works here: <script>alert("a")</script>
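For reference, the same check can be reproduced by hand with a few lines of Playwright. The selectors below are assumptions about the Mutillidae page layout and may need adjusting for a particular Metasploitable build.

```python
# Manual reproduction of the same check with Playwright. The selectors are
# assumptions about the Mutillidae page layout and may need adjusting.
from playwright.sync_api import sync_playwright

URL = "http://192.168.127.166/mutillidae/index.php?page=add-to-your-blog.php"
PAYLOAD = '<script>alert("a")</script>'

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    fired = []
    # Any alert() raised by the stored payload surfaces as a dialog event.
    page.on("dialog", lambda d: (fired.append(d.message), d.accept()))
    page.goto(URL)
    page.fill("textarea", PAYLOAD)                      # assumes one blog-entry textarea
    page.locator("input[type='submit']").first.click()  # clicks the first submit button found
    page.wait_for_load_state()
    print("XSS triggered:", bool(fired), fired)
    browser.close()
```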
3.2. Launch technique
Run with the command:
python run.py -u "http://192.168.127.166/mutillidae/index.php?page=add-to-your-blog.php"
By default, the LLM model is o4-mini.
3.3. Observation and analysis of the process
We see that the system has selected the o4-mini model and that the RAG data is disabled.
The system saw that JavaScript was triggered and realized it needed to run an XSS check, but it immediately jumped to complex checks and went down the wrong path. There were attempts at Stored XSS Injection via Client-Side Filter Evasion, handled completely ineptly for such a simple vulnerability, which led to this failing call: fill("textarea[name='content']", "<img src=x onerror=alert(1)>")
The system then tried Stored XSS Injection via Alternative Payload Encoding.
During the scan, for some reason, it jumped to another page.
It took 10-15 minutes to scan one page; if you need to scan 20-30 pages, that's hours per site. In real life, you're given a client network with several web services and a bunch of other services that need to be scanned within a reasonable time, and no one will wait a week.
The report was completely disappointing. Even though I saw that the attack succeeded when the injected code executed, the result was not recorded, and the report turned out to contain no vulnerabilities at all.
The second time, the system was launched with the simple RAG enabled.
And I did see the coveted XSS.
But the result didn't change much; if anything, it got worse. The vulnerability that was found wasn't tested at the same stage this time, only at the very end.
And the system asked the user for input.
This didn't help either; it still required user interaction, and the final report again ended up empty of useful data.
As testing showed, the system has numerous issues: it fails to record successful actions, builds complex plans that confuse it in simple scenarios, lacks full automation, requires interactive user input, and more. All of these issues have been resolved in our SxipherAI system.
Summary
• The system is not fully automated.
• The system uses only OpenAI.
• It cannot fully test web applications; it attempts XSS, CSRF, and SQL injection, but all to no avail.
• Even with the most basic testing labs, which even a novice pen tester could handle, the system failed.
• The project was abandoned without ever reaching a logical conclusion.
• There were attempts to use tools, Playwright, and the Summarizer, but to no real effect.
In upcoming articles, we'll demonstrate how the SxipherAI system handles such websites, specifically how SxipherAI deals with XSS vulnerabilities, and we'll develop our own simple agent for checking for XSS vulnerabilities.
And next up will be PentestGPT.