In the intricate world of software engineering, there's a unique challenge that often feels like deciphering an ancient manuscript: reviewing a colossal codebase.
Imagine being handed the digital equivalent of Tolstoy's "War and Peace" and being told, "Find any inconsistencies or problems with this text, and by the way, it's written in the original Russian." A monumental task for sure, but with the right tools and techniques, it becomes an engaging puzzle waiting to be solved.
Among the modern tools that have transformed this process is ChatGPT by OpenAI. This article will guide you through the multi-faceted journey of dissecting a vast codebase and show how ChatGPT, combined with manual effort, can enhance your productivity to produce a professional, high-quality analysis report.
Diving Deep: The Manual Review
Every great journey begins with a single step. In the case of codebase review, it starts with a manual examination. This is the phase where engineers immerse themselves in the code, line by line, understanding its structure, logic, and nuances. It's akin to reading a novel, where you get to know the characters (variables), follow the plot (logic), and occasionally stumble upon a twist (bugs).
Harnessing Code Search Tools with the Magic of Predictive Analysis
Once familiar with the general landscape, it's time to bring in the heavy machinery. Tools like the silver searcher (ag) act as a magnifying glass, helping engineers zoom in on specific sections, patterns, or anomalies in the code. Think of it as having a superpower that lets you instantly find any word or phrase in a vast library. It streamlines the process, making it more efficient and less prone to human oversight.
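To make this concrete, here is a hypothetical search session; the directory, file, and pattern are invented for illustration (they are not from the audited codebase), and a portable grep equivalent is included in case ag is not installed:

```shell
# Set up a tiny throwaway Ruby file to search (illustrative only).
mkdir -p /tmp/audit_demo
cat > /tmp/audit_demo/user.rb <<'EOF'
class User
  def self.run_raw(query)
    ActiveRecord::Base.connection.execute(query) # raw SQL, worth a closer look
  end
end
EOF

# With the silver searcher (fast, recursive, respects .gitignore):
#   ag 'connection\.execute' /tmp/audit_demo
# The portable grep equivalent:
grep -rn 'connection\.execute' /tmp/audit_demo
```

Each hit gives a file and line number, turning "somewhere in thousands of files" into a short list of places to read closely.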
One of the most intriguing ways ChatGPT was harnessed during the review process was in predictive analysis for code search. Instead of relying solely on traditional methods of searching for known issues or patterns, we used ChatGPT to generate potential strings or patterns that might yield interesting results. By feeding ChatGPT context about the codebase, its purpose, and known vulnerabilities in similar projects, we were provided with a series of unique search strings that did indeed lead to the discovery of relevant issues.
These AI-generated strings often led to discovering unconventional patterns, hidden redundancies, or even overlooked vulnerabilities. It was like having a seasoned detective whispering clues in the ear of an investigator, guiding them towards leads they might not have considered. This proactive approach, powered by ChatGPT, added an extra layer of depth to the review and made the experience much more enjoyable.
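A minimal sketch of how such AI-suggested patterns can be swept across a codebase (the pattern list, helper name, and file layout here are my own illustrative assumptions; in practice the strings came from ChatGPT and the sweep was done with ag):

```python
import re
from pathlib import Path

# Hypothetical candidate patterns of the kind ChatGPT might suggest for a
# Ruby security review; illustrative, not exhaustive.
CANDIDATE_PATTERNS = [
    r"Marshal\.load",       # unsafe deserialization of untrusted data
    r"\bsystem\(",          # shell command execution
    r"skip_before_action",  # possibly disabled authentication filters
]

def sweep(root: str, patterns=CANDIDATE_PATTERNS):
    """Return (pattern, file, line_no, line) hits for every pattern under root."""
    hits = []
    for path in Path(root).rglob("*.rb"):
        for line_no, line in enumerate(path.read_text().splitlines(), start=1):
            for pat in patterns:
                if re.search(pat, line):
                    hits.append((pat, str(path), line_no, line.strip()))
    return hits
```

Every hit still needs a human to judge whether it is a real finding; the model only proposes where to look.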
The Guardian at the Gates: Static Analysis with Snyk
After the manual and search-assisted analysis, the code underwent static analysis using Snyk's CLI. In this phase, the code is scanned automatically for vulnerabilities, security weaknesses, and potential threats.
It's like having a sentinel that watches over a fortress, ensuring no intruders can breach the walls. Snyk's CLI provides a comprehensive report, highlighting areas that need attention and offering solutions to fortify the code. It didn't catch everything, but used in conjunction with the other methods, it is a powerful tool in your code-review tool belt.
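The Snyk CLI can emit its findings as JSON (for example via `snyk test --json`), which makes them easy to triage programmatically. Below is a sketch of such triage; the report structure shown is a simplified assumption for illustration, not Snyk's full schema:

```python
import json
from collections import Counter

# A pared-down stand-in for Snyk JSON output (assumed, simplified shape).
SAMPLE_REPORT = json.dumps({
    "vulnerabilities": [
        {"id": "SNYK-RUBY-0001", "severity": "high", "title": "SQL Injection"},
        {"id": "SNYK-RUBY-0002", "severity": "medium", "title": "ReDoS"},
        {"id": "SNYK-RUBY-0003", "severity": "high", "title": "Path Traversal"},
    ]
})

def summarize(report_json: str) -> Counter:
    """Count findings by severity so the worst issues surface first."""
    report = json.loads(report_json)
    return Counter(v["severity"] for v in report.get("vulnerabilities", []))
```

A summary like this is a useful skeleton for the eventual report: the high-severity bucket dictates the tone of the opening paragraphs.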
Crafting the Magnum Opus: Writing the Analysis Report
Now, with a plethora of information and insights gathered, it's time to compile it all into a cohesive report. This is where ChatGPT shines. By feeding the model with our raw notes from the first round of analysis and our subsequent findings, it can generate a draft report in a coherent and structured manner. We utilized multi-shot prompting to refine the draft to match the desired tone and general content.
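One way to structure that multi-shot exchange programmatically (a sketch only: the helper names and default tone are mine, and the actual API call is omitted since it requires credentials; the key idea is that each refinement appends the prior draft plus a new instruction):

```python
def build_report_messages(raw_notes: str, tone: str = "formal, serious") -> list:
    """Assemble an initial chat message list for drafting the audit report."""
    return [
        {"role": "system",
         "content": "You are a security consultant writing client-facing audit reports."},
        {"role": "user",
         "content": (
             f"Read the following notes from a ruby application code audit "
             f"that focuses on security, and write a report using a {tone} "
             f"tone that can be presented to a client:\n\n{raw_notes}"
         )},
    ]

def refine(messages: list, previous_draft: str, instruction: str) -> list:
    """Append the model's prior draft and a follow-up instruction (one more 'shot')."""
    return messages + [
        {"role": "assistant", "content": previous_draft},
        {"role": "user", "content": instruction},
    ]
```

Each call to refine grows the conversation, so the model sees its own earlier draft alongside the correction, which is what steers the tone and content toward the desired result.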
However, the magic truly happens when the experience of a seasoned engineer meets the efficiency of AI. We refined the draft manually, adding insights, details, and expertise.
The result? A comprehensive analysis report that's both precise and accessible, while striking the right tone based on the content of the report (scary and stern, professional and matter-of-fact, or professional-but-with-good-news).
The synergy between ChatGPT and manual additions and edits ensures that the report is not just a bland collection of findings, but a well-crafted document that tells the story of the codebase, its strengths, its vulnerabilities, and remediation steps. The important thing to note is that everything generated by AI is reviewed and re-reviewed by an engineer to ensure accuracy and appropriateness for the goal of the review. Nothing written was allowed into the report unchecked.
Conclusion
Reviewing a mammoth codebase is no small feat. It requires patience, expertise, and the right tools. While manual reviews lay the foundation, tools like 'the silver searcher' and Snyk's CLI enhance the process.
When it comes to presenting the findings, the collaboration between ChatGPT and human expertise creates a report that's both insightful and engaging. In this modern age, the fusion of human intelligence and AI is truly revolutionizing the way we approach challenges in software engineering.
Prompt References
Here are a few slightly modified prompts I used for the report generation:
Read the following notes from a ruby application code audit
that focuses on security, and write a report using a formal,
serious, scary tone that can be presented to a client:
<raw notes here>
write a brief report in a few paragraphs that explains why
the use of (technology) for (something) is a security risk,
and expect that it will be provided to someone with limited
technical knowledge; however, the report should sound
serious enough to scare them appropriately as this is a
serious matter. it should also include specifics, and some
technical information as well.
that was excellent. in exactly the same tone and style,
add a section to the previous response that informs the
client (the reader) that the source code also contains
(a serious security issue), and why it is such an extreme
security risk.