API testing is one of those tasks every developer knows is essential, but few enjoy. Manually writing test cases for every endpoint is repetitive, error-prone, and consumes valuable time that could be spent building features. Edge cases are often skipped, test coverage suffers, and teams frequently find themselves maintaining brittle scripts.
That’s where automation changes the game. By pairing Cursor, an AI-powered coding assistant, with Requestly's local-first API testing and mocking platform, you can offload the grunt work of writing tests to AI while keeping execution secure and reproducible on your own system. In this article, we’ll walk through how to set up Cursor with Requestly, generate test cases automatically, and run them end-to-end so that you can focus less on boilerplate and more on shipping features.
Why Automate API Test Case Generation?
Traditionally, writing API test cases involves manually scripting tests using the request and response schema. This process is slow, error-prone, and often results in incomplete coverage because developers don’t have time to test every possible scenario.
The challenges include:
Manual scripting overhead: Every endpoint and method must be coded by hand.
Incomplete coverage: Edge cases and negative tests are frequently skipped.
High setup cost: Establishing a reusable test framework requires a significant amount of time.
This is where Cursor provides a significant advantage. Because it’s an AI-powered coding assistant, Cursor can quickly generate test cases based on endpoint definitions, documentation, or even just example payloads. It understands context and can suggest multiple variations, including edge and error scenarios, without requiring hours of manual coding.
With Requestly's Local Workspaces, integrating Cursor becomes a breeze. In a local workspace, all of your requests are stored on the local filesystem as JSON files. For example, a simple "Login" request that sends a GET to {{base_url}}/user becomes a JSON file like this:
{
  "name": "Login",
  "request": {
    "type": "http",
    "url": "{{base_url}}/user",
    "scripts": {
      "preRequest": "",
      "postResponse": "rq.test(\"Request is successful (2XX)\", () => {\n rq.response.to.be.ok\n});"
    },
    "method": "GET",
    "queryParams": [],
    "headers": [],
    "body": null,
    "contentType": "text/plain",
    "auth": {
      "currentAuthType": "INHERIT",
      "authConfigStore": {}
    }
  }
}
Note that the post-response script is stored in the scripts.postResponse key as a string.
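Unescaped, the script above is just an ordinary Chai-style test:

rq.test("Request is successful (2XX)", () => {
  rq.response.to.be.ok
});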
Unlike cloud-based tools such as Postman, Requestly's local-first storage means you can feed these files directly to Cursor, and Cursor can generate test cases and edit the files in place. The files can also be checked into version control. With this setup, teams can automate the generation of API tests and collaborate on and review the test files, saving a huge amount of time.
API Testing with Requestly and Cursor
As an example of a practical API to test, we'll be using the GitHub REST API.
To follow along with this tutorial, you'll need:
- Git installed on your computer.
- A GitHub Personal Access Token (PAT). Choose a repository where you have write access and give the token read and write access to that repository's issues.
- The Requestly desktop app.
- Cursor.
Setting up Requestly
I have already created a repo that contains a Requestly collection. Clone the repo to your machine.
Start Requestly, click the Workspaces dropdown in the top-left corner, and select + Add. Choose the directory where you cloned the repo and enter requestly-demo as the workspace name. This loads the workspace of the same name that already exists in the directory.
Once the workspace is ready, navigate to the APIs tab, where you'll find the prepared GitHub API collection. In the Environments tab, you'll find an Environment named "Dev". Update the variables with your credentials:
- token: Your PAT
- owner: Your GitHub username
- repo: The name of the repo that you chose
Note: Since environments store the PAT in plaintext, it's recommended to add the requestly-demo/environments directory to .gitignore. Alternatively, you could use a pre-commit hook to keep the PAT out of the environment JSON when committing.
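If you go the hook route, one simple variant (a sketch only; it blocks the commit rather than rewriting the file) is to fail whenever a staged environment file contains something that looks like a GitHub token:

#!/bin/sh
# .git/hooks/pre-commit (illustrative sketch): refuse to commit anything that looks like a GitHub PAT.
if git diff --cached -U0 -- "requestly-demo/environments" | grep -Eq 'ghp_|github_pat_'; then
  echo "A GitHub token appears in requestly-demo/environments. Remove it before committing." >&2
  exit 1
fi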
Setting up Cursor Rules
Out of the box, Cursor can already generate test cases for requests, since Requestly uses the familiar Chai.js assertion syntax. However, Requestly adds some syntactic sugar and abstractions on top of Chai.js. We can tell Cursor about these features and give it additional instructions by creating a project rule.
Create a .cursor/rules directory in the root of the project and add a file named test-rules.mdc inside it. These instructions tell Cursor everything it needs to know to generate tests.
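The exact wording is up to you; a minimal sketch of such a rule file (extend the Requestly-specific bullets with anything else from Requestly's scripting docs that you rely on) could look like this:

---
description: How to write post-response tests for Requestly request files
globs: requestly-demo/**/*.json
alwaysApply: false
---

- Requestly request files store post-response tests as a single string in the scripts.postResponse key.
- Tests use a Chai-style syntax: rq.test("name", () => { ... }) with assertions such as rq.response.to.be.ok.
- Keep the JSON valid: newlines and quotes inside the script string must be escaped (\n, \").
- Cover the happy path plus at least one edge or negative case per request.
- Do not modify any keys other than scripts.postResponse.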
As you can see, the rule spells out some of the testing features provided by Requestly and also adds a few custom instructions. The globs: requestly-demo/**/*.json line tells Cursor to automatically load this rule whenever you're working with one of the Requestly files.
Since we're using the GitHub API, which is a well-documented public API, Cursor is already aware of the request and response structures. However, if you're testing your own API, it's a good idea to let Cursor know about your schemas by referencing your OpenAPI specification file. You can reference files in the rules by using @filename. For example, you can add a line like so:
Use the OpenAPI specification for request and response structure: @schema.json
Generating Tests with Cursor
Now that the Cursor rules are in place, let's try them out. In Cursor, open the requestly-demo/GitHub REST API/a862b625-82ce-4ad2-9c41-4cda55e1bb5e.json file, which defines the request that creates an issue in your repo.
In Cursor's chat, write "Generate test cases for this request. Show the test cases in this window before editing the file. Wait for my confirmation before editing the file" and hit Enter.
Cursor will generate a few test cases and show them in the chat window.
It's a good idea to check that the test cases meet your expectations before they're written to the file. You can also ask Cursor to add new tests, or to change or remove any test you want. Once you're happy, ask it to update the file. Cursor will stringify the tests and write them to the scripts.postResponse key.
Accept the changes so that the file is updated.
Running the Tests
Go back to the Requestly window and open the Create issue request. Navigate to the Scripts > Post-response tab, and you should see the tests Cursor just generated!
Note: If the tests don't appear in Requestly, you may need to reload the workspace.
Click on Send to run the tests. You should see the tests pass. Now you’ve got tests running without writing a single line by hand.
Iterate and Improve
As with any AI tool, Cursor can make mistakes and may generate a faulty test. Additionally, your API can evolve over time, and your tests may need to be updated accordingly. Simply provide feedback to Cursor and ask it to update the tests. For example, in the tests Cursor generated for me, it included checks for various fields, such as id, number, and so on. I can ask it to only check for the title and body fields.
The updated tests now only check for title and body.
Similarly, if the AI generates a faulty output (for example, a malformed JSON), you can prompt it to fix the mistake, and repeat this process until you get what you want.
Scaling and Bulk Automation
You've seen how Cursor can help you generate and refine test cases. However, if you have a large number of endpoints, repeating this process for each one by hand isn't feasible. In that situation, you can use the Cursor CLI to generate tests in bulk. The CLI has the added benefit that you can hook it into a file watcher, like fswatch, or a Git pre-commit hook to regenerate tests automatically whenever a request JSON file changes. Let's see this in action.
After installing the Cursor CLI, run the following command in the project root:
cursor-agent -p "add tests to all the requestly files in requestly-demo/GitHub REST API/. Read instructions from .cursor/rules/test-rules.mdc"
Cursor will now go ahead and add tests to all the requests.
This approach has the disadvantage that you can't verify the tests before Cursor writes them to the files. So, be sure to check that the tests run properly before committing them.
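As mentioned earlier, you can also wire the CLI into a file watcher. A minimal sketch using fswatch (assuming fswatch and the Cursor CLI are installed and on your PATH) could look like this:

# Re-run the agent whenever a request file in the collection changes.
fswatch -o "requestly-demo/GitHub REST API" | while read -r _; do
  cursor-agent -p "add tests to all the requestly files in requestly-demo/GitHub REST API/. Read instructions from .cursor/rules/test-rules.mdc"
done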
Best Practices for AI-Generated API Testing
AI-assisted testing works best when you set it up for success. Cursor and Requestly make the workflow faster and safer, but the quality of your tests still depends on how you guide and maintain the process. Here are some best practices to keep your testing reliable over time:
Write Clear Prompts for Cursor
Cursor will only be as precise as the instructions you provide. Don’t just say “write a test”. Instead, specify the fields you care about, the types of responses you expect, and whether you want edge or negative cases included. For example:
"Generate tests for this request. Check for a 200 status, verify title and body fields exist, and add one negative test for unauthorized access."
The more context you give, the less cleanup you'll need later. You can also add instructions you find yourself repeating to the project rules so that they're applied every time.
Always Review and Validate AI Output
Even with good prompts, AI may generate redundant, flaky, or irrelevant tests. Treat Cursor’s output as a draft: review it, remove what you don’t need, and refine. Requestly makes this easy since tests are just JSON with embedded scripts - you can see exactly what changed before committing.
Use Requestly Mocks and Intercepts for Safer Testing
Not every test should hit the real API. For failure cases, like 500 errors or rate limits, use Requestly’s mocks and intercepts to simulate responses.
Keep Secrets Local
Never paste API tokens, passwords, or other secrets into Cursor prompts. Store them in Requestly environments and variables instead. This way, you can safely run AI-generated tests without exposing credentials. Use development credentials in the development environment and add the Requestly environments folder to .gitignore to avoid accidentally committing them.
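For example, instead of hard-coding the PAT, a request can reference the token variable in its Authorization header. The key/value field names below are an assumption; check how your Requestly version lays out the headers array:

"headers": [
  {
    "key": "Authorization",
    "value": "Bearer {{token}}"
  }
]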
Version Control Your Tests
Because Requestly stores collections and tests as JSON files, you can commit them directly to Git. This means your team can review, track history, and collaborate on test changes just as they do with application code. Each AI-generated test case can be peer-reviewed before merging — ensuring quality and accountability as your test suite scales.
With this setup, teams can save a huge chunk of testing time, especially when managing multi-endpoint APIs with frequent updates.
Iterate Instead of One-Shot Generation
Don’t expect perfect tests on the first try. Use Cursor iteratively: generate, review, adjust prompts, regenerate. Over time, you’ll build a solid suite of reusable tests tuned to your API’s quirks.
Combine AI Speed with Human Insight
AI is great at generating lots of cases quickly, but you know your API's edge cases, failure modes, and business logic better than any model. Utilize AI for coverage and scaffolding, then supplement with human-crafted tests for the scenarios that truly matter.
Conclusion
By combining Cursor and Requestly, you can cut down on repetitive test writing while improving API test coverage. Cursor helps you quickly generate comprehensive test cases, while Requestly ensures those tests run securely in a local-first environment.
The result: faster iteration, stronger reliability, and safer execution. If you’re looking to modernize your API testing workflow, give this combination a try - you’ll spend less time writing boilerplate tests and more time building what matters.





