TL;DR: an overview of the basic shopping cart that you can explore, automate through the UI or REST API, and perform security testing on.
The Shopping Cart Application offers a lot of possibilities for testing due to its multi-state, multi-page nature. With the added bonus of a REST API documented with an OpenAPI spec, there are even more ways to interact with it.
Overview Video
The video provides an overview of the shopping cart:
Shopping Carts for Practicing
This is not the first practice Shopping Cart application. The ones I know of:
- SwagLabs, the Saucelabs demo application.
- Toolshop, which has multiple versions.
- Restful Booker, which could be classed as a shopping cart.
- and my Basic Shopping Cart.
SwagLabs is pure JavaScript, so there are no HTTP calls, but you can explore and test its interaction with local browser storage. The various users you can log in with change the display, so you can also use it for visual testing and admin access.
Toolshop, Restful Booker and my Shopping Cart all have APIs, and the data is cleared regularly. I think Restful Booker clears data every 15 minutes. The Shopping Cart clears data on a rolling basis by limiting the amount stored, so if there are a lot of users at any one time (about 200+), you might notice your old orders clearing from the backend.
When I was designing the Basic Shopping Cart I very deliberately made sure there was no possible way of getting any PI data into the system. This means the checkout process does not ask for email addresses, credit card details, or even passwords. The users are ‘canned’, with pre-defined passwords. As a result, you can perform security testing without any issues.
Really, you shouldn’t add any PI data to any testing practice site, but when applications use forms, auto-complete might be switched on and your details might be auto-populated into the form.
I do like the other Shopping Carts and have used all of them when practicing my testing.
Why A Shopping Cart?
Shopping Cart applications offer a lot of scope.
- There tend to be multiple pages, offering navigation scope.
- Pagination, if there are enough products.
- API and server-side interaction.
- The checkout process has state, tracked locally via cookies or browser storage, and possibly server side as well. Having state in two places offers scope for state moving out of sync, creating mismatches between server and client.
- Usually a registration or login process, which also lends itself to security testing.
- They tend to be content rich.
Content rich applications are interesting. The first time we automate them we tend to hard-code product names and IDs. Then we realise there is usually a CMS (Content Management System) and users can edit content, which then causes our automated execution to fail.
This might lead us to using the API and UI as a consistency oracle for content. Does the API serve data consistently? Is the data rendered correctly? So we don’t automate or test exclusively through the UI; we use the UI and API in combination when we want to assert on data conditions.
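That combination can be sketched as a small consistency check. This is an illustrative sketch, not the cart’s actual code: the `/api/products` path, the `/products` page URL, and the `name` field are assumptions.

```javascript
// Sketch of using the API as a consistency oracle for the UI:
// every product name the API returns should appear in the rendered page.
// Endpoint paths and the JSON shape are assumptions for illustration.
function missingFromPage(apiProducts, pageHtml) {
  return apiProducts
    .map((product) => product.name)
    .filter((name) => !pageHtml.includes(name));
}

async function checkProductListConsistency(baseUrl) {
  // assumed endpoints: JSON product list, and the HTML product page
  const apiProducts = await (await fetch(`${baseUrl}/api/products`)).json();
  const pageHtml = await (await fetch(`${baseUrl}/products`)).text();
  return missingFromPage(apiProducts, pageHtml); // empty array == consistent
}
```

The pure `missingFromPage` check can also be reused inside UI automation, asserting against the DOM rather than raw HTML.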
Creation of the Shopping Cart
To build the application I started API first. The API was completely hand coded. This meant that I could test the flows and the functionality before spending any time on the UI. I was able to experiment with various authentication flows and data representations until I had one that I thought was rich enough and removed any PI risks.
Once I had the backend covered I used OpenCode to incrementally add basic REST API automated coverage, and then expanded the coverage.
Front End Creation
I wanted the front-end to be simple JavaScript, HTML and CSS, avoiding the use of frameworks.
My Test Applications are designed to be entry points. They will be challenging enough naturally, so I didn’t want to clutter the learning effort for people practicing by using React.
The Front End was created by prompting through OpenCode, and I would then amend and refactor where necessary. This allowed me to create the front-end quickly. Because I prompt in very small increments, the code is more closely aligned with my style of coding, which makes it easier for me to amend in the future.
The models used were spread across free models from OpenCode Zen.
- Grok Code Fast 1
Or I used Kat Coder Pro (free) on OpenRouter.
I try to use a mix of models in the CLI, and then a different set of models in the IDE. In the IDE I have Continue configured to use OpenRouter models.
Creating Content
All the content was generated via AI tooling.
I prompted GPT-OSS-20B via OpenRouter to create 100 ’thing’ name variations. I fed it the sample ’thingy’ words and adjectives. I could have varied this more, and I could easily have used a normal combinatorial algorithm instead, but the AI interface was fast enough.
With a set of 100 names, I then asked for one-line descriptions using the same process. They are all output as JSON to make it easy for me to amend in the code.
Since this data is ‘content’ I can vary it over time. For the first release I just need 100 different products to add to the product list.
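The name-generation step above can be sketched as a small script against OpenRouter’s chat completions endpoint. The model slug, word lists, and prompt wording here are illustrative, not the exact prompts I used; set `OPENROUTER_API_KEY` in the environment before running.

```javascript
// Sketch of generating product names via OpenRouter's chat completions API.
// Prompt wording and seed words are illustrative assumptions.
function buildNamePrompt(adjectives, things, count) {
  return `Create ${count} product names as a JSON array of strings. ` +
    `Combine adjectives like ${adjectives.join(", ")} ` +
    `with things like ${things.join(", ")}.`;
}

async function generateNames(prompt) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-oss-20b",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // expected: a JSON array of names
}

// Example usage:
// generateNames(buildNamePrompt(["Pocket", "Epic"], ["Thingamajig"], 100))
//   .then(console.log);
```

The same pattern, with a different prompt, produces the one-line descriptions.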
Creating Product Images
Generating the images was a little more interesting.
I am deliberately trying to keep my AI cost as low as possible to see how far I can push my use of AI with minimal cost.
My biggest cost on OpenRouter was using Nano Banana to create 2 experimental images when it was first released. The Nano Banana images cost 50c each and didn’t end up the way I wanted. This was not an experiment I wanted to repeat with the product images: I’d need at least 100, and I might want to regenerate them if they didn’t work out.
First I tried image generation models through a local install of localai.io, but I don’t think my machine is powerful enough. The process was slow and the output was basically a random PNG, completely unsuitable.
Rather than spend another couple of hours experimenting to improve this, I decided to try one of the image models on OpenRouter. These are easy to automate, but past experience made me hesitant. However, while looking at the costs I noticed that the image pricing was quite cheap for one of the models, though this was only shown on the comparison page.
GPT 5 Image Mini was estimated at $0.002 per K generated.
I didn’t expect to have good results with this but I tried it through a simple prompt and was surprised at the image quality.
It took me about 20 minutes to write code to call the API and experiment with prompts. This led to some good random product images and they were about $0.007 per image, so I could generate 100 images for about $0.70.
I used the product descriptions generated earlier and fed them through into OpenRouter with my script to generate the images you can see in the product pages.
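A script along these lines can drive the image generation. This is a hedged sketch: the `openai/gpt-5-image-mini` slug, the `modalities` field, and the `message.images` response shape are assumptions based on OpenRouter’s chat-style image generation, so check the current OpenRouter docs before relying on them.

```javascript
// Sketch of generating one product image per description via OpenRouter.
// The prompt wording is illustrative; the response shape is an assumption.
function imagePrompt(productName, description) {
  return `Product photo for an online shop: ${productName}. ` +
    `${description}. Plain background, single object, no text.`;
}

async function generateImage(prompt) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-5-image-mini", // assumed model slug
      messages: [{ role: "user", content: prompt }],
      modalities: ["image", "text"],    // request an image back
    }),
  });
  const data = await res.json();
  // assumed: first generated image returned as a data: URL
  return data.choices[0].message.images[0].image_url.url;
}
```

Looping this over the 100 descriptions and writing the decoded data URLs to disk gives the product image set.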
Generating Image Descriptions
Rather than use Lorem Ipsum I wanted to generate fanciful and long descriptions for each of the products.
I wanted to use AI to generate a fuller paragraph style description of each product.
I found that the llava:7b model runs nicely on Ollama locally and is very small. It has vision capabilities, so it can read images.
https://ollama.com/library/llava:7b
It only consumes about 5 GB when running, so it fits comfortably in my GPU memory.
I used this via API calls to describe each of the images in turn. I was very surprised at the quality of the descriptions when running locally.
This worked out well enough for my experiment and in total cost around $0.70 to generate and describe 100 images.
I thought that 100 was enough to allow pagination experiments and build on over time.
In total, doing all this programmatically took about an hour to write the code and about 4 hours to run.
I think it would probably have taken me longer to search the web and find a tool to do all this, if a single such tool even existed at all.
Tool Vendors Take Note: it is becoming easier and easier for people to create their own tooling, cheaply and with the exact functionality they need.
Automating the Web UI
I used OpenCode to generate the Page Object Models and an initial set of coverage, I then amended and refactored to expand the coverage as I required.
This has become my normal approach to generating automated execution code now. This is also different from the way I code manually.
When coding manually:
- I write the test first,
- and grow the Page Object Model organically through refactoring,
- this way I only create the abstraction code that I need in the test.
When coding with AI, I reverse this:
- I generate the Page Object
- I give OpenCode access to the Chrome Dev Tools MCP so that it can read the DOM and it generates accurate locators. Sometimes I refine them, but they are usually good enough.
- During the Page Object creation, in the same prompt I ask for
- “A set of basic tests which use the Page Object Model methods to cover the basic functionality. Do not test the Page Object, use the Page Object to avoid putting any locators for ‘driver.’ methods in the test code.”.
- I then review and refactor.
- Then I iteratively expand with small prompts, targeting specific coverage with each prompt.
- I’ve found this works well, avoids rework and is fast.
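The Page Object style described above can be sketched as follows. The locators and the `driver` interface are hypothetical, not the cart’s actual automation code; the point is that the test uses only Page Object methods, never raw locators.

```javascript
// Minimal Page Object sketch. The "driver" is any object exposing
// click/text methods (a WebDriver wrapper in real runs, a stub in unit tests).
class ProductListPage {
  constructor(driver) {
    this.driver = driver;
  }
  addToCartSelector(productId) {
    // hypothetical locator for illustration
    return `[data-product-id="${productId}"] button.add-to-cart`;
  }
  async addToCart(productId) {
    await this.driver.click(this.addToCartSelector(productId));
  }
  async cartCount() {
    return Number(await this.driver.text("#cart-count"));
  }
}

// A basic test uses only the Page Object methods, keeping locators
// out of the test code entirely.
async function basicAddToCartTest(driver) {
  const page = new ProductListPage(driver);
  await page.addToCart("product-1");
  return (await page.cartCount()) === 1;
}
```

Because the driver is injected, the abstraction can be reviewed and refactored without running a browser at all.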
Automating the REST API
I prompted OpenCode to read the routing code that I had written in Java for the API and create basic coverage for each of the API endpoints in turn. This led to the creation of payload objects and incrementally built up an API abstraction.
By targeting each endpoint individually I could gradually grow the code, keep control of it, and make sure it used my style of coding and automating and didn’t need a lot of refactoring later.
I then prompted to recreate the UI automated execution flows using only the API abstractions that I had just built up.
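The resulting abstraction layer looks roughly like this sketch: payload builders plus one method per endpoint. The paths and field names are illustrative, not the cart’s real contract; its OpenAPI spec is the authoritative source for that.

```javascript
// Sketch of an API abstraction: a payload object builder plus a client
// class with one method per endpoint. Paths and fields are assumptions.
function addItemPayload(productId, quantity) {
  return { productId, quantity };
}

class CartApi {
  constructor(baseUrl, token) {
    this.baseUrl = baseUrl;
    this.token = token;
  }
  async addItem(productId, quantity) {
    const res = await fetch(`${this.baseUrl}/api/cart/items`, {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${this.token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(addItemPayload(productId, quantity)),
    });
    return res.json();
  }
}
```

Once each endpoint has a method like this, the UI flows can be replayed purely through the abstraction, which is what the re-creation prompt produced.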
Testing
The general rule we are told is that you test first, then automate. You can see from the above description that I do not abide by this rule.
I used the automated coverage to handle the basic flows and flush out any simple errors. This creates an automated safety net that can help me when I’m fixing any issues that I find from testing. Without the basic automated coverage I might introduce more errors when fixing bugs than I remove.
Testing was performed via exploration using Chrome, heavily reliant on the dev tools and viewing requests in BurpSuite.
I then moved on to testing the REST API directly.
This helps me build up a list of issues, some of which I fix, but since it is a testing app, some of them I don’t.
The automated coverage tells me that it can work. My testing shows me where it fails to work.
Security Testing
I coded in some security issues, so I made sure to test these.
I used Bruno as my API client to explore all these issues; it was the only tool I used during this process.
Some of the issues I removed because they would be annoying to the user if exploited while they were trying to test the application, so the remaining ones are primarily information leakage rather than amendment. But… it might be possible if you test deep enough.
Performance Testing
I didn’t want to go live with the cart unless I could convince myself that it could handle multiple users.
I used OpenCode to convert my API automated execution scenarios into K6 scripted scenarios such that I could scale Virtual Users and hit the application functionality.
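The conversion produces scripts along these lines. This is a minimal k6 sketch (run with `k6 run script.js`, under the k6 runtime rather than node); the URL, VU count, and threshold are illustrative, not the values I used.

```javascript
// Minimal k6 scenario sketch: scale virtual users against one API flow.
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 20,                 // scale Virtual Users here
  duration: "1m",
  thresholds: { http_req_duration: ["p(95)<500"] }, // illustrative SLO
};

export default function () {
  // illustrative target URL, not the cart's real address
  const res = http.get("https://example.com/api/products");
  check(res, { "products returned": (r) => r.status === 200 });
  sleep(1);
}
```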
I do not recommend the Shopping Cart as a target for performance testing, that isn’t what it is designed for.
Butch Mayhew maintains a list of sites designed for practicing performance testing against in his Awesome Sites To Test On. Two are from Blazemeter and one is from OctoPerf.
This took me about an hour in total, because I had to fix a couple of errors in the generated k6 script, but I think using OpenCode saved me a couple of hours since I’m not a K6 expert and have to re-learn it every time I use it.