PART 4
Here is the GitHub source code repo for this project: https://github.com/joegiglio/cursor2-kb
It’s been a while…
Last time we spoke, Cursor seemed to be breaking down and its index of the file structure had become corrupted. After waiting 23 days for a fix, I am happy to report that the problem finally does seem to be addressed! The UI is still quirky, and it slows down after a while with some type of memory leak that can be fixed with a restart. But at least it is now making changes to the correct files and I am no longer faced with a mysteriously corrupted file index.
While waiting for the fix, I noticed that many Cursor users were complaining about the instability of Cursor and/or Claude. Yes, it works sometimes, and it is impressive when it does, but it is a black box of moving parts, constant behind-the-scenes updates and inconsistent results.
I decided to put the Help Desk project to the side and see if I could build a Knowledge Base application instead. This should be a little simpler - no need for real-time updates or complex integration points.
I used the technology stack I am most familiar with - Python, Flask and Bootstrap, with a SQLite database. Everything was coming along swimmingly until I hit a snag with “database migrations”. As in the previous chapter, Claude generated constraint code that SQLite did not like, and it threw an error:
Exception during DB migration:
raise ValueError("Constraint must have a name")
ValueError: Constraint must have a name
That should have been easy enough for Claude to fix, but it started spiraling and I ended up rebuilding my database from scratch. I will blame this on my inexperience with debugging flask-migrate, but Claude was not able to clean up the mess without wiping the database and starting over.
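For anyone who hits the same wall: the usual fix, per the Flask-Migrate and Alembic docs, is to give SQLAlchemy a constraint naming convention and turn on batch mode so Alembic can rebuild SQLite tables. A minimal sketch of what that looks like in app.py (not the actual project code):

from flask import Flask
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import MetaData

# Name every constraint up front so Alembic can reference it later;
# unnamed constraints are what trigger "Constraint must have a name"
convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s",
}

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///kb.db"
db = SQLAlchemy(app, metadata=MetaData(naming_convention=convention))

# SQLite cannot ALTER constraints in place; batch mode recreates the table
migrate = Migrate(app, db, render_as_batch=True)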
After things settled down, I ended up with a mostly functional knowledge base system: an Admin panel, dark / light mode, search (just SQL queries under the hood) and a test data generator. Article editing is handled by Quill, an open source rich text editor with support for formatting and images.
I say “mostly functional” because I purposely did not complicate it with authentication or multi-tenancy. I kept the architecture simple, with basic Flask routes and CRUD operations. No API yet… Perhaps I will revisit these features in phase 2.
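To give a sense of how plain that architecture is, a typical route is little more than a query plus a template. A sketch (the names here are illustrative, not the actual project code):

from flask import render_template, request
# app and the Article model are defined earlier in app.py at this stage

@app.route("/search")
def search():
    # "Search" is just a LIKE query - no full-text engine
    q = request.args.get("q", "")
    results = Article.query.filter(Article.title.ilike(f"%{q}%")).all()
    return render_template("search.html", articles=results, query=q)

@app.route("/article/<int:article_id>")
def view_article(article_id):
    article = Article.query.get_or_404(article_id)
    return render_template("article.html", article=article)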
As I was adding features, I kept noticing that Claude would fix one thing but break another - just like a real developer. There was also CSS and JavaScript scattered all over the code. Sometimes adding new articles would work but editing would not, and vice versa. Stray article formatting code kept appearing here and there, and I had to ask Claude to clean it up. It was becoming whack-a-mole.
The project was getting complex enough that it needed test automation, which was briefly discussed in the last chapter. In most companies there is very little budget for QE, and most developers hate writing tests, so maybe AI can help?
I have a respectable amount of experience with Selenium and Cucumber, but it has been a while. Playwright has been gaining traction and supposedly has better handling for locators and retries, so I asked Claude to create some Playwright tests. The Playwright extensions for VS Code all seem to assume you are using JavaScript, with no direct support for Python, so I will need to run the tests from the terminal, like an animal.
[PROMPT]
I would like to add end to end browser tests to this project using Playwright and Python. I would like the tests to use fixtures to seed the necessary data and then delete the data at the end of each test run. The tests need to clean up after themselves and not leave test data behind. If you can configure it to setup a test database while running the tests that is even better. Please let me know which libraries to install, create tests for the top 10 scenarios by looking at the main features of the product and let me know how to run these tests from within Cursor.
[/PROMPT]
The first attempt was pretty good: it provided instructions on how to install the necessary libraries, the file structure for the tests seemed accurate, and it even added a test for responsiveness by changing the viewport size. Nice touch… But I did not like the initial set of tests it generated, so I asked for further refinement:
[PROMPT]
Authentication has not been implemented in this product yet. Please do not create tests for those features. Instead, focus on Admin functionality like creating, editing and deleting topics; adding, editing and deleting articles. Also, focus on end user functionality such as searching and viewing articles. Look at the code to figure out the top 10 features of the product and create tests for them.
[/PROMPT]
The next batch of tests looked a little better but still no Admin functionality.
[PROMPT]
Most of those tests look pretty good but I am not seeing anything for Admin functionality in the @admin.html file. We need to have tests for adding, editing, deleting topics and adding, editing and deleting articles.
[/PROMPT]
This time it looks like Claude got it right, creating code to test the Admin and end user scenarios described above.
These look like great starting points. Now let’s see if Claude’s execution instructions are correct. Well, unfortunately not…
[PROMPT]
I followed your instructions for running the tests and I am running into a couple of problems.
I opened a new terminal and ran the commands “FLASK_ENV=testing flask run” followed by “pytest tests/e2e/”. I see the following problems:
- When I navigate to the home page, I am seeing the same data I was seeing before. It did not seem to switch to the test database.
- An exception:
ImportError while loading conftest '/Users/joe/cursor2-kb/tests/e2e/conftest.py'.
tests/e2e/conftest.py:12: in <module>
    from models import Topic, Article
ModuleNotFoundError: No module named 'models'
[/PROMPT]
Claude created a models.py file, duplicating the code in the app.py file, and I questioned why…
The initial code for this application was all in the app.py file, including the models. That works fine, but it can leave you with an unwieldy single-file application. Best practice says to separate the models and routes from the config, and that is where Claude was taking me, even though it took a long-winded route:
[PROMPT]
The models.py file you are proposing looks like it is duplicating code from app.py. Would it make more sense to use the models in app.py instead?
[/PROMPT]
[RESPONSE]
Actually, it's the other way around - we should move the models out of app.py into models.py to follow the separation of concerns principle. This is a common pattern in Flask applications where:
app.py handles application configuration and initialization
models.py contains all database models
routes are often in separate blueprint files
[/RESPONSE]
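For context, the layout Claude was describing looks roughly like this (a minimal sketch using this project's model names; not the actual generated code):

# models.py - database models only
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Topic(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), nullable=False)

class Article(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200), nullable=False)
    topic_id = db.Column(db.Integer, db.ForeignKey("topic.id"))

# app.py - configuration and initialization only
from flask import Flask
from models import db

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///kb.db"
db.init_app(app)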
Fair enough. Let’s go with it!
Well, not so fast. Upon closer inspection…
[PROMPT]
I like where you are going with separating the models.py file but you also deleted all the routes from the app.py file. I do not see them in any other file. They are just gone…
[/PROMPT]
Now Claude pulled all the routes out of the app.py file and moved them into a separate blueprints file. It is making way too many architectural changes and I want it to revert… Beware the brilliant new developer who wants to rewrite everything! Although the suggested changes are considered best practice for a professional Flask codebase, I will not trust them until tests are in place so we can verify they are working properly.
[PROMPT]
I understand that using blueprints is the preferred method for professional level Flask code. However, I am not comfortable making such refactors right now. I am first trying to get end to end tests in place. Then we can look at the project structure. Please do not move the routes out of the app.py file.
[/PROMPT]
Cursor attempted to revert, but the merge UI was a sea of green and red and I was not comfortable with the change. New strategy: get some simple Playwright tests in place, then work on improving them incrementally. Maybe this will be better…
I stashed the proposed test list, since I liked where that was going, then closed the Cursor Composer window and started a new Composer, which I believe discards all of the previous Composer’s context.
Let’s try again with a simpler starting point. Iterative development is usually best anyway.
[PROMPT]
I would like to add end to end browser tests to this project using Playwright and Python. Please let me know which libraries to install, create tests for the top 10 scenarios by looking at the main features of the product and let me know how to run these tests from within Cursor.
[/PROMPT]
We had similar problems this time, with Claude wanting to test functionality that does not yet exist. Time to scold Claude!
[PROMPT]
I see you are creating tests for authentication functionality. Although we have a login page, it is only a placeholder and does not actually function. It does not even have fields to allow anyone to login. Please review the project code again and come up with the top 10 test scenarios for Playwright.
[/PROMPT]
This caused Claude to apologize and create a bunch of test scenarios, all centered on the Search functionality on the homepage. I was hoping not to hand-feed it the tests I wanted; I wanted it to find the features by looking at the code. No luck, so I went back with:
[PROMPT]
This looks better but is focused too much on search. We need tests for the following features: search, create topic, edit topic, delete topics, create article, edit article, delete article, view article and some responsive tests to make sure everything looks OK on different screen resolutions.
[/PROMPT]
We are starting to spiral already… It created the basic framework for some more tests, but it is making too many assumptions about data that will exist. Make no assumptions! This is why it is best to have a pre-seeded test database, but we will try to get there eventually. For now…
[PROMPT]
Some of these tests make assumptions that certain links will be valid and certain content will exist. This may not be the case if articles or topics are deleted. Please make each test self contained by creating the data it needs and then deleting it when the test completes.
[/PROMPT]
So now Claude is right back to creating fixtures anyway! I guess there is no way around them if you want to do it right…
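A self-contained test ends up looking something like this (a sketch - the route and locators are assumptions, not the project's actual markup):

from playwright.sync_api import Page, expect

def test_create_and_delete_topic(page: Page):
    topic_name = "Test Topic"  # a fixed string - more on why that is a problem below

    # Create the data this test needs...
    page.goto("/admin")
    page.get_by_label("Topic name").fill(topic_name)
    page.get_by_role("button", name="Add Topic").click()
    expect(page.get_by_text(topic_name)).to_be_visible()

    # ...and remove it before the test exits
    page.get_by_role("row", name=topic_name).get_by_role("button", name="Delete").click()
    expect(page.get_by_text(topic_name)).not_to_be_visible()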
Another problem: the tests use fixed strings like “Test Subject” and “Test Article”, which may clash if we run tests in parallel - a common approach to speed up execution. I wanted the tests to use randomized data, so I prompted with this:
[PROMPT]
Please use random data for the topic name, article name and article content. It would also be a good idea to add a test for foreign language characters to make sure they appear OK during browsing and searching.
[/PROMPT]
I have a feeling we are going to need a lot more tweaks but this is looking like a very reasonable start. Claude returned the following:
These changes were made:
Use Faker to generate random data in multiple languages
Add specific tests for multilingual content
Verify proper display of various character sets
Test search functionality with non-Latin characters
Verify no character encoding issues in display
Test content creation in multiple languages
The tests now verify that your application correctly handles:
Japanese (日本語)
Chinese (中文)
Korean (한국어)
Russian (Русский)
And regular English content
Each test creates its own random data and cleans up after itself, making the tests more reliable and independent.
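The Faker-based data generation looked roughly like this (a sketch; the locale codes are from the Faker docs, and the test body is simplified):

from faker import Faker

# Locales covering the languages Claude claimed to test
LOCALES = {
    "Japanese": "ja_JP",
    "Chinese": "zh_CN",
    "Korean": "ko_KR",
    "Russian": "ru_RU",
    "English": "en_US",
}

def random_article(locale="en_US"):
    fake = Faker(locale)
    # Random title and body so concurrent test runs never collide on content
    return {"title": fake.sentence(nb_words=4), "content": fake.paragraph(nb_sentences=3)}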
These are the kinds of tests that were created. Now to run them! First, I need to install the faker library, which Claude instructed me to do…
I was given a bunch of different flags and options for running the tests. From previous experience these instructions all look right, but I am going to use the pytest.ini file Claude created and pass in a few flags for verbose output, screenshots and slowed-down execution so I can follow along visually…
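The pytest.ini ended up along these lines (a sketch; base_url comes from the pytest-base-url plugin that pytest-playwright depends on, and --slowmo / --screenshot are pytest-playwright flags):

[pytest]
testpaths = tests/e2e
base_url = http://localhost:5000
addopts = -v --screenshot on --slowmo 500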
And now: pytest tests/e2e/ to actually run the tests!
All of the tests failed with this same error:
playwright._impl._errors.Error: Protocol error (Page.navigate): Cannot navigate to invalid URL
In pytest.ini, the base_url is set to http://localhost:5000 which looks right to me.
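In hindsight, here is how this is supposed to work: pytest-playwright feeds base_url from pytest.ini into the browser context, so tests can navigate with relative paths. If the ini file is never picked up, a relative goto is exactly what produces that “invalid URL” error. A sketch:

def test_homepage(page):
    # Resolves to http://localhost:5000/ only if base_url was actually loaded;
    # otherwise Playwright is asked to navigate to the literal string "/"
    page.goto("/")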
At the time I couldn’t figure out why it was failing, so I asked Claude. He made a few small tweaks and then suggested I move the pytest.ini file to the root directory of the project, out of the tests/e2e directory where he had originally placed it. I pushed back on this and he said it was fine where it is. OK then…
He also suggested I use absolute URLs in the tests instead of the URL set in the INI file and test fixtures. Wrong. I pushed back on that too, and he agreed and fixed it. Losing faith here…
With these changes I ran the tests again, and now I see the browser open and navigate to the base URL - the homepage of the knowledge base - but all the tests are still failing. I went a few more rounds with Claude and the root page issues look fixed, but now we are failing with:
FAILED tests/e2e/test_admin.py::test_create_topic[chromium] - playwright._impl._errors.TimeoutError: Timeout 30000ms exceeded.
Which looks like more of a locator issue.
[PROMPT]
Now all the admin tests in test_admin.py are failing with this error: admin_page.fill("#topic-title", data["title"]). That is not the correct locator. Please fix.
[/PROMPT]
Claude fixed one test but the others are still failing with locator errors. I got more explicit:
[PROMPT]
Other than test_create_topic, the rest of the tests in @test_admin.py fail with locator errors. Please review all the locators in the tests and correct them.
[/PROMPT]
Claude made a few changes and now we have another locator issue…
[PROMPT]
Now all tests are failing with:
tests/e2e/test_admin.py::test_create_topic[chromium] - AssertionError: Locator expected to be visible
[/PROMPT]
It is obvious we will be here for a while, going round and round while Claude makes the necessary fixes. What is also obvious, and unfortunate, is that Claude does not have proper context on the application and its page elements unless I am very explicit. Just asking Claude to “look at the code and create tests” is not good enough. You need to be very familiar with feature functionality and page structure so you can nurse it along.
Claude was helpful in setting up the test architecture, with pytest.ini and conftest.py, but even so it needed several tweaks before a basic test could run without an immediate failure.
This project uses Bootstrap on the front end, along with JavaScript confirmation and modal dialogs. Claude has a very difficult time with these when there are multiple modals in the source code but only one is visible - a common design pattern. After several iterations, and guiding it with pseudo-code, we eventually came to an understanding and a passing test.
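For reference, the standard approach here is to scope locators to the visible modal rather than the first match in the DOM. A sketch, assuming Bootstrap's default class names:

from playwright.sync_api import Page, expect

def confirm_delete(page: Page):
    # Bootstrap keeps every modal in the DOM; only the open one carries .show
    modal = page.locator(".modal.show")
    expect(modal).to_be_visible()
    modal.get_by_role("button", name="Delete").click()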
Throughout these tests, Claude made many assumptions about the names of locators, most of which did not exist anywhere in the source code. Only after I explicitly told it to look in certain files was it able to clean things up - but why make assumptions at all? Perhaps I did not feed it the correct files to begin with? I am still not sure how all of that is supposed to work.
Sometimes it seems to have access to the entire project, while other times I need to be much more explicit. If it needs access to a file, it should ask and not just create bullshit code. Remember, each back and forth costs you API credits.
Besides lots of test failures, I am also having issues with screenshots not being saved, even though I am passing the flag in the pytest.ini file. Claude offered me a few permutations, but none of them fixed the problem. I looked at the Playwright documentation and still found no resolution. This seems like another one of those issues that “should be easy to fix”, yet you can easily burn a dozen Claude requests and get nowhere quickly. I will move on for now, using a workaround that Claude suggested.
After spending WAY TOO much time making tweaks with Claude, I now have only two tests passing, plus a third that starts out as PASS but ultimately ends up as FAIL due to some conflict between the test code and the fixture code. It is becoming exhausting to debug and I feel like I am having circular conversations. I have gotten to the point where I am just blindly merging suggested code changes and hoping for a miracle.
Earlier in this chapter, my intention was to build a test harness that seeded and removed test data, along with reusable fixtures and functions to keep the data and test code clean. Those intentions are long out the window; now I just want the tests to pass, and I will try to clean them up later. Let me know if any of that sounds familiar…
Even something seemingly as simple as maximizing the browser window on launch is proving way too difficult. Claude has made attempts with Python and JavaScript and it just doesn’t work. He doesn’t know why, and although the geniuses on Stack Overflow have answered that question several times, their suggestions do not work either. My guess is there is some code conflict among the files, but I don’t know the code well enough to figure it out right now. Where is AI when you need it!?
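For the record, the fix Stack Overflow usually proposes is overriding two pytest-playwright fixtures in conftest.py. It looks right on paper, even though it did not take effect in this project:

import pytest

@pytest.fixture(scope="session")
def browser_type_launch_args(browser_type_launch_args):
    # Chromium flag to open the window maximized
    return {**browser_type_launch_args, "args": ["--start-maximized"]}

@pytest.fixture(scope="session")
def browser_context_args(browser_context_args):
    # Drop the fixed viewport so the page follows the real window size
    return {**browser_context_args, "no_viewport": True}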
I found that some of the generated test code had more than one expect/assert per test. I believe it is best practice to have only ONE assert per test, but I know that is not always realistic, especially when the tests are new and you don’t trust them yet.
I found myself being a “prompt engineer”, trying to figure out how to nudge things in the right direction. Since I did not write any of the product code myself, I was not familiar with its inner workings and was hoping Claude could pick it up and run with it. After all, he wrote it - but he seems to have a short memory. He gives you enough to get going, but certainly not enough to ask once and collect the money.
What would make life easier is having APIs to call for CRUD functions that create and delete data, leaving the end to end tests to test only their intended functionality. The shortcomings and missing features of this test project may sound all too familiar if you have ever written automated tests for a software application: you often end up working around product shortcomings before you actually get around to testing product functionality. You may recall way back in the first chapter that I intended to build the product API-first, and that quickly spiraled out of control. Absent APIs, perhaps direct SQL calls would have made more sense for setup and teardown.
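That last idea, expressed as a pytest fixture, would look something like this (a sketch against the app's models, reusing the names from the earlier sketches; untested):

import pytest
from app import app
from models import db, Topic

@pytest.fixture
def seeded_topic():
    # Seed at the database layer so the browser test only exercises the browser
    with app.app_context():
        topic = Topic(name="Seeded Topic")
        db.session.add(topic)
        db.session.commit()
        yield topic
        # Teardown: remove the seeded row so nothing leaks between tests
        db.session.delete(topic)
        db.session.commit()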
If you have automated tests working in another project and are familiar with the code and page structure of a new project, copy/paste/tweak will likely get you further, faster, than the Claude-plus-Playwright approach outlined here. At least then you will have some familiarity with the code and can tweak it with confidence. As of now (November 2024), I find asking Claude to generate end to end tests with Playwright to be a painful exercise.
ABOUT THE AUTHOR
Joe Giglio is a 25+ year veteran of the software industry. In his career, he has worked for a local dial-up ISP, startups of all sizes and a well-established, slow-moving enterprise. There were important lessons learned at every stop.
His recent projects include:
Remote work training and consulting: chiefremoteofficer.com
Kindle book "Making Remote Work, Work For You"
Remote company rankings. Think "Glassdoor for remote companies": remotescorecard.com
Joe’s guiding principles are: Remote first, quality driven, customer champion and lifelong learning. He was an introvert and practiced social distancing long before it was cool!
He currently resides in North Carolina and is always interested in new opportunities, meeting quality people and learning about exciting projects.
Contact Joe Giglio: joe -AT- joegiglio.org | Twitter | LinkedIn