Previously I had to wait about 3 minutes for my tests to finish; now the same suite clocks in under 20 seconds. Let's see what the bottleneck was and how I vibe coded the re-design. Spoiler: it was painful running in circles, and although AI gave me some valuable insights, I had to be the Deus ex Machina (pun intended).
I guess I don't have to explain the importance of writing automated tests to protect applications from introducing unnecessary errors. It is never 100%, but as time goes on, test suites grow larger, cover more situations and, above all, prevent bugs from re-appearing. I would say it is rather hard to develop a habit of writing tests regularly, especially in personal projects, but current tooling focuses on making it easy. Setting up Vitest to test Nuxt applications is quick and you're good to go.
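For unit-style tests, the official @nuxt/test-utils setup really is only a few lines - a minimal sketch based on the docs, not my module's actual config (my E2E setup, shown later, looks different):

// vitest.config.ts - minimal sketch for unit-testing a Nuxt app with Vitest
import { defineVitestConfig } from '@nuxt/test-utils/config'

export default defineVitestConfig({
  test: {
    // run tests inside a Nuxt runtime environment (happy-dom or jsdom must be installed)
    environment: 'nuxt',
  },
})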
You can develop a new feature, write new tests for it, and run the suite to check if:
- New tests are passing - so the new feature works as intended
- Old tests are still passing - so you didn't accidentally break any seemingly unrelated things
Then you can commit with greater confidence.
But there is a catch - more tests run longer, and what used to take seconds can stretch into minutes. So you wait minutes, then you come back and see the test suite failed. You fix the error, run the tests again... and wait minutes again. Do this a couple of times and you suddenly don't like the whole idea that much.
Framing the problem
The test suite for my Nuxt module for Neon database connection reached 180+ seconds of runtime. This didn't feel right anymore. The tests were testing, but the inefficiency hit my DevEx hard. It was the turning point when premature optimization turned into a required one.
Because it is 2025, I invited AI to check the situation and brainstorm possible solutions.
Copilot quickly pointed out that I was using a separate Nuxt app instance for each test file. At first it looked like a good decision to separate concerns - testing SELECT operations in one app, INSERTs in another. It should help keep things small and organized. But booting them up takes time, and all tests had to wait until their app was rendered.
To make the problem worse, I had decided to run my tests sequentially. Because if something messes up and the module cannot even connect to the database, why test the functions? Or if SELECTs are failing due to an error in constructing the SQL query, why try INSERTs that would also inevitably fail? This stacks the waiting times up. And even though everything eventually works, I ended up waiting those 3 minutes.
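For reference, forcing Vitest to run test files one after another is just a config switch. A sketch of one way to do it - the exact option used in my repo may differ:

// vitest.config.ts - run test files sequentially instead of in parallel workers (sketch)
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // don't spread test files across parallel workers
    fileParallelism: false,
  },
})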
First iteration - double speed
The fix was apparent - merge the separate test apps into one. This was quite easy to do, because I designed my tests so that each has its dedicated Nuxt page (meaning a separate URL) where it connects, performs a SQL action bound to a button and checks the resulting HTML for expected values. So I really just had to squash multiple /app/pages directories into one and make some small adjustments in the app.vue file.
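To give you an idea of the pattern, each test roughly did something like this - the route name and selectors below are illustrative, not the actual module code:

// sketch of the per-page test pattern - illustrative names only
import { describe, expect, it } from 'vitest'
import { createPage, url } from '@nuxt/test-utils/e2e'

describe('SELECT', () => {
  it('renders the queried rows', async () => {
    const page = await createPage()
    // each test has its own dedicated page = its own URL
    await page.goto(url('/select'), { waitUntil: 'hydration' })
    // the SQL action is bound to a button on that page
    await page.click('#run-select')
    // the result is rendered into the HTML and checked for expected values
    const result = page.locator('#result')
    await result.waitFor()
    expect(await result.textContent()).toContain('expected value')
  })
})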
But it wasn't enough, because my E2E test files would still call
await setup({
rootDir: fileURLToPath(new URL('./neon-test-app', import.meta.url)),
})
via a function from the @nuxt/test-utils/e2e package. This would still create and bind a new instance of the Nuxt app for each test file. So it would still take a lot of time.
To be honest, there was already an observable speed-up. I guess some cached pieces could be re-used, and mounting the same app 5 times was about twice as fast as mounting 5 different apps, but it was still very sub-optimal.
What I really needed was for every test case calling
const page = await createPage()
to connect to the same instance of the underlying demo application. This would start only once, and then all tests would hit it in a snap.
Second iteration - 10x speed, but...
So how to do that? I didn't know. So I asked my smarter/dumber electronic assistant. It came up with a very unorthodox solution. If you want to follow along, the result was committed HERE. I will now sum up the most important pieces.
In vitest.config.ts, a new setting was added:
globalSetup: ['./node_modules/@nuxt/test-utils/dist/runtime/global-setup.mjs'],
It took me a while to understand what it was even intended to do. I challenged my Copilot's reasoning and cross-checked with my standalone ChatGPT 5.2 Plus. It confirmed the idea. Although it admitted it didn't find any article or discussion about it, it said that judging from the code, it assumed it would work. The justification made sense. I had nothing to lose. So I believed.
In short, running this file in globalSetup before any other Vitest actions start should "magically" grant me a mounted app in an emulated browser. To tell it which app I want to mount, I should feed it the NUXT_TEST_OPTIONS environment variable, which gets swallowed during the process.
import { resolve } from 'node:path'
import { fileURLToPath } from 'node:url'

const rootDir = resolve(
  fileURLToPath(new URL('.', import.meta.url)),
  'test/neon-test-app',
)

// Used by @nuxt/test-utils/runtime/global-setup
process.env.NUXT_TEST_OPTIONS = JSON.stringify({
  // path to neon-test-app
  rootDir,
  // don't create a Playwright browser in globalSetup
  browser: false,
})
The second integral part was e2e.setup.ts - a file with actions to be run by Vitest before every test file. In a beforeAll hook it should construct a virtual browser instance - but only the first time, when none exists yet:
const ctx = useTestContext()
if (!ctx.browser) {
  await createBrowser()
}
And just like that, this would grant all test files access to a shared context with a prepared virtual browser and the mounted Nuxt test app. And all tests should just call:
const page = await createPage()
and start navigating to the desired routes and testing things.
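For the record, both pieces have to be wired together in vitest.config.ts - roughly like this. The setupFiles path is my reconstruction; the commit linked above has the real one:

// vitest.config.ts - how iteration #2 was wired together (setupFiles path is a guess)
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // boots the shared Nuxt test app once, before any test file runs
    globalSetup: ['./node_modules/@nuxt/test-utils/dist/runtime/global-setup.mjs'],
    // runs before every test file; creates the shared browser only if it's missing
    setupFiles: ['./test/e2e.setup.ts'],
  },
})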
I tried to follow my usual sceptical approach to AI and understand the intent behind the code it spat out - at least briefly. I couldn't grasp all the details, but overall it appeared to make sense. And when I tried it - it worked!
Well, not on the first try. We had to tweak a couple of things, but in the end the suite was running. And boy, it was fast! Less than 20 seconds and all the tests had passed. If I deliberately made a test fail to check whether the suite actually runs and tests something, it failed just as expected. My job here was done. Commit, close the issue, go to sleep with a good feeling.
Except there was a "tiny" detail I missed at the time. The Vitest suite was running fine, but once the process finished, the output just disappeared from the terminal! There was no way to recover the result. Unless you were watching it live, you couldn't tell whether it passed or not. And if it failed, there was no way to see what exactly went wrong.
My tests essentially became useless 😨
Third iteration - to hell and back
The reason I missed it while working on it is interesting in itself.
Because we tried a number of things with my Copilot during the process, at some point we introduced the $env:DEBUG='@nuxt/test-utils*' setting into my current terminal session. Thanks to this extra environment variable, the program started to log differently - extensively and without interruption. All the log messages were prefixed with [source] tags, but I barely noticed that. I saw the Vitest output and I was happy.
But the default behaviour with globalSetup is that the terminal is first occupied by the test Nuxt app build and then automatically switched to an alternate screen, which is visible only as long as the associated process is running. Then it is just thrown away and replaced by the former (empty) terminal.
I didn't know that until I had spent HOURS on useless attempts, adding config here and there. AI was spitting out fabricated theories, and I was struggling to implement them, failing again and again.
After some time we at least isolated the DEBUG option as a possible workaround. But it also came with very noisy and verbose output. Filtering it out was possible, but platform-dependent. Or I could have waited for 20 seconds of silence and then had the filtered result processed by Node. Also not good.
I tried to bargain with AI until I finally - for the first time, I think - managed to force ChatGPT into saying: "No, this is not possible." More precisely, it wrote: "This is the moment where I need to be very explicit and honest, because you’ve now hit a hard boundary, not a missing trick." which I find hilarious now, but I didn't laugh back then.
The harsh truth hit me that evening. I had wasted hours chasing my own shadow, with AI happily assisting me and encouraging me to continue.
I had fast but poorly observable tests. Either no output at all, or output littered with tons of unrelated debug messages, or platform-dependent script commands to polish it.
Fourth iteration - deus ex homine
In my last article I went to sleep on the problem, and it didn't help. But this time it did. When I woke up this morning, prepared to finalize the ugly but more-or-less working solution, I suddenly had an out-of-the-box idea.
What if I stop trying to abuse Vitest into doing something it is not designed for?
What if I just aggregate my test files into one big suite?
That way it would only need to build and mount the test app once, and then it would run fast. Yes, there would be a little trade-off - one big test file with a lot of test cases. But to be honest, this doesn't really matter from the runtime perspective. And I can still keep the source code isolated by having separate definition files that are dynamically imported in the actual single e2e.test.ts file, the only one executed by Vitest.
And just like that, I threw away most of the "clever hacks" from solution #3 and came up with something else. Even though my AI assistants failed me yesterday, I indeed used them again to help me flesh out the new solution. Now that I was back with a clear intention in mind, I got helpful output and tips again. The final solution is HERE.
The globalSetup call and even the setupFiles were removed from vitest.config.ts. All I need now is a nice, small and clean e2e.test.ts file:
import { fileURLToPath } from 'node:url'
import { setup } from '@nuxt/test-utils/e2e'
// only setup nuxt-test-app ONCE
await setup({
  rootDir: fileURLToPath(new URL('../neon-test-app', import.meta.url)),
  // Playwright browser is not required for now
  browser: false,
})
// import and run E2E test suites AFTER the test app is ready
await import('../neon-test-suites/01-basic')
await import('../neon-test-suites/02-select')
await import('../neon-test-suites/03-insert')
await import('../neon-test-suites/04-update')
await import('../neon-test-suites/05-delete')
It prepares the Nuxt test app via the dedicated setup function and then just awaits and executes one test definition file after another.
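Each suite file contains ordinary Vitest definitions at the top level, so the dynamic import is all it takes to register them. A rough sketch of what one of them could look like - the assertion and the fetched route are illustrative, not the actual suite code:

// neon-test-suites/02-select.ts - illustrative structure, not the actual assertions
import { describe, expect, it } from 'vitest'
import { $fetch } from '@nuxt/test-utils/e2e'

describe('SELECT operations', () => {
  it('renders the expected rows on the dedicated select page', async () => {
    // fetch the server-rendered HTML of the test page and check it for expected values
    const html = await $fetch('/select')
    expect(html).toContain('expected value')
  })
})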
And that's it. No hacks, no mess in the console. It works like a charm and it is still fast. 20 seconds and you're done. And the output stays in the console.
To infinity and beyond
So now you know how I wrestled with Vitest and Nuxt and won. If you'd like to know more details about the final implementation, or you have objections and ideas for improvement, let me know in the comments.
You've also seen how easily AI tools can still lead you astray. In their defence, in this case both Copilot and ChatGPT pretty much resembled an ordinary developer desperately throwing out solutions that "should work" without seeing the whole picture. Where they failed and I eventually succeeded was the ability to step back and re-think the whole situation. I believe this is still the gap between human developers and artificial pseudo-intelligence.
Lastly, this wasn't meant to be an anti-AI rant. AI helps me on a daily basis, and I really enjoy getting further and further with its assistance. I just think there are still limits we need to be aware of. It was another nice lesson for me, and I hope you found it somewhat interesting too.
Looking forward to your feedback and questions.

