👋 The Backstory
I want to share a recent "experimental project" of mine: HardwareTest.org.
The motivation was simple: I bought some new peripherals and wanted to test them. But I was fed up with the existing tools—screens full of ads, outdated UIs, or sketchy .exe files that I didn't want to download.
As a self-described "average developer," I recently got brainwashed by the concept of "Vibe Coding" (coding by natural language/AI intuition). I thought, "AI is so strong now. I'll just write the prompts, let the AI write the code, and I'll be done in minutes, right?"
Spoiler Alert: I was too naive. 😂
While AI absolutely lowered the barrier to entry and boosted my speed by 10x, taking a tool from "it works" to "it feels good to use" was full of hidden traps.
🚧 The Real Challenges
Here is a breakdown of the actual struggles I faced while pair-programming with AI:
The Tooling Chaos
My workflow was a bit of a mess. I started with Antigravity (it designed the initial UI), but ran out of credits. I switched to Codex to finish the logic. For the blog content, I used Gemini, but integrating that content back into the project via Codex resulted in a formatting nightmare. It was a lot of back-and-forth "fixing" what the AI broke.
Browser Limitations vs. Physics (The Keyboard Test)
I thought testing Keyboard Polling Rate would be simple: just tell the AI to "write an event listener."
The Reality: I discovered that the browser's Event Loop often can't even keep up with a 1000Hz gaming keyboard. The raw data coming out was jittery and unusable. The Fix: I was forced into dozens of rounds of conversation with the AI. We had to optimize the algorithm, add debounce logic, and implement sliding averages just to get a relatively accurate "Real-time Hz Dashboard" on the web.
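For the curious, here is a minimal sketch of that sliding-average idea. It is not the site's exact code, and WINDOW_SIZE and MAX_GAP_MS are illustrative values I picked for this example:

```javascript
// Estimate keyboard event rate from inter-event gaps (illustrative sketch).
const WINDOW_SIZE = 50;   // how many deltas the sliding average keeps
const MAX_GAP_MS = 100;   // debounce: ignore pauses between key bursts
const deltas = [];
let lastStamp = null;

window.addEventListener('keydown', (event) => {
  if (event.repeat) return; // skip OS auto-repeat, it isn't real polling
  if (lastStamp !== null) {
    const delta = event.timeStamp - lastStamp;
    if (delta > 0 && delta < MAX_GAP_MS) {
      deltas.push(delta);
      if (deltas.length > WINDOW_SIZE) deltas.shift(); // slide the window
    }
  }
  lastStamp = event.timeStamp;
});

function currentHz() {
  if (deltas.length === 0) return 0;
  const avg = deltas.reduce((a, b) => a + b, 0) / deltas.length;
  return Math.round(1000 / avg); // ms per event -> events per second
}
```

Even then, the numbers wobble, because keydown events are dispatched by the browser's event loop, not by the USB driver; averaging over a window is what makes the dashboard readable.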
The Devil is in the Details (The Mouse Test)
I assumed a mouse test was just listening for onClick. The Reality: To properly test for Double Click issues (a gamer's nightmare) and Scroll Wheel rollback, you need very precise counting logic. Also, the AI kept confusing "Middle Click" (pressing the wheel) with "Scrolling" (spinning the wheel). It took a lot of human intervention to separate those events cleanly.
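The eventual fix hinged on one detail: pressing the wheel fires mousedown with event.button === 1, while spinning it fires a separate wheel event. A rough sketch of the detection logic (the thresholds here are illustrative, not the site's actual values):

```javascript
const DOUBLE_CLICK_MS = 80; // genuine human double clicks are rarely this fast,
                            // so faster repeats usually mean a faulty switch
const lastDown = {};        // last mousedown timestamp per button

window.addEventListener('mousedown', (event) => {
  // event.button: 0 = left, 1 = middle (pressing the wheel), 2 = right
  const prev = lastDown[event.button];
  if (prev !== undefined && event.timeStamp - prev < DOUBLE_CLICK_MS) {
    console.warn(`Suspicious double click on button ${event.button}`);
  }
  lastDown[event.button] = event.timeStamp;
});

let lastWheelDir = 0;
let lastWheelTime = 0;
window.addEventListener('wheel', (event) => {
  // Spinning the wheel fires 'wheel'; deltaY's sign is the direction.
  const dir = Math.sign(event.deltaY);
  if (dir !== 0 && dir === -lastWheelDir &&
      event.timeStamp - lastWheelTime < 150) {
    console.warn('Possible scroll wheel rollback (direction flipped)');
  }
  if (dir !== 0) lastWheelDir = dir;
  lastWheelTime = event.timeStamp;
});
```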
The SEO Battle
Writing the code was just step one. To get this English-language site indexed by Google, I spent ages wrestling with Schema, FAQ, and JSON-LD. The Insight: AI writes syntactically correct code, but often logically nonsensical SEO tags. This led to Google Search Console errors that I had to manually debug and patch.
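For anyone fighting the same battle, this is the general shape of a FAQPage JSON-LD block that validates. The question and answer text are placeholders, and I'm injecting it client-side purely for illustration; on a static site you would inline the script tag at build time:

```javascript
// Build a schema.org FAQPage object and inject it as JSON-LD.
const faqSchema = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'How do I test my keyboard polling rate?', // placeholder question
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'Open the keyboard test and press keys rapidly; the dashboard shows the estimated Hz.',
      },
    },
  ],
};

const tag = document.createElement('script');
tag.type = 'application/ld+json';
tag.textContent = JSON.stringify(faqSchema);
document.head.appendChild(tag);
```

Most of my Search Console errors came down to the structure being almost right: valid JSON, plausible-looking fields, but not the exact property names schema.org defines.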
✨ The Result
Despite the process being far bumpier than I expected, I'm actually really proud of the final result. It is a purely static, ad-free, dark-mode online hardware diagnostic suite.
👉 Check it out here: www.hardwaretest.org
Current Features:
⌨️ Keyboard Test: Visualizer with a real-time Hz polling rate dashboard (and Ghosting/NKRO support).
🖱️ Mouse Test: Left/Right/Middle buttons + Scroll Wheel + Double Click detection.
🖥️ Dead Pixel & Fixer: Standard color cycle test, plus a "High-Frequency Noise Repair" feature built with Canvas (sketched just after this list).
🎧 Audio Test: Left/Right channel separation + Logarithmic Sweep (also sketched below).
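Since a couple of features in that list are code-level tricks, here are two hedged sketches. First, the "High-Frequency Noise Repair" idea boils down to repainting random black-and-white pixels every frame to exercise stuck sub-pixels. This assumes a canvas element with id "repair" sized to cover the affected area, and it's the general technique rather than my exact implementation:

```javascript
// Flood the canvas with fresh random noise on every animation frame.
const canvas = document.getElementById('repair'); // assumed canvas element
const ctx = canvas.getContext('2d');

function drawNoise() {
  const image = ctx.createImageData(canvas.width, canvas.height);
  const data = image.data;
  for (let i = 0; i < data.length; i += 4) {
    const v = Math.random() < 0.5 ? 0 : 255;  // random black or white
    data[i] = data[i + 1] = data[i + 2] = v;  // R, G, B
    data[i + 3] = 255;                        // fully opaque
  }
  ctx.putImageData(image, 0, 0);
  requestAnimationFrame(drawNoise);           // repaint next frame
}
requestAnimationFrame(drawNoise);
```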
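Second, the logarithmic sweep plus channel separation maps almost directly onto the standard Web Audio API. The 20 Hz to 20 kHz range and five-second duration are just illustrative defaults, and browsers require a user gesture before audio plays, so this should run from a click handler:

```javascript
const audioCtx = new AudioContext();

function sweep(pan, seconds = 5) {   // pan: -1 = left, 1 = right
  const osc = audioCtx.createOscillator();
  const panner = audioCtx.createStereoPanner();
  panner.pan.value = pan;

  // exponentialRampToValueAtTime gives the perceptually even "log sweep"
  osc.frequency.setValueAtTime(20, audioCtx.currentTime);
  osc.frequency.exponentialRampToValueAtTime(20000, audioCtx.currentTime + seconds);

  osc.connect(panner).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + seconds);
}

// e.g. sweep(-1) to test the left channel, sweep(1) for the right
```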
This "Vibe Coding" experience taught me a valuable lesson: AI is an incredibly fast junior developer. It can speed up production by 1000%, but it cannot yet replace the human eye for product details, edge cases, and user experience.
🙏 Feedback Welcome! The site just went live, so there are definitely bugs and rough edges. If you have a moment to try it out, I’d love to hear your feedback in the comments!
Top comments (12)
This is so true, because AI does not have the capability to think and give you 100% fully functioning code. If you do not have some sort of knowledge yourself, it would blindly lead you into creating something that simply doesn't work (the way you would want it to). Good read 👍
Exactly! It feels like working with a super-fast junior developer who is extremely confident but often wrong. 😂
If I didn't have a basic understanding of how the browser Event Loop or DOM listeners work, I probably would have given up when the keyboard test failed initially. You definitely need to be the "pilot" to verify where the AI is taking you. Thanks for reading!
Vibe Coding is like a slot machine; it consumes a lot of Tokens and you might win, or you might not.
The biggest drawback of generative AI is that it guesses the most suitable words from a database - it can't think, which makes its answers to very complex questions seem illogical and limited. Therefore, AI is far from perfect.
That explains the "hallucinations" I encountered perfectly.
Like you said, it was just guessing the most suitable syntax patterns for the SEO schema. It looked like valid code because the words were right, but the underlying logic was completely broken because it couldn't "think" through the structure. It’s a probabilistic engine, not a reasoning one.
When someone says "vibe coding", they usually mean the new agentic type, not the older workflow where you ask the AI in chat, copy the code, and paste it in manually. In my experience, the agentic type is actually worse: the agents make far more errors and leave superficial comments that don't match what the code really does. The manual method is much more powerful than the agentic one; it costs a bit more time, but it's still better. Did you experience this too?
100% experienced this. You hit the nail on the head.
I found that the "manual" copy-paste method acts as a necessary Human Code Review layer. When I let the "agentic" features (like auto-apply in IDEs) take over, they often introduced subtle regressions or broke things elsewhere that I didn't notice immediately.
Copy-pasting forces me to read and understand the logic before committing it, which saved me multiple times when the AI tried to use non-existent APIs for the keyboard test. The "extra time" is actually "safety time."
Agentic AI is built for tasks like web development, mainly HTML, CSS, and JavaScript. If you go beyond that, such as using a different language or working on a complex project, the AI will first tell you it's not feasible. And even if it does try to implement it, the result will have thousands of errors, to the point where you'll wish you had written the code yourself. At that stage, you realize it's easier to code it on your own than to fight the AI through hell, especially agentic AI. The manual method is more powerful because you stay in control and decide what to add or code next.
"Easier to code it on your own than to fight the AI" — I felt that in my soul. 💀
Even within JavaScript (which it's supposed to be good at), as soon as I stepped outside standard DOM manipulation and tried to implement the 1000Hz polling algorithm, the Agentic AI completely lost the plot.
It kept aggressively applying "fixes" that introduced race conditions I didn't have before. I eventually had to stop the agent, revert the files, and manually guide it logic-block by logic-block. The "manual method" is basically damage control.
For me, AI is a tool, and also an assistant, a teacher, sometimes even a mentor.
It’s not there to replace us or do the work for us.
When you learn how to use it well, it removes friction, sharpens your thinking, and helps you move much faster.
In that context, yes, you can easily go ten times quicker: not because it thinks better than we do, but because it expands what we can do.
I love the "teacher" perspective. Ironically, it taught me the most when it got things wrong (like the Polling Rate logic).
By giving me a solution that almost worked, it forced me to dig into the browser's Event Loop documentation to understand why it failed. So in a way, it did sharpen my thinking, just not by giving me the answer, but by pointing me in a direction to explore.
Great!