
Maksim Dolgih

Posted on • Originally published at Medium

Fear of AI made me create my own product

After losing my job, I ran into problems and questions that had always been there, but that I had never paid much attention to. They are what defined the new me.

Where should I begin my story…

I am a front-end developer with eight years of experience, and since September 2025 I have been looking for a job. And no, this is not another article about broken hiring. This is the story of how losing my sense of purpose helped me understand and reflect on my entire professional experience, and realize that, sooner or later, everything I have done will mean nothing.

After losing my job through no fault of my own, I was confronted with problems and questions that had always been there, but which I had never paid much attention to:

  • Apathy and the trap of being a developer: "Why learn anything new about technology if AI will be able to do it in a month anyway?" "What's the point of learning if a library or framework only lasts 2–3 years on average?"
  • The realization that, after 2020, there is no such thing as "stability." And what can I do if I'm not a developer? Where is my "anti-fragility"?
  • All my skills in processes and automation apply only once per team or employer. Once a process is automated, it runs on its own without any problems. My skills served as a means to smoothly replace people, whether they were treated as resources or not.

And the conclusion was simple: "I am a capable performer and creator, but I am still dependent on others."

I am a product of the current education system, where my skills are needed by those who are changing the world, but I myself am not.

The more I read about AI, the more useless I felt

We all live in the same information field. Yes, the availability of Claude Opus 4.5 and other models has gradually turned my experience, and everything I learned from my mistakes, into a thing of the past. What I used to do by hand on every project is now sometimes just "prompt + generation."

I built processes where "SOLID" and "DRY" were important. I built a Zero Error Policy and secure releases. I was meticulous about every possible bug, and I argued fiercely that writing strict, correct code was essential. But none of this fits the concept of "business development." In fact, all my processes slowed down feature delivery, because I made people write code for people.

Business thinks about how to make money now, while developers think about how not to spend money later.

This phrase describes any development effort, and every debate about what matters more: "refactoring" or "quick results."

AI has changed the code development paradigm

In the past, people wrote code for people, so it was important to follow development principles. It was important to understand the context of the code over time and to ease the cognitive load on the developer in the future.

With AI, there are no such problems. It can easily find bugs in the most obscure code, rewrite it, or adapt it to a new library or architecture. Tech debt has become conditionally free.

Everything I tried to prevent became unimportant. Because now you can do this:

  1. Write preliminary tests and benchmarks for the application
  2. Write down the instructions for the result
  3. Run the model for 5–6 hours
  4. ???
  5. Profit
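A minimal sketch of step 1, the "tests first" part of that loop. `slugify` and its spec are hypothetical stand-ins for whatever you would ask the model to produce: only the spec is hand-written up front, while the implementation is the part you delegate and then verify.

```typescript
// Step 1 of the loop above: pin down behaviour with a spec *before*
// any implementation exists. The spec stays fixed across model runs.
type Spec = { input: string; expected: string };

const specs: Spec[] = [
  { input: "Hello, World!", expected: "hello-world" },
  { input: "  spaces  ", expected: "spaces" },
  { input: "Crème brûlée", expected: "creme-brulee" },
];

// Placeholder implementation — in the workflow above, this is the
// part the model generates; it only has to satisfy the spec.
function slugify(s: string): string {
  return s
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "") // strip diacritics
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// The gate: any generated implementation must produce zero failures.
function runSpecs(fn: (s: string) => string, cases: Spec[]): string[] {
  return cases
    .filter((c) => fn(c.input) !== c.expected)
    .map((c) => `FAIL: ${JSON.stringify(c.input)} -> ${fn(c.input)}`);
}
```

If the failure list is non-empty, you re-run the model instead of debugging by hand; the spec is the only artifact you maintain yourself.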

The only thing that concerns you now is how to optimize tokens and avoid quotas.

Now, it doesn't matter how the code is written or by whom, because it can always be changed if you know how to verify it. Previously, a business couldn't grow without a development team, but now a business can consist of just one person with a smart idea.

We now have ready-made website builders and AI-driven site generation from plain descriptions. Where front-end development by a human being was important, it has become automated and reproducible. We have moved from "code by people for people" to "code by machines for machines."

Technology is not important, syntax is not important, patterns are meaningless; what matters is the idea.

The wrong question

For a while, I was asking myself: "How do I stay relevant?" — It's the wrong question.

It is assumed that relevance is something you develop within yourself by staying up to date with developments:

  • learning a new framework
  • mastering a new model
  • learning a new language

It's a treadmill. And the speed only increases.

I'm going to say something that might hurt developers' feelings, but you will need to keep learning throughout your life to remain "marketable" to businesses. Relevant development skills last 3–5 years; then they become obsolete. Even your 3,000 tasks on LeetCode are just a set of numbers in your profile; AI is better than you at applying algorithms for working with arrays, hash tables, and other types of data.

The more I read about AI, the more I framed the problem as a competition. Me versus the model.

My 8 years of experience versus a prompt that produces the same output in thirty seconds. Framed that way, I lose. Everyone loses. That's not an interesting problem to solve.

I stopped asking "how do I compete?"

I started asking, "What actually goes wrong when nobody with my knowledge is in the room?"

Because something does go wrong. AI generates code, but it doesn't audit the result. It produces sites that look correct, pass basic checks, and ship broken in ways nobody noticed.

That's not a gap that more AI fills. That's a gap that requires a different kind of thinking — not generation, but judgment. And judgment, unlike syntax, doesn't depreciate on a three-year cycle.

Capitalizing on your own skills

The standard playbook for a developer who feels obsolete is predictable: pivot to mentoring, launch a course, rebrand as a consultant.

And I rejected it immediately.

All of these are the same dependency, just repackaged. You're still selling time. You're still replaceable. And the next model update will commoditize that too.

The real question isn't "how do I stay marketable?" It's "what do I know that neither AI nor current automated tools will catch for you?"

The failures I have in mind aren't exotic edge cases. They're the ordinary output of agencies, freelancers, and AI-generated sites that nobody audited with actual front-end knowledge. The people who commissioned the work don't know what to check. The tools they have access to don't surface these issues. And the developers who built it have already moved on.

This is the gap. Not "I know more than AI." But: AI generates, Lighthouse measures performance, and nobody is looking at the output the way a senior front-end developer would.

Expertise as a service

Assessment based on external characteristics

As a front-end developer, I always look at the website of the company I am interviewing with.

They may tell you about excellent conditions, a professional team, and other benefits, but these are just words.

And the public website is the result of their work. If the public website is bad, what can be said about the rest of the internal product development?

But checking every company website with several different tools is difficult and tedious. On the other hand, there are many checks that Lighthouse and other static analyzers don't surface at all, and only developers know about them.

That's how the idea to create a service for "auditing" public websites came about. To combine all the best practices and immediately perform a comprehensive analysis with one click: "What are the problems and how can they be solved?"

But don't just say "the problem is bad"; explain to the user why it's a problem, what needs to be done, and how the solution will help.

First steps

I knew for sure that I didn't want to do something like a "wrapper project over AI." Not because I'm bad at understanding AI, but because it creates obvious vendor lock-in.

"Our project is not working due to an OpenAI outage. We apologize for the inconvenience."

It also won't be a startup that requires investment rounds, development hustle, and so on. I wanted to build something at my own pace, something that reflects me and my knowledge, but also protects me from being copied.

What was the first thing that came to mind? Take the Lighthouse API, add a few static checks, and generate a large report by category with error categorization.

What happened in the first iteration:

  • Lighthouse API
  • W3C Validator API
  • Basic rules for OpenGraph validation
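That first iteration can be sketched as a thin aggregation layer over those sources. The shapes below are my own simplification, not the real Lighthouse or W3C payloads; the point is only that every source is normalised into one issue list.

```typescript
// Hypothetical, simplified shapes — real Lighthouse/W3C responses are
// much richer. The idea: normalise every source into one Issue list.
type Severity = "critical" | "warning" | "info";

interface Issue {
  source: "lighthouse" | "w3c" | "opengraph";
  severity: Severity;
  message: string;
}

// Lighthouse scores audits 0..1; here anything below 0.9 is reported,
// and below 0.5 is treated as critical (illustrative thresholds).
function fromLighthouse(audits: Record<string, number>): Issue[] {
  return Object.entries(audits)
    .filter(([, score]) => score < 0.9)
    .map(([id, score]): Issue => ({
      source: "lighthouse",
      severity: score < 0.5 ? "critical" : "warning",
      message: `Audit "${id}" scored ${score}`,
    }));
}

// Merge all sources and sort so critical issues surface first.
function mergeReport(...groups: Issue[][]): Issue[] {
  const order: Severity[] = ["critical", "warning", "info"];
  return groups
    .flat()
    .sort((a, b) => order.indexOf(a.severity) - order.indexOf(b.severity));
}
```

W3C and OpenGraph adapters would follow the same pattern: each tool keeps its own format, and only the normalisation layer knows about all of them.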

First version

No designer, no development team, no business analyst, no Customer Journey Map — just pure enthusiasm and a desire to prove to myself that my knowledge is not useless.

As you can see, at that point I had already created a "Dashboard" for storing history, a "Profile" for managing your data, and, of course, a "Subscription" for managing other people's money.

Yes, I was already playing businessman.

But at that stage, I didn't even explain what the mistake was, why it was bad, or how to fix it.

Obviously, simply creating a UI over standard free checks for the purpose of making money is not a great idea.

The initial motivation was enough to build a prototype, and the plan in my head worked, but it couldn't be called a full-fledged product, because anyone could copy or vibe-code something similar.

The checks should primarily be based on my knowledge, which reinforces or supplements the usual code validators.

Development based on real needs, and almost the first customer

It so happened that my friend and ex-colleague was creating a personal website at the same time. He wasn't a developer; he was a QA engineer. He had minimal front-end knowledge, just his idea, a few design sketches on paper, and Gemini 3 and Claude Opus 4.5 to generate the site.

A distinctive feature of his website was that it was written entirely in native HTML, CSS, and JS. No trendy tools, builders, or optimizers. It's hard to imagine a more difficult way to build a website from scratch without any front-end knowledge.

Coincidence? Luck? Fate? — I don't know.

So it happened that I needed to automate my expert-assessment process, and there was a person who needed help with development: not by asking AI, but by getting an accurate answer about where the problem lies and what to do about it.

Setting aside SEO validations, which obviously had to be in the product, several audits kept turning up in separate tools, but never together.

OpenGraph

The first problem that needed to be solved was, "How will the website look if it is shared on social media?"

Obviously, a preview was needed, something similar to opengraph.xyz, but with a distinctive feature: we don't just point out mistakes, we also offer fixes right away, in one place.

The standard solution: a parser, verification, a list of failed checks, a little context, and fallback elements. Simple.
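A hedged sketch of that parse-and-verify step. The required property list follows the four basic OpenGraph tags; the `pageTitle` fallback mirrors how social networks degrade to `<title>`, and the hint wording is purely illustrative, not WebValid's actual copy.

```typescript
// Meta tags extracted from the page <head>, keyed by property name.
type MetaTags = Record<string, string>;

interface OgCheck {
  property: string;
  ok: boolean;
  value?: string;
  hint: string;
}

// The four basic properties from the OpenGraph protocol.
const REQUIRED = ["og:title", "og:type", "og:image", "og:url"];

function checkOpenGraph(meta: MetaTags, pageTitle?: string): OgCheck[] {
  return REQUIRED.map((property): OgCheck => {
    let value = meta[property];
    // Fallback element: crawlers commonly fall back to <title>.
    if (!value && property === "og:title" && pageTitle) value = pageTitle;
    return {
      property,
      ok: Boolean(value),
      value,
      hint: value
        ? "present"
        : `Missing ${property}: link previews on social media will be degraded.`,
    };
  });
}
```

Each failed check carries its own context, so the UI can render a preview next to an actionable explanation instead of a bare error code.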

The number of checks increased, and it was impossible to simply place them in one long list, especially with previews.

webvalid.dev opengraph preview

This is how the first draft for the Summary Preview came about.

Thanks to Apple for the tip

In November 2025, a critical problem was discovered in the App Store web client. All of its source maps were shipped together with the code. Anyone could see how Apple works, what methods it uses, and how it organizes its code base.

webvalid.dev security module intro

App Store web has exposed all its source code

Naturally, for an engineer like me, who is used to nitpicking every minor bug, this was a gold mine for analysis.

That's how the idea to add a "Security" module came about — but not just to search for source.map, but to search for all possible "forgotten" tokens. What GitHub parsers do to search for committed API keys, I started doing for public sites.
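In the same spirit, here is a simplified sketch of what such a scanner can look for in a shipped bundle: JWT-shaped literals, `sourceMappingURL` comments, and long high-entropy strings. The patterns and thresholds are illustrative guesses on my part, not WebValid's actual rules.

```typescript
// JWTs: three base64url segments; the header almost always starts
// with "eyJ" (base64 of '{"').
const JWT_RE = /\beyJ[\w-]+\.[\w-]+\.[\w-]+\b/g;
// A sourceMappingURL comment means the .map file may be public too.
const SOURCEMAP_RE = /\/\/[#@]\s*sourceMappingURL=\S+/g;

// Shannon entropy in bits per character — long high-entropy strings
// are candidate secrets (API keys, tokens).
function entropy(s: string): number {
  const freq = new Map<string, number>();
  for (const ch of s) freq.set(ch, (freq.get(ch) ?? 0) + 1);
  let h = 0;
  for (const n of freq.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

function scanBundle(code: string): string[] {
  const findings: string[] = [
    ...(code.match(JWT_RE) ?? []).map((m) => `jwt: ${m.slice(0, 20)}…`),
    ...(code.match(SOURCEMAP_RE) ?? []).map(() => "sourcemap reference"),
  ];
  // Flag long token-like literals with suspiciously high entropy.
  for (const m of code.match(/[A-Za-z0-9+/=_-]{32,}/g) ?? []) {
    if (entropy(m) > 4.5) findings.push(`high-entropy: ${m.slice(0, 12)}…`);
  }
  return findings;
}
```

A real scanner would add provider-specific prefixes (the way GitHub's secret scanning does) and suppress known false positives such as content hashes in file names.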

There are more details now, and new entities in the form of "files" have appeared. This is not just a review of an HTML document; it is a complete analysis of the build that users receive.

webvalid.dev - security hex-secret

Example of detecting a suspicious string

It became clear that the old format of "here's the whole long list" wasn't really suitable for a quick overview of the issues. We needed a new visual representation that showed what we were working with and where the problem came from.

Updated interface for working with security audits

WebValid.dev — Example of detecting a JWT token in a build on one of the real websites

Style validation

The main problem with using AI in front-end development is that it cannot "see" the results it generates. AI is used to developing within structural approaches such as FSD and Tailwind. Without them, AI becomes blind: it easily loses the context of elements, their styles, and their future sizes.

Just as an example, this is what the responsive layout looked like according to Claude Opus 4.5.

webvalid.dev - AI styles generation

For a developer, this code is terrifying. For the average person, it's incomprehensible — but perhaps that's how it should be? This is just a few kilobytes added to the final size of the application, with no consequences, unless we consider the development of the site over the next 3–6 months.

Did this fit the needs of a profile website? Not really. A profile website is a one-off: "made, published, and forgotten."

But !important is something that can be caught immediately, a natural starting point for style validation. Of course, auditing !important or z-index alone does not carry the weight of "fix it or lose money," but add more meaningful checks, such as unsupported syntax, coverage, or SRI, and style auditing becomes just as important as hunting for forgotten tokens.
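As a sketch of that starting point, here are two naive checks over raw CSS text: `!important` density and runaway `z-index` values. The rule names and thresholds are my own illustration; a real audit would use a proper CSS parser rather than regexes.

```typescript
interface StyleFinding {
  rule: string;
  detail: string;
}

// Deliberately naive: scans raw CSS text with regexes. Good enough to
// show the shape of a style audit, not production-grade parsing.
function auditCss(css: string): StyleFinding[] {
  const findings: StyleFinding[] = [];

  // Heuristic 1: !important density signals specificity wars.
  const importants = css.match(/!important/g) ?? [];
  if (importants.length > 0) {
    findings.push({
      rule: "no-important",
      detail: `${importants.length} use(s) of !important`,
    });
  }

  // Heuristic 2: huge z-index values mean stacking contexts are
  // being brute-forced instead of designed.
  for (const m of css.matchAll(/z-index\s*:\s*(\d+)/g)) {
    const z = Number(m[1]);
    if (z > 1000) {
      findings.push({
        rule: "sane-z-index",
        detail: `z-index: ${z} exceeds a sane layering scale`,
      });
    }
  }
  return findings;
}
```

Checks like unsupported syntax or coverage would slot into the same findings list, each with its own rule id and explanation.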

Closed demo

Once the technical part was ready, I thought, "What if I bring the development out of the shadows and finally tell everyone what I've been doing for the past month?"

What was available at that time:

Modules:

  • Performance
  • OpenGraph
  • Network
  • Security
  • CSS Styles
  • HTML + A11Y validation
  • Technical SEO

Plus a range of basic functionality: run history, localization into two languages, audit statistics per website or domain, authentication, subscription management, and RBAC.

As a developer, you try to make the code and the application as good as possible, even in your own project. But the longer you stay focused on a project, the harder it is to detach from your own "I" and look at what you are doing from the outside.

This is a real problem if you don't have an outside perspective.

webvalid.dev - product history demo perspective

Focus group

The developer's environment consists mainly of other developers. And every developer understands how things work in browsers and on the internet.

Ten 1-hour calls. No hints, just observation and answering questions. As soon as we went through the first scenarios and everything was relatively clear, I added context about "what works, how, and why."

What did I learn?

  • A list of more than 50 errors is just "noise" to a person. Problems need to be prioritized.
  • An "audit score" is just a number. A score of 0 out of 100 is not perceived as a problem.
  • Security turned out to interest many people.
  • I missed cases where a public website is an iframe widget or a Telegram mini-app.
  • Large websites cannot be checked because they sit behind reCAPTCHA.
  • The developer wants auto-checking, which means integration with CI/CD is needed.
  • There was an "Ask AI" button that helped narrow the problem and solution when general recommendations were insufficient, but it was used only once.
  • My product is disposable: a website's release cycle means only 2–3 runs. Once it's fixed, it's forgotten.
  • UX and UI matter more than the number of features or product quality. If the interface doesn't tell you where to click, be prepared to do onboarding and tooltips.
  • Dividing issues into sections is not ideal. Some people want one comprehensive list, from critical to insignificant.
  • Technical descriptions and terms: developers understood them, but QA engineers and others did not.
  • Developers looked at errors, leads looked at graphs, and managers only looked at what affects SEO.

"What difference does it make that it's a mistake if the company gets potential customers?"

"There's already a Lighthouse, what's new?"

"I don't understand what the correction will achieve?!"

And at this stage, I could have scrapped everything

Because I had created something unnecessary.

The knowledge and context I provided were obvious to developers. And most of them work in closed systems, where access to the internal product involves security policies and raw code handling.

I asked the wrong people

What problem was I initially trying to solve?

You trust designers, agencies, or AI to create your website without understanding the technology, fully confident that everything is correct because you were told so. But what if I showed you that this is not the case? What if I showed you in detail how agencies/AI/designers are "deceiving" you?

And how do developers and commercial SPA development intersect with this problem? — Not at all.

I solved some people's problems while asking others for their opinions. People who were like me, but remained part of the system of executors, rather than decision-makers.

The same vector, but meaningful

Who am I trying to help?

  • People who develop websites themselves with or without AI
  • Agencies that want to verify their own work
  • Freelance developers
  • Customers who want to independently verify the result

What should I include in the product?

  • Clear context of the problem, as well as corrective measures
  • Description of the problem for non-technical specialists, classification of problems, and the impact of each problem
  • Seamless fix option for vibe-coders
  • Tracking issues remotely to support websites after delivery
  • Prevent problems before pages are indexed by Google

What did I do?

  • Each issue contains a description of the problem, a preview of the location of the problem, and the impact of the fix
  • Division of technical and product reports into "by audit" and "by category"
  • MCP server for local development that works with the build without interfering with the source code
  • Generate prompt for AI — a set of ready-made adaptive prompts for AI. For when you don't want to install an MCP server
  • Schedule of checks and thresholds by severity of problems
  • User notifications by email
  • Sharing the audit for review by other participants
  • PDF reports for users and customers

I have taken the product out of general access and am identifying users' real needs in private conversations, rather than through metrics.

Lighthouse is a solid foundation, but I go further: I explain which errors actually matter and how to prioritize fixing them.

This is how the closed beta test of my product for my circle of friends and a publicly accessible page appeared: anyone can try a part of the product and, of course, sign up for beta testing.

Free Website Audit: HTML, SEO & JS Security

Run a free 10-second website audit. Validate HTML/CSS, scan for leaked JS secrets, and get structured AI context for Cursor or Copilot. No registration required


Access keys are released gradually after major updates, so that feedback stays focused on the main tasks and issues. But you can already try part of the product for free.

This is not a conclusion, this is an opening

I didn't write this to prove anything, not to myself, not to you.

I wrote it because somewhere in this story — in the apathy, in the wrong questions, in the focus group that told me I was solving the wrong problem — you might recognize something familiar. A moment where you looked at what you built and weren't sure it mattered. Or a moment where you looked at what someone else built for you, and quietly suspected something was wrong, but didn't know what to call it.

That recognition is exactly what I'm looking for.

Not investors. Not users who will silently churn after a free trial. People who read this far — which means you're either a developer who's asked the same uncomfortable questions, or someone who commissions or ships websites and knows the gap between "it looks fine" and "it actually works" better than you'd like to admit. Either way, I want to hear from you.

Not "great idea, keep going" — that's noise.

Tell me:

  • Where does the logic break?
  • What problem have I described that you've actually felt?
  • What am I still getting wrong?
  • Does the product sound useful — or like something you'd close after thirty seconds?

WebValid is still being shaped. The person who shapes it is me — and, if you're willing, the next person is you.

If anything in this resonated — reach out. One message. No commitment. Just a conversation between people who care about whether things are built correctly.

I'll leave it here as an example

Full Website Audit | WebValid

Comprehensive scan of SEO, Security, Markup, and Performance
examai.lovable.app

