RuBekOn

Posted on • Originally published at atomnyx.com

I built 44 free tools with no npm, no backend, no tracking

Every time I needed a basic online tool - QR generator, PDF editor, image compressor - the first page of Google gave me three options:

  1. Paywall after one use
  2. Upload your file to some sketchy server
  3. So many ads the tool barely worked

So I built AtomnyX - 44 free browser tools (plus 34 side-by-side AI tool comparisons) that run 100% client-side. No signup, no tracking, no ads.

The interesting part isn't what I built. It's the stack: no npm, no framework, no build step, no backend server.

Here's how.

The stack (or lack of it)

  • Frontend: Vanilla HTML, CSS, JavaScript. No React, no Vue, no bundler.
  • Hosting: Netlify (free tier), deploys via git push
  • Backend (for content): Firebase Firestore - only for the blog and glossary
  • Tools: 100% client-side using Canvas API, Web Crypto, File API, PDF.js
  • Routing: Netlify _redirects for slug-based URLs
  • Dynamic stuff: Netlify Edge Functions (Deno) for sitemap, RSS, article meta injection

Every tool page is just an HTML file with an inline <script> tag. That's it.

Why no build step?

Every time I've picked up a "simple" side project and reached for React + Vite + TailwindCSS, I've lost two weekends to config before writing a single useful line.

For a tools site, a build step is pure overhead:

  • Each tool is independent
  • No shared state between pages
  • The "app" is the tool itself
  • SEO wants server-rendered (or pre-rendered) HTML anyway

When adding a new tool, the workflow is:

  1. Create /tools/new-tool.html
  2. Write the UI + logic inline
  3. git push
  4. Done (live in ~45 seconds)

No npm install. No webpack config. No "why is my dev server broken again."

Clean URLs without a framework

Netlify's _redirects file handles all the routing magic. For tools, I use:

```
# Keep index accessible at /tools/
/tools/index.html  /tools/index.html  200

# 301 old .html URLs to clean URLs (SEO safety)
/tools/*.html  /tools/:splat  301

# Serve clean URLs by rewriting to the .html file
/tools/*  /tools/:splat.html  200
```

Now /tools/qr-code-generator serves /tools/qr-code-generator.html without the user ever seeing the extension. Old .html URLs 301 to clean ones, so I don't lose any existing link equity.

Service worker for offline tools

Most of my tools don't need the internet after the first load. That's a perfect fit for a service worker:

```javascript
// sw.js
const CACHE_NAME = 'atomnyx-v1';
const TOOLS_PAGES = [
  '/tools/',
  '/tools/qr-code-generator',
  '/tools/password-generator',
  '/tools/pdf-editor',
  // ...44 more
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(TOOLS_PAGES))
  );
});

self.addEventListener('fetch', event => {
  // Cache-first for tool pages (offline support);
  // anything else falls through to the network (browser default)
  if (event.request.url.includes('/tools/')) {
    event.respondWith(
      caches.match(event.request).then(res => res || fetch(event.request))
    );
  }
});
```
After one visit, my QR generator works in airplane mode. Small win, huge UX difference.
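One gap in a service worker this minimal: when you bump CACHE_NAME, the old cache sticks around forever. A sketch of the cleanup I'd add — the isStaleCache helper and the activate handler are my additions here, not part of the sw.js shown above:

```javascript
// Assumes the post's cache-name convention ('atomnyx-v1', 'atomnyx-v2', ...)
const CACHE_NAME = 'atomnyx-v2';

// Pure helper: a cache is stale if it's one of ours but not the current version
function isStaleCache(name, current) {
  return name.startsWith('atomnyx-') && name !== current;
}

// In sw.js, the activate handler would evict stale versions.
// (Guarded so the snippet also runs outside a worker context.)
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('activate', event => {
    event.waitUntil(
      caches.keys().then(keys =>
        Promise.all(
          keys.filter(k => isStaleCache(k, CACHE_NAME)).map(k => caches.delete(k))
        )
      )
    );
  });
}
```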

Dynamic sitemap via Netlify Edge Function

I have a blog backed by Firestore. New articles get published constantly. Maintaining a static sitemap.xml by hand is a game I will always lose.

Enter Netlify Edge Functions (Deno runtime, runs at the edge, replaces the static file response on the fly):

```javascript
// netlify/edge-functions/sitemap.js
export default async function handler(request, context) {
  // Fetch published articles from the Firestore REST API
  const articles = await fetchFromFirestore();

  const urls = [
    ...STATIC_PAGES,
    ...articles.map(a => ({
      url: `https://atomnyx.com/blog/${a.slug}`,
      lastmod: a.updatedAt,
      changefreq: 'weekly',
      priority: '0.9'
    }))
  ];

  const xml = `<?xml version="1.0"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      ${urls.map(u => `<url><loc>${u.url}</loc><lastmod>${u.lastmod}</lastmod><changefreq>${u.changefreq}</changefreq><priority>${u.priority}</priority></url>`).join('')}
    </urlset>`;

  return new Response(xml, {
    headers: {
      'Content-Type': 'application/xml',
      'Cache-Control': 'public, max-age=3600, stale-while-revalidate=86400'
    }
  });
}
```
Configured in netlify.toml:

```toml
[[edge_functions]]
  path = "/sitemap.xml"
  function = "sitemap"
```
Every crawler that hits /sitemap.xml gets a fresh, live sitemap. No manual update. No build-time generation. 1-hour edge cache means Firestore barely notices.
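One thing I'd bolt on before trusting arbitrary slugs and titles: XML-escaping every value that goes into the template. The escapeXml helper below is my own sketch, not part of the deployed function:

```javascript
// Hypothetical helper (not in the edge function above): escape the five
// XML special characters so a slug or title can't produce invalid XML.
function escapeXml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}

// In the sitemap template it would wrap each interpolation:
//   `<url><loc>${escapeXml(u.url)}</loc>...`
console.log(escapeXml('https://atomnyx.com/blog/tips&tricks'));
// → https://atomnyx.com/blog/tips&amp;tricks
```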

A real debugging story: the QR generator

I wrote a QR code generator in 20 minutes using the qrcode-generator library. Worked in local testing. Deployed. User reported: "it's broken - says data too long for every URL."

Opened the live page, ran this in the console:

```javascript
var ECC_MAP = { 'L': 1, 'M': 0, 'Q': 3, 'H': 2 };

// Test every input type the library might accept for the ecc argument
[['numeric 0', 0], ['numeric 1', 1], ["string 'L'", 'L'], ["string 'M'", 'M']]
  .forEach(([label, ecc]) => {
    try {
      var qr = qrcode(4, ecc);  // typeNumber 4, error correction level
      qr.addData('hello');
      qr.make();
      console.log(label, 'SUCCESS');
    } catch (e) {
      console.log(label, 'FAIL:', e);
    }
  });
```
Result:

```
numeric 0: FAIL - bad rs block @ typeNumber:4/errorCorrectionLevel:undefined
numeric 1: FAIL - bad rs block @ typeNumber:4/errorCorrectionLevel:undefined
string L:  SUCCESS
string M:  SUCCESS
```
The library's docs said "error correction level: 0-3" — but the actual implementation only accepted string values ('L', 'M', 'Q', 'H'). My ECC_MAP was converting strings to numbers that the library couldn't parse.

The fix was one line:

```diff
- var eccLevel = ECC_MAP[eccSelect.value];
+ var eccLevel = eccSelect.value;  // pass string directly
```
Lesson: always test a library with the exact input types you'll be passing, not what the docs imply. Especially with older JS libraries where "enum" often means "string or number, who knows."
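A small guard makes this class of mismatch fail loudly at the boundary instead of deep inside the library. The normalizeEcc helper below is my own sketch (the name and behavior are assumptions, not the library's API):

```javascript
// Accept only the string levels the library actually understands,
// normalizing case so 'm' and 'M' both work.
const ECC_LEVELS = ['L', 'M', 'Q', 'H'];

function normalizeEcc(input) {
  const level = String(input).toUpperCase();
  if (!ECC_LEVELS.includes(level)) {
    throw new Error(`Unsupported error correction level: ${input}`);
  }
  return level;
}

// qrcode(4, normalizeEcc(eccSelect.value)) would then either get a valid
// string or throw a readable error instead of "bad rs block".
```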

Auto-submitting to IndexNow on every deploy

I wrote a Python script that POSTs all my URLs to IndexNow (Bing + Yandex) so they crawl new content within minutes instead of weeks. Then I realized I'd always forget to run it.

Netlify Build Plugins to the rescue:

```javascript
// netlify/plugins/indexnow-submit/index.js
module.exports = {
  onSuccess: async ({ utils }) => {
    if (process.env.CONTEXT !== 'production') return;  // skip previews
    try {
      await utils.run.command('python3 indexnow-submit.py');
    } catch (err) {
      console.warn(`[IndexNow] Non-blocking failure: ${err.message}`);
    }
  }
};
```
Now every git push = automatic ping to search engines. Zero manual work.
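For reference, the same submission could live entirely in JavaScript. This is a sketch following the public IndexNow protocol, not my actual script; the host, key, and function names here are placeholders:

```javascript
// Build the JSON body the IndexNow endpoint expects.
// Host, key, and URLs below are placeholder values.
function buildIndexNowPayload(host, key, urlList) {
  return {
    host,
    key,
    keyLocation: `https://${host}/${key}.txt`,
    urlList
  };
}

async function submitToIndexNow(payload) {
  // Requires a fetch-capable runtime (browser, Deno, Node 18+)
  const res = await fetch('https://api.indexnow.org/indexnow', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json; charset=utf-8' },
    body: JSON.stringify(payload)
  });
  return res.status; // 200/202 means the batch was accepted
}

const payload = buildIndexNowPayload('atomnyx.com', 'your-indexnow-key', [
  'https://atomnyx.com/tools/qr-code-generator'
]);
```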

What I'd do differently

1. JSON-LD structured data from day one.
I added it in month 4. Should've done it from day one. Rich results in Google = 2-3x click-through rate for the same ranking position.
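For a tool page, the markup is just a few lines of JSON-LD in the <head>. An illustrative example — the field values are my guesses at what fits a free client-side tool, not the markup actually shipped:

```json
{
  "@context": "https://schema.org",
  "@type": "WebApplication",
  "name": "QR Code Generator",
  "url": "https://atomnyx.com/tools/qr-code-generator",
  "applicationCategory": "UtilityApplication",
  "operatingSystem": "Any",
  "offers": { "@type": "Offer", "price": "0", "priceCurrency": "USD" }
}
```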

2. Opinionated design system sooner.
I let CSS sprawl for too long. Eventually had to do a pass to replace all hardcoded hex colors with CSS custom properties for dark/light mode support. Should have started with CSS variables.

3. Analytics from day one.
I didn't track anything for the first month. Now I have no baseline to compare against. Set up GA4 + Plausible before you ship anything.

4. Service worker earlier.
I added it month 5. Tools work offline now, but if I'd had it from launch, all my early users would've had a much better first impression.

What's next

  • Auto-generated OG images per article (edge function rendering to SVG → PNG)
  • An n8n pipeline that generates article drafts from tech news RSS
  • More compare pages (I've written 34, targeting 100)
  • A Chrome extension wrapper for the top 3 tools
  • Maybe open-sourcing the whole thing

The site

If you want to poke at any of this:

  • Site: atomnyx.com
  • Tools: atomnyx.com/tools
  • AI tool comparisons: atomnyx.com/compare

Happy to answer questions about any of the technical choices in the comments. The "no npm, no framework" thing is either a fun constraint or heresy depending on who you ask - I'd love to hear both sides.