<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Om Dongaonkar</title>
    <description>The latest articles on DEV Community by Om Dongaonkar (@omdongaonkar03).</description>
    <link>https://dev.to/omdongaonkar03</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3797114%2F98a3844f-fdb3-47fc-aa1a-f5fd93bb5fcc.jpg</url>
      <title>DEV Community: Om Dongaonkar</title>
      <link>https://dev.to/omdongaonkar03</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/omdongaonkar03"/>
    <language>en</language>
    <item>
      <title>What Happens When You Move From Shared Hosting to a VPS</title>
      <dc:creator>Om Dongaonkar</dc:creator>
      <pubDate>Tue, 10 Mar 2026 11:06:23 +0000</pubDate>
      <link>https://dev.to/omdongaonkar03/what-happens-when-you-move-from-shared-hosting-to-a-vps-3ckb</link>
      <guid>https://dev.to/omdongaonkar03/what-happens-when-you-move-from-shared-hosting-to-a-vps-3ckb</guid>
      <description>&lt;p&gt;Micrologs v1 was deliberately constrained. PHP + MySQL, no Redis, no background workers, no VPS required. The pitch was: drop one file on your $2/month shared host and start tracking. That worked. v1.3.1 is the stable shared hosting release and it handles ~10,000 pageviews/day without complaint.&lt;/p&gt;

&lt;p&gt;But somewhere around v1.3.1, I started thinking about what would happen if you removed those constraints.&lt;/p&gt;

&lt;p&gt;Not "rewrite in Go" or "switch to Postgres." Same PHP. Same MySQL. Just a VPS — which means persistent processes, background workers, and access to a fast in-memory store like Valkey.&lt;/p&gt;

&lt;p&gt;The ceiling went from ~10,000 pageviews/day to ~500,000 on a single node. With proper load balancing and node management, that number climbs to ~2.4M. Here's what actually changed and why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The real constraint on shared hosting isn't the hardware
&lt;/h2&gt;

&lt;p&gt;On shared hosting, PHP starts fresh on every request, does its work, and dies. No state between requests. No background processes. No way to keep anything alive between calls.&lt;/p&gt;

&lt;p&gt;This means every single thing that needs to happen for a pageview — auth, geolocation, user agent parsing, database writes — has to happen synchronously, inside that one request, while the user's browser waits for a response.&lt;/p&gt;

&lt;p&gt;On a quiet site, that's fine. Under real load, it becomes the ceiling. PHP-FPM workers pile up waiting on database queries. The connection pool saturates. Everything slows down together.&lt;/p&gt;

&lt;p&gt;The problem isn't that shared hosting is slow. It's that the synchronous-everything model has nowhere to go. You can't offload work anywhere, because there's nowhere to offload it to.&lt;/p&gt;

&lt;p&gt;On a VPS, that changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  What a VPS actually unlocks
&lt;/h2&gt;

&lt;p&gt;Two things specifically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent processes.&lt;/strong&gt; PHP-FPM workers are long-lived on a VPS. A cached value in process memory stays cached. A Valkey connection stays open. A background worker keeps running. You stop paying the cold-start cost on every single request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Background work.&lt;/strong&gt; You can run Supervisor. You can keep worker processes alive. You can have things happening in the background that have nothing to do with the current HTTP request. This is the big one.&lt;/p&gt;

&lt;p&gt;Micrologs v2 uses both.&lt;/p&gt;




&lt;h2&gt;
  
  
  The shift: stop doing everything in the request cycle
&lt;/h2&gt;

&lt;p&gt;In v1, a single pageview triggered the full stack synchronously — geolocation, user agent parsing, session resolution, database writes — all of it blocking, all of it happening while the browser waited.&lt;/p&gt;

&lt;p&gt;In v2, the tracking endpoint does almost nothing. It validates auth, hashes the IP, and pushes the raw payload to a Valkey queue. That's it. Response goes back to the browser in ~2–5ms.&lt;/p&gt;

&lt;p&gt;The actual work — GeoIP lookup, UA parsing, all the database writes — happens in a background worker that pulls from the queue and processes jobs completely off the HTTP request cycle.&lt;/p&gt;
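&lt;p&gt;A minimal sketch of that split - illustrative, not Micrologs' actual code; the function names and the &lt;code&gt;micrologs:events&lt;/code&gt; queue key are made up:&lt;/p&gt;

```php
<?php
// Sketch only: buildTrackingJob() and the "micrologs:events" key are
// illustrative names, not Micrologs internals.

// Build the minimal payload the endpoint enqueues; enrichment happens later.
function buildTrackingJob(array $raw, string $ipHash): string
{
    return json_encode([
        "url"        => $raw["url"] ?? "",
        "referrer"   => $raw["referrer"] ?? "",
        "user_agent" => $raw["user_agent"] ?? "",
        "ip_hash"    => $ipHash,
        "ts"         => time(),
    ]);
}

// Endpoint side: validate, hash, LPUSH, respond. Guarded so the sketch
// still runs where the phpredis/Valkey extension is missing.
function enqueuePageview(array $raw, string $ip): bool
{
    $job = buildTrackingJob($raw, hash("sha256", $ip));
    if (class_exists("Redis")) {
        $valkey = new Redis();
        $valkey->pconnect("127.0.0.1", 6379); // persistent: reused across requests
        return $valkey->lPush("micrologs:events", $job) !== false;
    }
    return false; // no queue available in this environment
}

// Worker side (kept alive by Supervisor): block on the queue, enrich, write.
// while (true) {
//     $item = $valkey->brPop(["micrologs:events"], 5); // [key, payload] or null
//     if ($item) { processEvent(json_decode($item[1], true)); }
// }
```

&lt;p&gt;The endpoint's only Valkey call is one &lt;code&gt;LPUSH&lt;/code&gt;; everything expensive lives behind the commented worker loop.&lt;/p&gt;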

&lt;p&gt;The data that ends up in the database is identical to v1. The enrichment still happens. It just happens after the response, not before it.&lt;/p&gt;

&lt;p&gt;This one shift is responsible for most of the throughput gain. A PHP-FPM worker that responds in 2ms can handle ~30,000 requests/minute. One that responds in 150ms handles ~400. That gap is the ceiling difference between v1 and v2.&lt;/p&gt;
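&lt;p&gt;The arithmetic behind those two numbers, as a quick sanity check:&lt;/p&gt;

```php
<?php
// Back-of-envelope ceiling for one synchronous PHP-FPM worker:
// at most 60,000 ms / (response time in ms) requests per minute.
function requestsPerMinute(float $responseMs): int
{
    return (int) floor(60_000 / $responseMs);
}

echo requestsPerMinute(2.0) . "\n";   // v2-style 2ms responses -> 30000
echo requestsPerMinute(150.0) . "\n"; // v1-style 150ms responses -> 400
```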




&lt;h2&gt;
  
  
  Caching on the read side
&lt;/h2&gt;

&lt;p&gt;The queue handles writes. Valkey also handles reads.&lt;/p&gt;

&lt;p&gt;Analytics queries are expensive — they touch a lot of rows. In v1, every analytics request hit MySQL directly. If you're polling a dashboard every 30 seconds, you're running that query every 30 seconds.&lt;/p&gt;

&lt;p&gt;In v2, the first request hits MySQL and the result is cached in Valkey with a TTL. Every request within that window returns the cached result at ~2ms. Workers invalidate the cache when new data is written. Polling dashboards become nearly free.&lt;/p&gt;
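&lt;p&gt;The read path is plain cache-aside. A sketch with an in-memory store standing in for Valkey - the class, key names, and the 30-second TTL here are illustrative, not Micrologs' actual values:&lt;/p&gt;

```php
<?php
// Array-backed stand-in for Valkey so the pattern runs anywhere.
class TtlCache
{
    private array $store = [];

    public function get(string $key): ?string
    {
        $e = $this->store[$key] ?? null;
        if ($e === null || $e["expires"] < microtime(true)) {
            return null; // miss or expired
        }
        return $e["value"];
    }

    public function set(string $key, string $value, float $ttl): void
    {
        $this->store[$key] = ["value" => $value, "expires" => microtime(true) + $ttl];
    }

    public function delete(string $key): void
    {
        unset($this->store[$key]); // workers call this when new data lands
    }
}

// First call runs the expensive query; calls within the TTL hit the cache.
function cachedQuery(TtlCache $cache, string $key, callable $runQuery, float $ttl = 30.0): string
{
    $hit = $cache->get($key);
    if ($hit !== null) {
        return $hit;
    }
    $result = $runQuery(); // the expensive MySQL aggregation
    $cache->set($key, $result, $ttl);
    return $result;
}
```

&lt;p&gt;Swap the array store for Valkey &lt;code&gt;GET&lt;/code&gt;/&lt;code&gt;SETEX&lt;/code&gt;/&lt;code&gt;DEL&lt;/code&gt; calls and the shape is identical: dashboards poll the cache, workers invalidate when they write new rows.&lt;/p&gt;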




&lt;h2&gt;
  
  
  Schema cleanup
&lt;/h2&gt;

&lt;p&gt;This one doesn't make headlines but it compounds over time.&lt;/p&gt;

&lt;p&gt;Redundant indexes, dead indexes from old query paths, composite indexes that could be consolidated — v2.1.0 was a cleanup pass on all of it. The DB sees less write amplification per INSERT and the indexes stay leaner as the dataset grows. On a small dataset these gains are invisible. On a table with millions of rows under sustained write load, they matter.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the numbers look like
&lt;/h2&gt;

&lt;p&gt;Stress test: 100 virtual users, 60 seconds, Docker on WSL2.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before (v2.1.0)&lt;/th&gt;
&lt;th&gt;After (v2.2.0)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests/minute&lt;/td&gt;
&lt;td&gt;~1,300&lt;/td&gt;
&lt;td&gt;~16,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p95 latency&lt;/td&gt;
&lt;td&gt;~6,000ms&lt;/td&gt;
&lt;td&gt;~400ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error rate&lt;/td&gt;
&lt;td&gt;high&lt;/td&gt;
&lt;td&gt;~2% (WSL2 TCP noise)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The ~2% error rate is from WSL2's virtualisation layer, not the application. On a real Linux VPS, p99 &amp;lt; 10ms is expected.&lt;/p&gt;




&lt;h2&gt;
  
  
  The ceiling, explained
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;v1.3.1 (shared hosting):&lt;/strong&gt; ~10,000 pageviews/day. Synchronous everything. Runs on $2/month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v2.2.0 (single VPS node):&lt;/strong&gt; ~500,000 pageviews/day. Async queue, background workers, analytics cache.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v2.2.0 (with load balancing + multiple nodes):&lt;/strong&gt; Theoretically unbounded. The async architecture is horizontally scalable by design — stateless tracking endpoint, workers pulling from a shared queue. How far it scales depends entirely on how much infrastructure you throw at it. Adding nodes doesn't require redesigning anything.&lt;/p&gt;




&lt;h2&gt;
  
  
  There's still a lot left to do
&lt;/h2&gt;

&lt;p&gt;Honest caveat: v2 is not a fully optimized system. It's a meaningfully better architecture than v1, and the numbers show that. But there are optimizations I haven't touched yet — better worker concurrency tuning, smarter cache invalidation, connection pooling, more granular queue prioritization.&lt;/p&gt;

&lt;p&gt;I'll get to them as I learn more. That's the honest state of it. Every version of Micrologs has been the same loop: build it, ship it, review it under real conditions, find what breaks, fix it. v2 is no different. The 500k ceiling is real today. What it looks like in v3 depends on what I learn between now and then.&lt;/p&gt;




&lt;h2&gt;
  
  
  The project
&lt;/h2&gt;

&lt;p&gt;Micrologs is MIT licensed. v1.3.1 is the stable shared hosting release. v2.2.0 is the current stable VPS release.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/OmDongaonkar03/Micrologs" rel="noopener noreferrer"&gt;github.com/OmDongaonkar03/Micrologs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>performance</category>
      <category>architecture</category>
      <category>webdev</category>
      <category>php</category>
    </item>
    <item>
      <title>Your portfolio site is probably broken in ways you haven't checked</title>
      <dc:creator>Om Dongaonkar</dc:creator>
      <pubDate>Wed, 04 Mar 2026 21:20:51 +0000</pubDate>
      <link>https://dev.to/omdongaonkar03/your-portfolio-site-is-probably-broken-in-ways-you-havent-checked-1281</link>
      <guid>https://dev.to/omdongaonkar03/your-portfolio-site-is-probably-broken-in-ways-you-havent-checked-1281</guid>
      <description>&lt;p&gt;Most developers spend weeks building their portfolio. The design, the animations, the perfect copy. Then they deploy it and never look at it again.&lt;/p&gt;

&lt;p&gt;I did the same thing. Until this week, when I actually checked mine properly for the first time.&lt;/p&gt;

&lt;p&gt;Here's what I found - and why your site probably has the same issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Audit
&lt;/h2&gt;

&lt;p&gt;I ran two checks. One for performance, one for security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; Chrome DevTools → Network tab, throttled to Fast 3G.&lt;br&gt;
&lt;strong&gt;Security:&lt;/strong&gt; &lt;a href="https://securityheaders.com" rel="noopener noreferrer"&gt;securityheaders.com&lt;/a&gt; - paste your URL, get a grade.&lt;/p&gt;

&lt;p&gt;My results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load time: &lt;strong&gt;4.27s&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Data transferred: &lt;strong&gt;227 kB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Security grade: &lt;strong&gt;D&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not catastrophic. But not good for a site whose entire purpose is to make a first impression on a hiring manager or potential client.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Performance Problem
&lt;/h2&gt;

&lt;p&gt;The culprit was immediately obvious in the Network tab: &lt;strong&gt;16 separate PNG requests&lt;/strong&gt; just for favicons and icons.&lt;/p&gt;

&lt;p&gt;Every request has overhead - DNS lookup, TCP handshake, HTTP round trip. Sixteen small image requests are worse than one slightly larger one.&lt;/p&gt;

&lt;p&gt;The fix: replace them with inline SVGs directly in the component. No network requests at all.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Before - 16 separate image requests&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/icons/github.png"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt; &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/icons/linkedin.png"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="c1"&gt;// ... 14 more&lt;/span&gt;

&lt;span class="c1"&gt;// After - zero network requests&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;svg&lt;/span&gt; &lt;span class="na"&gt;viewBox&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"0 0 24 24"&lt;/span&gt; &lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;path&lt;/span&gt; &lt;span class="na"&gt;d&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"..."&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;svg&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second fix: &lt;strong&gt;lazy load everything below the fold.&lt;/strong&gt; React makes this trivial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Suspense&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Projects&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./sections/Projects&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Contact&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./sections/Contact&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;// In your JSX&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Suspense&lt;/span&gt; &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Projects&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Contact&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Suspense&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The browser now only loads what the user can actually see. Everything else loads as they scroll.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result: 4.27s → 2.00s. 227 kB → 17.7 kB transferred.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Same site. Same content. Half the load time, 92% less data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Security Problem
&lt;/h2&gt;

&lt;p&gt;A D grade on securityheaders.com means your site is missing HTTP security headers. These are response headers your server sends that tell the browser how to behave - what it can load, where it can connect, whether it can be embedded in an iframe.&lt;/p&gt;

&lt;p&gt;Missing them doesn't break your site. But it means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your site can be embedded in an iframe on another domain (clickjacking)&lt;/li&gt;
&lt;li&gt;Browsers won't enforce HTTPS strictly&lt;/li&gt;
&lt;li&gt;No protection against content injection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix for a Cloudflare Pages site is a single &lt;code&gt;_headers&lt;/code&gt; file in your &lt;code&gt;/public&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/*
  Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
  X-Frame-Options: SAMEORIGIN
  X-Content-Type-Options: nosniff
  Referrer-Policy: strict-origin-when-cross-origin
  Permissions-Policy: camera=(), microphone=(), geolocation=(), payment=()
  Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data: https:; frame-ancestors 'none';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One file. No backend changes. No server config. Cloudflare picks it up on the next deploy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result: D → A on securityheaders.com.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built On Top
&lt;/h2&gt;

&lt;p&gt;After fixing the portfolio, I built a blog - this one, at &lt;a href="https://blogs.omdongaonkar.in" rel="noopener noreferrer"&gt;blogs.omdongaonkar.in&lt;/a&gt; - as a subdomain of the same domain. Vite + React, posts written in Markdown, auto-deployed to Cloudflare Pages on every push.&lt;/p&gt;

&lt;p&gt;Total additional cost: zero. The domain was already there. Cloudflare Pages is free. A subdomain is just a DNS record.&lt;/p&gt;

&lt;p&gt;The same &lt;code&gt;_headers&lt;/code&gt; file approach applies here too. Same security setup, same caching rules, consistent across both.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Actual Takeaway
&lt;/h2&gt;

&lt;p&gt;None of this was hard. The performance fixes took 10 minutes, the security headers another 20. The blog took a day.&lt;/p&gt;

&lt;p&gt;The reason most portfolio sites have these issues isn't lack of skill - it's that nobody checks. You build the thing, it looks good in the browser, you ship it and move on.&lt;/p&gt;

&lt;p&gt;Run securityheaders.com on your site right now. Open DevTools on a throttled connection. See what actually loads.&lt;/p&gt;

&lt;p&gt;You'll probably find the same things I did.&lt;/p&gt;

</description>
      <category>performance</category>
      <category>security</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Reviewed My Own Code Like I Was Trying to Break It</title>
      <dc:creator>Om Dongaonkar</dc:creator>
      <pubDate>Sun, 01 Mar 2026 18:08:40 +0000</pubDate>
      <link>https://dev.to/omdongaonkar03/i-reviewed-my-own-code-like-i-was-trying-to-break-it-2a2p</link>
      <guid>https://dev.to/omdongaonkar03/i-reviewed-my-own-code-like-i-was-trying-to-break-it-2a2p</guid>
      <description>&lt;p&gt;Last week I built and shipped Micrologs - a self-hostable analytics and error tracking engine that runs on shared hosting. PHP + MySQL. No Redis, no VPS, no Docker. &lt;a href="https://dev.to/omdongaonkar03/built-a-self-hostable-plausible-sentry-alternative-in-one-day-2o9m"&gt;That post covers how it's built and why.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This post is about what happened the day after I shipped it.&lt;/p&gt;

&lt;p&gt;I did a full security and performance review on my own code. Treated it like I was a senior engineer seeing it for the first time, trying to find every way it could fail under real traffic.&lt;/p&gt;

&lt;p&gt;I found 5 real issues. None exotic. All fixable. All the kind of thing that doesn't hurt at 100 visits/day and quietly breaks at 10,000.&lt;/p&gt;

&lt;p&gt;Here's exactly what I found and how I fixed each one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Issue 1 - Blind trust of X-Forwarded-For
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;getClientIp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;empty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$_SERVER&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"HTTP_X_FORWARDED_FOR"&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;$ips&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;explode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;","&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$_SERVER&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"HTTP_X_FORWARDED_FOR"&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ips&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt; &lt;span class="c1"&gt;// ← anyone can spoof this&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$_SERVER&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"REMOTE_ADDR"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;X-Forwarded-For&lt;/code&gt; is a header that proxies add to tell your app the real client IP. The problem: it's just a header. Any client can send their own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X-Forwarded-For: 127.0.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your rate limiter thinks they're localhost. They can make unlimited requests. Your GeoIP lookup gets a fake IP. Your IP hash is poisoned. And the attack costs an attacker zero effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;REMOTE_ADDR&lt;/code&gt; is set by the TCP connection itself - it cannot be spoofed. &lt;code&gt;X-Forwarded-For&lt;/code&gt; can. So only trust it when &lt;code&gt;REMOTE_ADDR&lt;/code&gt; is a proxy you control:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;getClientIp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$remoteAddr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$_SERVER&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"REMOTE_ADDR"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nv"&gt;$trustedProxies&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;defined&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"TRUSTED_PROXIES"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="no"&gt;TRUSTED_PROXIES&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;$trustedProxies&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;array_map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"trim"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;explode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;","&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;TRUSTED_PROXIES&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;empty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$trustedProxies&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;in_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$remoteAddr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$trustedProxies&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;empty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$_SERVER&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"HTTP_X_FORWARDED_FOR"&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nv"&gt;$ips&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;array_map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"trim"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;explode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;","&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$_SERVER&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"HTTP_X_FORWARDED_FOR"&lt;/span&gt;&lt;span class="p"&gt;]));&lt;/span&gt;
            &lt;span class="nv"&gt;$clientIp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$ips&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;filter_var&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$clientIp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;FILTER_VALIDATE_IP&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$clientIp&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$remoteAddr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On shared hosting with no proxy in front, leave &lt;code&gt;TRUSTED_PROXIES&lt;/code&gt; empty and &lt;code&gt;XFF&lt;/code&gt; is ignored entirely. On a VPS with Nginx in front of PHP-FPM, set it to &lt;code&gt;127.0.0.1&lt;/code&gt; and the forwarded client IP is used.&lt;/p&gt;
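&lt;p&gt;To make the behaviour concrete, here is the same logic condensed into a pure function - a hypothetical helper, not the shipped code - exercised against both scenarios:&lt;/p&gt;

```php
<?php
// Condensed, parameterized version of the fix: trust X-Forwarded-For only
// when the direct TCP peer is a proxy you listed yourself.
function resolveClientIp(string $remoteAddr, ?string $xff, array $trustedProxies): string
{
    if ($xff !== null && in_array($remoteAddr, $trustedProxies, true)) {
        $first = trim(explode(",", $xff)[0]); // left-most hop = original client
        if (filter_var($first, FILTER_VALIDATE_IP)) {
            return $first;
        }
    }
    return $remoteAddr; // no trusted proxy in front: the header is ignored
}

// Spoofed header sent straight from a client: ignored.
echo resolveClientIp("203.0.113.9", "127.0.0.1", []) . "\n";            // 203.0.113.9
// Same header arriving via your own Nginx on localhost: honoured.
echo resolveClientIp("127.0.0.1", "203.0.113.9", ["127.0.0.1"]) . "\n"; // 203.0.113.9
```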




&lt;h2&gt;
  
  
  Issue 2 - No payload size cap
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nv"&gt;$input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;json_decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;file_get_contents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"php://input"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was in every single endpoint. &lt;code&gt;file_get_contents("php://input")&lt;/code&gt; reads the entire request body with no limit whatsoever. Send a 500MB POST - the server loads 500MB into memory. Do it 10 times concurrently - the server is out of memory and dead. This is a trivial DoS attack that requires zero skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;readJsonBody()&lt;/code&gt; helper that hard-caps at 64KB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;readJsonBody&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nv"&gt;$maxBytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kt"&gt;?array&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;file_get_contents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"php://input"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$maxBytes&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nv"&gt;$raw&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;strlen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$maxBytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;sendResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"Payload too large"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;413&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nv"&gt;$decoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;json_decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;is_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$decoded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="nv"&gt;$decoded&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;64KB is enormous for any legitimate tracking payload - a pageview beacon is maybe 500 bytes. This cap only ever affects attackers. Every endpoint now calls &lt;code&gt;readJsonBody()&lt;/code&gt; instead of the raw &lt;code&gt;file_get_contents&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Issue 3 - Unbounded context field
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Error and audit endpoints accepted a &lt;code&gt;context&lt;/code&gt; field - arbitrary JSON the caller can attach to any event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nv"&gt;$context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;isset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;is_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="nb"&gt;json_encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A deeply nested 10MB JSON object sent as context would be encoded and pushed straight to the DB. Even with the payload cap from issue 2 in place, 64KB of deeply nested JSON can consume far more memory than its raw byte size suggests once it's decoded into PHP arrays and re-encoded with &lt;code&gt;json_encode&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Encode first, then check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;encodeContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nv"&gt;$maxBytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8192&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kt"&gt;?string&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;isset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nb"&gt;is_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nv"&gt;$encoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;json_encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$raw&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;JSON_UNESCAPED_UNICODE&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="no"&gt;JSON_UNESCAPED_SLASHES&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$encoded&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;strlen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$encoded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$maxBytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// drop silently - the event still records, context doesn't&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$encoded&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;8KB is plenty for any real error context. Oversized context is dropped silently - the error or audit event still gets recorded. You lose the context blob, not the event.&lt;/p&gt;




&lt;h2&gt;
  
  
  Issue 4 - 15 database queries per pageview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the performance bottleneck. Every single pageview fired up to 15 sequential round-trips to MySQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT visitor → maybe INSERT visitor → UPDATE visitor last_seen
SELECT session → maybe INSERT session → UPDATE session last_activity
SELECT dedup check
SELECT location → maybe INSERT location
SELECT device   → maybe INSERT device
SELECT bounce count → maybe UPDATE bounce flag
INSERT pageview
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On shared hosting, where you typically get 15–25 max DB connections, just 50 concurrent users are enough for these round-trips to pile up and saturate the connection pool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix - &lt;code&gt;INSERT ... ON DUPLICATE KEY UPDATE&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This single MySQL statement does: "insert this row, but if a row with this unique key already exists, update it instead." One query instead of two, atomically.&lt;/p&gt;

&lt;p&gt;Visitor upsert:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;visitors&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;visitor_hash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fingerprint_hash&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;DUPLICATE&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt;
    &lt;span class="n"&gt;fingerprint_hash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;fingerprint_hash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fingerprint_hash&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="k"&gt;VALUES&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fingerprint_hash&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;fingerprint_hash&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;last_seen&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Session upsert:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;visitor_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;session_token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;DUPLICATE&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt;
    &lt;span class="n"&gt;last_activity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bounce flag - replaced a separate &lt;code&gt;COUNT(*)&lt;/code&gt; query + conditional UPDATE with a single conditional UPDATE using an inline subquery:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;is_bounced&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;is_bounced&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pageviews&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: 15 queries → 6–8 in the typical path for a returning visitor. The DB connection pool stays healthy under load.&lt;/p&gt;




&lt;h2&gt;
  
  
  Issue 5 - GeoIP file opened on every request
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nv"&gt;$reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;\MaxMind\Db\Reader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$dbPath&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nv"&gt;$record&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$reader&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nv"&gt;$reader&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every tracking request opened the &lt;code&gt;.mmdb&lt;/code&gt; file, read it, and closed it. Opening a file isn't free - the filesystem has to locate it on disk, load it into memory, and initialise the reader object. On a busy endpoint that added 20–80ms of overhead to every single request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix - PHP static variables:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;static&lt;/code&gt; variable inside a function is initialised once and persists for the lifetime of that PHP-FPM worker process. Every subsequent call reuses it - no re-initialisation, no file open.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="nv"&gt;$reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$reader&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;\MaxMind\Db\Reader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$dbPath&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;$record&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$reader&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// no close() - PHP cleans it up at process shutdown&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First request: opens the file. Every request after that within the same worker process: reuses the open reader. Free performance, zero risk.&lt;/p&gt;




&lt;h2&gt;
  
  
  Two operational fixes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Log rotation&lt;/strong&gt; - &lt;code&gt;file_put_contents($logPath, $line, FILE_APPEND)&lt;/code&gt; appended forever. One busy month and you have a 2GB log file. Shared hosting has disk quotas. A silent disk full means your site stops working with no explanation.&lt;/p&gt;

&lt;p&gt;Fixed with shift rotation - before writing, check file size. If over 10MB, shift all existing files down (&lt;code&gt;.1&lt;/code&gt;→&lt;code&gt;.2&lt;/code&gt; ... up to &lt;code&gt;.5&lt;/code&gt;, oldest deleted) and start fresh. Max 60MB on disk, last 50MB of history always preserved. Pure PHP, no cron.&lt;/p&gt;
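&lt;p&gt;A minimal sketch of that shift rotation - &lt;code&gt;rotateLog()&lt;/code&gt; is an illustrative name, not the actual Micrologs function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;function rotateLog(string $logPath, int $maxBytes = 10485760, int $keep = 5): void
{
    // Nothing to do if the file is missing or still under the cap
    if (!file_exists($logPath) || filesize($logPath) &amp;lt;= $maxBytes) {
        return;
    }

    // Shift old files down: .4 -&amp;gt; .5, .3 -&amp;gt; .4, ... (the old .5 is overwritten)
    for ($i = $keep - 1; $i &amp;gt;= 1; $i--) {
        $from = $logPath . "." . $i;
        if (file_exists($from)) {
            @rename($from, $logPath . "." . ($i + 1));
        }
    }

    // Current log becomes .1; the next write starts a fresh file
    @rename($logPath, $logPath . ".1");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Called before each append, the worst case is five renames - cheap compared to a disk-quota outage.&lt;/p&gt;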

&lt;p&gt;&lt;strong&gt;Request ID in logs&lt;/strong&gt; - before this fix, logs under load looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[14:22:01] [ERROR] [pageview.php] DB insert failed
[14:22:01] [ERROR] [pageview.php] DB insert failed
[14:22:01] [ERROR] [pageview.php] DB insert failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3 errors at the same second. Is this 1 request that failed 3 times, or 3 different requests that each failed once? You can't tell.&lt;/p&gt;

&lt;p&gt;Fixed by generating a unique 8-character ID per HTTP request at the top of &lt;code&gt;functions.php&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nv"&gt;$GLOBALS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;substr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;bin2hex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;random_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;writeLog()&lt;/code&gt; includes it automatically. Now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[14:22:01] [ERROR] [a3f9c12b] [pageview.php] DB insert failed
[14:22:01] [ERROR] [a3f9c12b] [pageview.php] DB insert failed
[14:22:01] [INFO]  [f91c3d77] [pageview.php] pageview recorded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same ID = same request. &lt;code&gt;grep "a3f9c12b" micrologs.log&lt;/code&gt; shows the complete story of that one request from start to finish.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this changed
&lt;/h2&gt;

&lt;p&gt;After all fixes, Micrologs handles &lt;strong&gt;~10,000 pageviews/day on a standard shared host&lt;/strong&gt;. No Redis, no queue, no VPS required.&lt;/p&gt;

&lt;p&gt;The five issues above aren't exotic bugs. They're the standard gap between "works in development" and "survives real traffic." None of them showed up in testing. All of them would have shown up under load.&lt;/p&gt;

&lt;p&gt;The lesson I keep re-learning: shipping v1 is step one. Reviewing it like you're trying to break it is step two. Most developers skip step two.&lt;/p&gt;




&lt;h2&gt;
  
  
  What shipped after
&lt;/h2&gt;

&lt;p&gt;This post covers v1.1.0. Since then:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.2.0&lt;/strong&gt; - Three new analytics endpoints using existing data, no schema changes: session duration and pages per session, new vs returning visitors, and error trends over time with daily breakdowns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v1.3.0&lt;/strong&gt; - Complete API coverage. Full project management (list, toggle, enable/disable, rotate keys, delete). Link edit and detail endpoints. Error group status updates — individually or in bulk, with a new &lt;code&gt;investigating&lt;/code&gt; status for the full &lt;code&gt;open → investigating → resolved / ignored&lt;/code&gt; workflow. One schema migration required for the new ENUM value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;@micrologs/node&lt;/code&gt; v1.0.0&lt;/strong&gt; - Official Node.js SDK on npm. Zero dependencies, Node 18+, silent on failure. Wraps every engine endpoint. &lt;code&gt;npm install @micrologs/node&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The project
&lt;/h2&gt;

&lt;p&gt;Micrologs is MIT licensed and open source.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/OmDongaonkar03/Micrologs" rel="noopener noreferrer"&gt;github.com/OmDongaonkar03/Micrologs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Node SDK: &lt;a href="https://www.npmjs.com/package/@micrologs/node" rel="noopener noreferrer"&gt;npmjs.com/package/@micrologs/node&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Python and Laravel SDKs are open for contribution if that's your stack.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>security</category>
      <category>webdev</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Built a Self-Hostable Plausible + Sentry Alternative in One Day</title>
      <dc:creator>Om Dongaonkar</dc:creator>
      <pubDate>Fri, 27 Feb 2026 21:29:46 +0000</pubDate>
      <link>https://dev.to/omdongaonkar03/built-a-self-hostable-plausible-sentry-alternative-in-one-day-2o9m</link>
      <guid>https://dev.to/omdongaonkar03/built-a-self-hostable-plausible-sentry-alternative-in-one-day-2o9m</guid>
      <description>&lt;h1&gt;
  
  
  I Built a Self-Hostable Plausible + Sentry Alternative in One Day - That Runs on Shared Hosting
&lt;/h1&gt;

&lt;p&gt;I work on projects that run on shared hosting: PHP + MySQL, no root access. It's a real environment that a lot of developers actually ship to - and almost no tooling is built for it.&lt;/p&gt;

&lt;p&gt;But every analytics or error tracking tool I looked at assumed you had a VPS at minimum, or were happy paying a SaaS bill every month. Plausible is great - but it's paid per site, and self-hosting it means Docker, which means a VPS. Sentry's free tier is generous until it isn't. And there was a hard requirement: no third-party services touching user data.&lt;/p&gt;

&lt;p&gt;So I built one. In a day. It's called &lt;strong&gt;Micrologs&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Drop one script tag on your site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script
  &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://yourdomain.com/snippet/micrologs.js"&lt;/span&gt;
  &lt;span class="na"&gt;data-public-key=&lt;/span&gt;&lt;span class="s"&gt;"your_public_key"&lt;/span&gt;
  &lt;span class="na"&gt;data-environment=&lt;/span&gt;&lt;span class="s"&gt;"production"&lt;/span&gt;
  &lt;span class="na"&gt;async&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From that point, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pageviews, sessions, unique visitors, bounce rate&lt;/li&gt;
&lt;li&gt;Country / region / city breakdown (via local MaxMind GeoLite2 — no runtime API calls)&lt;/li&gt;
&lt;li&gt;Device, OS, browser breakdown&lt;/li&gt;
&lt;li&gt;Referrer categorization — organic, social, email, referral, direct&lt;/li&gt;
&lt;li&gt;UTM campaign tracking&lt;/li&gt;
&lt;li&gt;JS errors auto-caught — &lt;code&gt;window.onerror&lt;/code&gt; and &lt;code&gt;unhandledrejection&lt;/code&gt;, grouped by fingerprint&lt;/li&gt;
&lt;li&gt;Manual error tracking from any backend over a single HTTP call&lt;/li&gt;
&lt;li&gt;Audit logging&lt;/li&gt;
&lt;li&gt;Tracked link shortener with click analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of it hits your own database. Nothing leaves your server.&lt;/p&gt;




&lt;h2&gt;
  
  
  The constraint that shaped everything
&lt;/h2&gt;

&lt;p&gt;Shared hosting means no Redis, no background workers, no daemons, no WebSockets. You get PHP and MySQL and that's it.&lt;/p&gt;

&lt;p&gt;This forced some interesting decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate limiting without Redis.&lt;/strong&gt; Most rate limiters use Redis or a DB table with a timestamp. I went file-based — append-only &lt;code&gt;.req&lt;/code&gt; files, one per request, inside a per-IP directory. Counting recent requests = counting files newer than the window. No locking, no DB writes on every request, works on any host.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Count recent attempts — each file is one request&lt;/span&gt;
&lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$userDir&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"/*.req"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nb"&gt;filemtime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nv"&gt;$windowSeconds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;$attempts&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="nb"&gt;unlink&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// clean up expired ones while we're here&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cleanup runs probabilistically — 1% chance on each request. No cron job needed.&lt;/p&gt;
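&lt;p&gt;The probabilistic trigger is a one-liner - a sketch, with &lt;code&gt;cleanupStaleDirs()&lt;/code&gt; standing in for whatever the real cleanup routine is called:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;// Roughly 1 in 100 requests pays the cleanup cost; the rest skip it entirely
if (random_int(1, 100) === 1) {
    cleanupStaleDirs($rateLimitBaseDir);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At any realistic traffic level that fires often enough to keep the directory tree small, and it amortises the cost across requests instead of depending on a cron schedule.&lt;/p&gt;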

&lt;p&gt;&lt;strong&gt;Geolocation without API calls.&lt;/strong&gt; MaxMind's GeoLite2 database ships as a &lt;code&gt;.mmdb&lt;/code&gt; file you drop on your server. Every lookup is local. Zero latency overhead, zero external dependency at runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visitor identification without being creepy.&lt;/strong&gt; No raw IPs stored, ever. IPs are SHA-256 hashed with a salt immediately on ingestion. For visitor tracking, a UUID is stored in a cookie for 365 days. If the cookie gets cleared, a canvas fingerprint kicks in as a fallback to re-identify the same visitor — then re-associates the new cookie hash so the next visit is seamless again.&lt;/p&gt;
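&lt;p&gt;The ingestion side of that can be sketched in a few lines - the &lt;code&gt;ml_vid&lt;/code&gt; cookie and the &lt;code&gt;MICROLOGS_IP_SALT&lt;/code&gt; constant are illustrative names, not the actual identifiers in the codebase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;// The raw IP never touches the database - only this salted hash does
$ipHash = hash("sha256", MICROLOGS_IP_SALT . $_SERVER["REMOTE_ADDR"]);

// First visit: mint a random ID and set the year-long cookie (illustrative names)
if (empty($_COOKIE["ml_vid"])) {
    $visitorId = bin2hex(random_bytes(16));
    setcookie("ml_vid", $visitorId, [
        "expires"  =&amp;gt; time() + 365 * 86400,
        "path"     =&amp;gt; "/",
        "samesite" =&amp;gt; "Lax",
    ]);
} else {
    $visitorId = $_COOKIE["ml_vid"];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;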




&lt;h2&gt;
  
  
  The architecture is designed in stages
&lt;/h2&gt;

&lt;p&gt;v1 is a clean REST API — works on shared hosting, zero extra infrastructure.&lt;/p&gt;

&lt;p&gt;v2 will add Redis caching, async queuing, and webhook alerts — opt-in for people who have a VPS.&lt;/p&gt;

&lt;p&gt;v3 will add WebSockets and a live dashboard feed — opt-in for people who want realtime.&lt;/p&gt;

&lt;p&gt;The key decision: each stage is strictly opt-in. Shared hosting users will never be broken by what VPS users unlock. v1 will always work on v1 infrastructure.&lt;/p&gt;




&lt;h2&gt;
  
  
  Some implementation details worth talking about
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Error grouping by fingerprint.&lt;/strong&gt; This is how Sentry works and it's the right approach. When an error comes in, I hash &lt;code&gt;project_id + error_type + message + file + line&lt;/code&gt; into a SHA-256 fingerprint. Same error fires 1000 times — 1 group, 1000 occurrences. If you mark an error as resolved and it fires again, it automatically reopens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nv"&gt;$fingerprint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s2"&gt;"sha256"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nv"&gt;$projectId&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="nv"&gt;$errorType&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="nv"&gt;$message&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="nv"&gt;$file&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$line&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Public key vs secret key separation.&lt;/strong&gt; The JS snippet uses a public key — safe to expose in the browser, locked to a whitelist of allowed domains. Analytics queries and link management use a secret key — server-side only, never in frontend code. One install supports multiple projects, each with their own key pair.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SPA support in the snippet.&lt;/strong&gt; History-based routing doesn't trigger page loads, so &lt;code&gt;pushState&lt;/code&gt; and &lt;code&gt;replaceState&lt;/code&gt; get patched to fire pageviews on navigation. &lt;code&gt;popstate&lt;/code&gt; and &lt;code&gt;hashchange&lt;/code&gt; are also handled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pushState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;_push&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;arguments&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;onUrlChange&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bot filtering.&lt;/strong&gt; UA string matching is obvious, but real browsers always send &lt;code&gt;Accept-Language&lt;/code&gt; and &lt;code&gt;Accept&lt;/code&gt; headers. If those are missing, it's a bot or a script regardless of what the UA says.&lt;/p&gt;
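&lt;p&gt;The header check is trivial to implement - a sketch of the idea, not the exact Micrologs code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;function looksLikeBot(): bool
{
    $ua = $_SERVER["HTTP_USER_AGENT"] ?? "";

    // Obvious self-identifying crawlers
    if ($ua === "" || preg_match("/bot|crawl|spider|curl|wget/i", $ua)) {
        return true;
    }

    // Real browsers send these on every navigation; scripts usually don't bother
    if (empty($_SERVER["HTTP_ACCEPT"]) || empty($_SERVER["HTTP_ACCEPT_LANGUAGE"])) {
        return true;
    }

    return false;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;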




&lt;h2&gt;
  
  
  What the API looks like
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://yourdomain.com/api/analytics/visitors.php?range&lt;span class="o"&gt;=&lt;/span&gt;30d &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-Key: your_secret_key"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"success"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"unique_visitors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1842&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"total_pageviews"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5631&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"total_sessions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2109&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bounce_rate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;43.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"over_time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-01-28"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"pageviews"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;178&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"unique_visitors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;91&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-01-29"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"pageviews"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;204&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"unique_visitors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;113&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's an engine, not a dashboard. The data comes back as JSON — what you do with it is up to you. Build a dashboard, pipe it into Grafana, query it from your admin panel, whatever fits your stack.&lt;/p&gt;
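&lt;p&gt;As a sketch of what "query it from your admin panel" might look like, here's a minimal consumer for the JSON shape shown above. The &lt;code&gt;summarize&lt;/code&gt; helper and the pages-per-session metric are my own illustration, not part of the Micrologs API; only the response fields come from the example response.&lt;/p&gt;

```javascript
// Sketch: turn a Micrologs overview response into a one-line summary
// plus a per-day trend. In practice you'd fetch() this JSON from your
// stats endpoint first; the field names match the example response.
function summarize(stats) {
  const d = stats.data;
  // Derived metric for illustration: average pageviews per session.
  const pagesPerSession = (d.total_pageviews / d.total_sessions).toFixed(2);
  const trend = d.over_time
    .map((day) => `${day.date}: ${day.pageviews} views`)
    .join("\n");
  return `${d.unique_visitors} visitors, ${pagesPerSession} pages/session\n${trend}`;
}

// Sample payload mirroring the JSON response above.
const sample = {
  data: {
    unique_visitors: 1842,
    total_pageviews: 5631,
    total_sessions: 2109,
    bounce_rate: 43.2,
    over_time: [
      { date: "2026-01-28", pageviews: 178, unique_visitors: 91 },
      { date: "2026-01-29", pageviews: 204, unique_visitors: 113 },
    ],
  },
};

console.log(summarize(sample));
```

&lt;p&gt;The same response feeds a Grafana JSON datasource or a chart library just as easily; the engine doesn't care what renders it.&lt;/p&gt;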




&lt;h2&gt;
  
  
  Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PHP 8.1+&lt;/li&gt;
&lt;li&gt;MySQL 8.0+ / MariaDB 10.4+&lt;/li&gt;
&lt;li&gt;MaxMind GeoLite2 (local &lt;code&gt;.mmdb&lt;/code&gt; file)&lt;/li&gt;
&lt;li&gt;Vanilla JS snippet, zero dependencies, ~3KB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No Node, no Docker, no Redis, no build step. Clone it, import the schema, fill in the env file, drop the snippet. That's the entire setup.&lt;/p&gt;




&lt;h2&gt;
  
  
  It's already in production
&lt;/h2&gt;

&lt;p&gt;We shipped it internally at the company the same day I built it, and it's tracking real traffic right now. That's the other reason for the shared hosting constraint: it wasn't hypothetical, it was the actual target environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Open source, MIT
&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/OmDongaonkar03/Micrologs" rel="noopener noreferrer"&gt;github.com/OmDongaonkar03/Micrologs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're on shared hosting, running a privacy-first product, or just tired of paying for tools you could own — give it a try. Issues and PRs are open.&lt;/p&gt;

&lt;p&gt;v2 is next: webhooks for error alerts, opt-in Redis caching, and async queuing. If you have thoughts on what matters most, open an issue.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>monitoring</category>
      <category>php</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
