<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: T.O.</title>
    <description>The latest articles on DEV Community by T.O. (@tim_o_5617baa5171354e).</description>
    <link>https://dev.to/tim_o_5617baa5171354e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3872111%2F73072cf5-24b7-4a75-8a43-5e8ba773d723.png</url>
      <title>DEV Community: T.O.</title>
      <link>https://dev.to/tim_o_5617baa5171354e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tim_o_5617baa5171354e"/>
    <language>en</language>
    <item>
      <title>How I Built a Crowdsourced Luxury Watch Waitlist Tracker with Next.js and Supabase</title>
      <dc:creator>T.O.</dc:creator>
      <pubDate>Tue, 05 May 2026 02:47:55 +0000</pubDate>
      <link>https://dev.to/tim_o_5617baa5171354e/how-i-built-a-crowdsourced-luxury-watch-waitlist-tracker-with-nextjs-and-supabase-1dee</link>
      <guid>https://dev.to/tim_o_5617baa5171354e/how-i-built-a-crowdsourced-luxury-watch-waitlist-tracker-with-nextjs-and-supabase-1dee</guid>
      <description>&lt;p&gt;I got tired of sitting on a Rolex waitlist with zero information. No position, no ETA, no way to know if my wait was normal or if I was getting strung along. When I went looking for data, all I found was Reddit threads with hundreds of anecdotes buried in noise.&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://www.unghosted.io" rel="noopener noreferrer"&gt;unghosted.io&lt;/a&gt;, a structured tracker where collectors submit their wait times anonymously. 550+ reports from 62 countries in the first week.&lt;/p&gt;

&lt;p&gt;Here's how I built it, what I learned, and what surprised me about the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Millions of people sit on authorized dealer (AD) waitlists for luxury watches from brands like Rolex, Patek Philippe, Audemars Piguet, and others. The system is deliberately opaque. There's no queue number, no ETA, no transparency. Dealers decide who gets what based on relationships, purchase history, location, and their own internal criteria.&lt;/p&gt;

&lt;p&gt;The experience varies wildly depending on where you are in the world. A collector in Dubai might walk into an AD and get a Submariner the same day, while someone in New York waits 6 months for the same watch. But there was no way to know that because the only "data" available was scattered across forum posts and Reddit threads. Someone would post "I waited 8 months for my Submariner" and that was it. No way to aggregate, filter, or compare across regions or purchase history levels.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;I went with a stack optimized for speed to ship and low ongoing cost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js 16&lt;/strong&gt; (App Router) - ISR for page caching so brand pages revalidate hourly instead of on every request&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt; (Postgres + Row Level Security) - handles the entries table, subscribers, and all the data layer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plotly.js&lt;/strong&gt; (plotly.js-basic-dist-min) - interactive scatter plots showing wait time vs purchase history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vercel&lt;/strong&gt; - hosting with automatic deployments from GitHub&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New Relic&lt;/strong&gt; - browser monitoring for Core Web Vitals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total monthly cost: under $30 (Vercel Pro + Supabase free tier + New Relic free tier).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Data Model
&lt;/h2&gt;

&lt;p&gt;The core &lt;code&gt;entries&lt;/code&gt; table is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- brand (text)
- family (text) - e.g., "Submariner", "Nautilus"
- model (text) - specific reference like "126610LN"
- wait_time (text) - bucketed: "Walk-in / same day", "1-3 months", "1-2 years", etc.
- region (text) - geographic location of the AD
- purchase_date (text)
- purchase_history (text) - "No prior purchases" through "6+ purchases / VIP"
- status (text) - pending/published/flagged
- followup_frequency (text) - how often they follow up with their AD (for long waits)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Region is a critical field. Wait times differ dramatically by location. The same watch can be a walk-in purchase in one city and a 2-year wait in another. Every report captures where the AD is located so collectors can compare their local market against others worldwide.&lt;/p&gt;

&lt;p&gt;New submissions default to "pending" for moderation. I built outlier detection that flags entries deviating 3+ tiers from the model average, plus duplicate detection that rejects identical brand/model/wait/region combos within 24 hours.&lt;/p&gt;
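&lt;p&gt;Neither check needs anything fancy. A minimal sketch of both, assuming bucket labels and field names that mirror the entries table (the exact tier list here is illustrative, not the production one):&lt;/p&gt;

```javascript
// Ordered wait-time buckets; tier distance is just index distance.
// Labels beyond the ones mentioned in the post are assumptions.
const WAIT_TIERS = [
  'Walk-in / same day', 'Under 1 month', '1-3 months',
  '3-6 months', '6-12 months', '1-2 years', '2+ years'
];

// Outlier detection: flag entries 3+ tiers from the model's average tier.
function isOutlier(entry, modelAverageTier) {
  const tier = WAIT_TIERS.indexOf(entry.wait_time);
  return Math.abs(tier - modelAverageTier) >= 3;
}

// Duplicate detection: reject identical brand/model/wait/region combos
// submitted within the last 24 hours.
const DAY_MS = 24 * 60 * 60 * 1000;
function isDuplicate(entry, recentEntries, now = Date.now()) {
  return recentEntries.some((e) => {
    const sameCombo = ['brand', 'model', 'wait_time', 'region']
      .every((k) => e[k] === entry[k]);
    const withinDay = DAY_MS > now - new Date(e.created_at).getTime();
    return sameCombo ? withinDay : false;
  });
}
```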

&lt;h2&gt;
  
  
  The Architecture Decisions That Mattered
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ISR over force-dynamic.&lt;/strong&gt; Early on I had &lt;code&gt;force-dynamic&lt;/code&gt; on every page, which meant every page load hit Supabase. Switching to ISR with 1-hour revalidation on brand pages and 5-minute revalidation on the homepage cut server load dramatically while keeping data reasonably fresh.&lt;/p&gt;
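&lt;p&gt;In App Router terms the switch is a one-line route segment config per page, roughly like this (the values come from the intervals described above, not from the repo):&lt;/p&gt;

```javascript
// app/[brand]/page.js - brand pages regenerate at most once per hour
export const revalidate = 3600;

// app/page.js - homepage regenerates at most every 5 minutes
// export const revalidate = 300;
```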

&lt;p&gt;&lt;strong&gt;Bucketed wait times instead of freeform.&lt;/strong&gt; I initially considered letting users enter exact months. But people remember "about 6 months" not "5.7 months." Bucketed options (1-3 months, 3-6 months, etc.) give cleaner data with less friction in the submit flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plotly over Chart.js or Recharts.&lt;/strong&gt; I needed interactive scatter plots where users could hover over individual data points and see the details. Plotly handles this natively. The trade-off is bundle size, which is why I use &lt;code&gt;plotly.js-basic-dist-min&lt;/code&gt; instead of the full package.&lt;/p&gt;
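&lt;p&gt;Because submissions are bucketed text, plotting them means mapping buckets to numbers first. A sketch of how a trace for plotly.js-basic-dist-min could be assembled (the bucket labels and numeric midpoints are assumptions):&lt;/p&gt;

```javascript
// Map bucketed answers to rough numeric midpoints for the axes.
const WAIT_MONTHS = {
  'Walk-in / same day': 0,
  '1-3 months': 2,
  '3-6 months': 4.5,
  '6-12 months': 9,
  '1-2 years': 18
};
const HISTORY_LEVEL = {
  'No prior purchases': 0,
  '1-2 purchases': 1,
  '3-5 purchases': 2,
  '6+ purchases / VIP': 3
};

// Build a scatter trace: purchase history on x, wait time on y,
// with per-point hover text carrying the details.
function toScatterTrace(entries) {
  return {
    type: 'scatter',
    mode: 'markers',
    x: entries.map((e) => HISTORY_LEVEL[e.purchase_history]),
    y: entries.map((e) => WAIT_MONTHS[e.wait_time]),
    text: entries.map((e) => `${e.brand} ${e.model} (${e.region})`)
  };
}
// Rendered with: Plotly.newPlot(el, [toScatterTrace(entries)], layout);
```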

&lt;p&gt;&lt;strong&gt;RLS for security, service role for API writes.&lt;/strong&gt; Anonymous users can read published entries through the anon key. All writes go through an API route that uses the service role key after server-side validation. This prevents direct database manipulation while keeping the read path fast.&lt;/p&gt;
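&lt;p&gt;The write path can be sketched roughly like this; the field list and error shape are illustrative, not the actual route code:&lt;/p&gt;

```javascript
// Server-side validation that runs before the service-role insert.
const REQUIRED = ['brand', 'family', 'model', 'wait_time', 'region'];

function validateSubmission(body) {
  const missing = REQUIRED.filter((k) => !body[k]);
  if (missing.length) {
    return { ok: false, error: `Missing fields: ${missing.join(', ')}` };
  }
  // Force moderation status regardless of what the client sent.
  return { ok: true, row: { ...body, status: 'pending' } };
}

// In the API route, the validated row is then written with the
// service role key (sketch; createClient comes from @supabase/supabase-js):
// const supabase = createClient(url, process.env.SUPABASE_SERVICE_ROLE_KEY);
// if (result.ok) await supabase.from('entries').insert(result.row);
```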

&lt;h2&gt;
  
  
  What the Data Shows
&lt;/h2&gt;

&lt;p&gt;Some things the data revealed that I didn't expect:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Submariners are way easier to get than people think.&lt;/strong&gt; The median wait is under 3 months, and 25% of reports are walk-ins. The internet narrative of "impossible to get a Sub" doesn't match the actual data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purchase history matters more than time.&lt;/strong&gt; The scatter plots clearly show that collectors with 2-3+ prior purchases get watches faster than first-time buyers waiting years. The system rewards relationship building over patience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Datejusts are practically available on demand.&lt;/strong&gt; Most reports show walk-in or under 1 month waits. ADs use Datejusts as relationship starters. They offer you one to get you in the door, then you work toward the sport models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FP Journe is in a league of its own.&lt;/strong&gt; With under 900 watches produced per year, the waitlist is essentially an application process. Most boutiques have stopped accepting new names for the Chronometre Bleu entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Location matters more than most people realize.&lt;/strong&gt; Europe and Asia report different patterns than the US, especially for Tudor and Vacheron Constantin. Some markets have significantly shorter waits for models that are considered "impossible" in other regions. A collector in Singapore might get a Royal Oak in 3 months while someone in London waits over a year. The data makes these regional differences visible for the first time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Distribution Strategy
&lt;/h2&gt;

&lt;p&gt;Building the product was the easy part. Getting people to submit data was the hard part.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reddit was the primary channel.&lt;/strong&gt; I posted to r/rolex first with the angle "I built a structured version of the AD Wait Time Megathread." That post hit 18K views, 44 upvotes, and 30 comments. The key was framing it as a community tool, not a product launch. No self-promotion, just "here's a thing that solves a problem we all have."&lt;/p&gt;

&lt;p&gt;The second post to r/Watches (3.3M members) used a multi-brand data angle and generated 10+ organic submissions overnight from collectors in the US, Canada, Europe, and Asia.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SEO was the long game.&lt;/strong&gt; I built 65 pages targeting specific search queries: brand pillar pages (rolex-waitlist-times, patek-philippe-waitlist), model pages (rolex-submariner-wait-time), and reference pages (/rolex/126610LN). Each page has JSON-LD schema (Article + FAQPage), canonical tags, and FAQ questions matching Google's "People Also Ask" phrasing.&lt;/p&gt;
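&lt;p&gt;The FAQPage half of that schema is mechanical to generate. A hedged sketch (the question text is an example; the schema.org types are real):&lt;/p&gt;

```javascript
// Build FAQPage JSON-LD from question/answer pairs, ready to be
// serialized into the page's ld+json script tag.
function faqJsonLd(pairs) {
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: pairs.map(({ q, a }) => ({
      '@type': 'Question',
      name: q,
      acceptedAnswer: { '@type': 'Answer', text: a }
    }))
  };
}
```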

&lt;p&gt;Within 5 days, I was ranking page 1 for several brand-specific waitlist queries (AP at position 4, Tudor at position 5, Patek at position 6).&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistakes I Made
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Claiming "500+ reports" in the title before I had 500 Rolex-specific reports.&lt;/strong&gt; A reader called me out. Trust is everything in this niche. I changed it to match reality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not having a Content Security Policy that included Google Analytics.&lt;/strong&gt; When I added GA4, my own CSP blocked the script, so analytics silently failed and I missed traffic data for a day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forgetting to update RLS policies after adding a new column.&lt;/strong&gt; I added a &lt;code&gt;followup_frequency&lt;/code&gt; column to the database and updated the API route, but the Supabase insert failed because the API was using the anon key. Switching to the service role key for writes fixed it. A real user reported the bug in my Reddit thread, which was both embarrassing and fortunate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1,000 reports by mid-May.&lt;/strong&gt; The dataset is the moat. Everything else is secondary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geographic filtering on scatter plots.&lt;/strong&gt; Users are asking for it in comments. Being able to filter by region will let collectors compare their local market against the global average.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monthly "State of the Waitlist" report.&lt;/strong&gt; Publishable data that journalists and bloggers can cite and link to.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Takeaway for Builders
&lt;/h2&gt;

&lt;p&gt;If you're building a data product:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pick a niche where information asymmetry exists.&lt;/strong&gt; Luxury watch waitlists are deliberately opaque. That opacity is the opportunity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Let the community own the data.&lt;/strong&gt; I don't scrape. People voluntarily submit because the tool helps them. That makes the data defensible and the growth organic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ship the simplest version that proves the concept.&lt;/strong&gt; My MVP was a submit form, a Supabase table, and a scatter plot. Everything else came after I had 100+ reports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution is the hard part.&lt;/strong&gt; Reddit worked because I was a genuine member of the community solving a real problem. If I'd posted a "check out my app" link, it would have been removed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The site is free, open, and live at &lt;a href="https://www.unghosted.io" rel="noopener noreferrer"&gt;unghosted.io&lt;/a&gt;. If you're a watch collector sitting on a waitlist, submit your data. If you're a builder, I'd love to hear your feedback on the approach.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Stack: Next.js 16, Supabase, Plotly.js, Vercel, New Relic. Solo build. &lt;a href="https://github.com/SafePassGenerator/unghosted" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>showdev</category>
      <category>sideprojects</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Stop Treating Credential Generation as an Auditor Scramble</title>
      <dc:creator>T.O.</dc:creator>
      <pubDate>Tue, 14 Apr 2026 13:52:20 +0000</pubDate>
      <link>https://dev.to/tim_o_5617baa5171354e/stop-treating-credential-generation-as-an-auditor-scramble-3742</link>
      <guid>https://dev.to/tim_o_5617baa5171354e/stop-treating-credential-generation-as-an-auditor-scramble-3742</guid>
      <description>&lt;p&gt;&lt;em&gt;How to bake compliance evidence into the process before your next SOC2 or HIPAA audit.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The pattern shows up the same way every time. Audit prep starts and someone asks for proof that credentials were generated securely. Engineers scramble to pull Git history, check deployment logs, and screenshot password manager settings. They end up writing a three-paragraph explanation of what they think their generation process does.&lt;/p&gt;

&lt;p&gt;The auditor reviews it and says the evidence is insufficient.&lt;/p&gt;

&lt;p&gt;Manual reconstruction always turns into pain. Not sometimes. Every time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why manual reconstruction fails auditors
&lt;/h2&gt;

&lt;p&gt;Auditors are not asking whether you have a good process today. They are asking whether you had a documented, consistent, repeatable process at the time each credential was generated. Those are two completely different questions.&lt;/p&gt;

&lt;p&gt;You can have an excellent process right now and still fail an audit because you cannot prove what was true six months ago. The entropy level, the compliance policy, the timestamp, the system that generated it. If that information was not captured at creation time it effectively does not exist. Retroactive reconstruction is not evidence. It is a story.&lt;/p&gt;

&lt;h2&gt;
  
  
  What baking evidence into the process actually means
&lt;/h2&gt;

&lt;p&gt;The scalable answer is not better documentation or more screenshots. It is making evidence generation automatic and inseparable from the credential generation itself.&lt;/p&gt;

&lt;p&gt;Every time a credential is created these five things should be captured automatically in the same operation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The generation method.&lt;/strong&gt; Not just the library name. The specific cryptographic function, the entropy source, and the parameters applied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The compliance policy.&lt;/strong&gt; Which standard was active at the moment of creation. NIST 800-63B, SOC2, HIPAA, or your internal standard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The entropy bits.&lt;/strong&gt; The mathematical proof of randomness. This is the specific number auditors increasingly demand and most teams cannot produce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The timestamp.&lt;/strong&gt; Exact ISO format with timezone. Not an approximation reconstructed from logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The environment and caller.&lt;/strong&gt; Which system requested the credential and in which environment.&lt;/p&gt;

&lt;p&gt;When these are captured automatically in every generation event you move from telling stories to providing compliance artifacts. The auditor asks for proof. You pull the log. Finding closed.&lt;/p&gt;

&lt;p&gt;Here is what that structured log event looks like in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"event"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"credential.generated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"generated_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-04-13T14:32:01.847Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"compliance_profile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SOC2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"entropy_bits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;116&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"length"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"crypto.randomInt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"environment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"production"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"caller"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user-provisioning-service"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"calls_remaining"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49847&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every field exists at creation time. Nothing reconstructed. Nothing approximated.&lt;/p&gt;

&lt;h2&gt;
  
  
  The written standard most teams skip
&lt;/h2&gt;

&lt;p&gt;Before you can automate evidence you need a written standard. Most teams skip this because it feels like paperwork. It is not paperwork. It is the specification your automation implements.&lt;/p&gt;

&lt;p&gt;Your standard should answer these questions in writing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum credential length per environment&lt;/li&gt;
&lt;li&gt;Entropy threshold that constitutes compliance with your applicable standard&lt;/li&gt;
&lt;li&gt;Which compliance profile applies to which system type&lt;/li&gt;
&lt;li&gt;Log retention period and access controls&lt;/li&gt;
&lt;li&gt;Who can query the audit log and under what conditions&lt;/li&gt;
&lt;/ul&gt;
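&lt;p&gt;One way to keep the standard and the automation from drifting apart is to express the standard as data that the generation code reads. Purely illustrative, with example thresholds rather than recommendations:&lt;/p&gt;

```javascript
// The written standard, expressed as data the generator can enforce.
// Every value here is an example, not advice.
const CREDENTIAL_STANDARD = {
  minLength: { production: 20, staging: 16, development: 12 },
  minEntropyBits: 112,  // threshold from your applicable standard
  profiles: { 'user-facing': 'NIST', 'service-to-service': 'SOC2' },
  logRetentionDays: 365,
  auditLogReaders: ['security-team', 'compliance-officer']
};

function meetsStandard({ length, entropyBits, environment }) {
  const longEnough = length >= CREDENTIAL_STANDARD.minLength[environment];
  const strongEnough = entropyBits >= CREDENTIAL_STANDARD.minEntropyBits;
  return longEnough ? strongEnough : false;
}
```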

&lt;p&gt;Without written standards your automation is generating evidence for a policy that does not exist on paper. Auditors will note that gap separately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shift that changes everything
&lt;/h2&gt;

&lt;p&gt;The teams that handle audits well are not necessarily the ones with the best security. They are the ones whose security controls speak auditor language automatically.&lt;/p&gt;

&lt;p&gt;A credential generated with &lt;code&gt;crypto.randomInt&lt;/code&gt; is objectively more secure than one generated with &lt;code&gt;Math.random&lt;/code&gt;. But if neither produces a compliance artifact at creation time they look identical to an auditor. Zero evidence is zero evidence regardless of which function generated the credential.&lt;/p&gt;

&lt;p&gt;The goal is not just generating secure credentials. The goal is generating secure credentials that prove their own integrity the moment they are created. That proof has to exist at creation time. Not when your auditor asks for it.&lt;/p&gt;




</description>
      <category>security</category>
      <category>devops</category>
      <category>privacy</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Why Math.random() Will Fail Your Next Security Audit</title>
      <dc:creator>T.O.</dc:creator>
      <pubDate>Fri, 10 Apr 2026 16:38:41 +0000</pubDate>
      <link>https://dev.to/tim_o_5617baa5171354e/why-mathrandom-will-fail-your-next-security-audit-4h2c</link>
      <guid>https://dev.to/tim_o_5617baa5171354e/why-mathrandom-will-fail-your-next-security-audit-4h2c</guid>
      <description>&lt;p&gt;If you have ever written something like this in a production codebase:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const secret = Math.random().toString(36).slice(2);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You have shipped a credential that will fail a security audit. Not might fail. Will fail.&lt;/p&gt;

&lt;p&gt;Here is why that matters and what to do about it before your auditors find it first.&lt;/p&gt;

&lt;p&gt;With the renewed focus on Executive Order 14028 and software supply chain security, auditors are no longer just looking at what libraries you use. They are looking at how you use them to generate internal secrets. Credential generation is now explicitly in scope for supply chain security reviews in ways it was not three years ago.&lt;/p&gt;

&lt;h2&gt;The problem with Math.random()&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Math.random()&lt;/code&gt; is not a cryptographic function. It was never designed to be. It generates pseudorandom numbers that are fast and statistically well distributed, which is fine for things like shuffling a playlist or generating a random color. It is completely wrong for generating secrets, API keys, passwords, or any credential that needs to be unpredictable to an adversary.&lt;/p&gt;

&lt;p&gt;The specific problem is predictability. &lt;code&gt;Math.random()&lt;/code&gt; uses a deterministic algorithm seeded by the JavaScript engine. In some environments that seed is observable or reconstructible. In all environments the output fails the entropy requirements defined in NIST 800-63B, the federal standard for digital identity and credential strength.&lt;/p&gt;

&lt;p&gt;When an auditor runs entropy analysis on your generated credentials and finds &lt;code&gt;Math.random()&lt;/code&gt; in the generation path, the finding looks like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AUDIT FINDING — CRITICAL&lt;br&gt;
Control: SC-28 / NIST 800-53&lt;/p&gt;
&lt;p&gt;Finding: Credential generation function Math.random() identified across microservices.&lt;/p&gt;
&lt;p&gt;Impact: Generated secrets fail entropy requirements for NIST 800-63B compliance. Cryptographic randomness cannot be verified.&lt;/p&gt;
&lt;p&gt;Status: OPEN - 90 day remediation required&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That finding stops deals, delays launches, and costs engineering time to remediate retroactively. The fix at that point is expensive because you have to find every place Math.random() was used, replace it, rotate the affected credentials, and produce documentation proving the new generation method is compliant.&lt;/p&gt;

&lt;h2&gt;The right function is already in Node.js&lt;/h2&gt;

&lt;p&gt;You do not need a library. You do not need a third party service. Node.js ships with a cryptographically secure random number generator in the built-in &lt;code&gt;crypto&lt;/code&gt; module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const { randomInt, randomBytes } = require('crypto');

// Generate a random integer between 0 and charset.length
const index = randomInt(0, charset.length);

// Generate random bytes for a hex token
const token = randomBytes(32).toString('hex');
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;code&gt;crypto.randomInt()&lt;/code&gt; uses the operating system's cryptographically secure pseudorandom number generator. On Linux that is /dev/urandom. The output passes NIST entropy requirements because the source of randomness is designed for exactly this purpose.&lt;/p&gt;

&lt;p&gt;The fix is a one line change in most cases. The problem is that most teams never make it because nobody flags it until an auditor does.&lt;/p&gt;

&lt;h2&gt;A copy-paste utility for your codebase&lt;/h2&gt;

&lt;p&gt;If you want a drop-in replacement you can put in your utils/ folder today, here is a minimal version using only Node.js built-ins:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const { randomInt } = require('crypto');

const CHARSETS = {
  uppercase: 'ABCDEFGHJKLMNPQRSTUVWXYZ',
  lowercase: 'abcdefghjkmnpqrstuvwxyz',
  numbers: '23456789',
  symbols: '!@#$%^&amp;amp;*'
};

function generateSecureCredential(length = 20, options = {}) {
  const {
    uppercase = true,
    lowercase = true,
    numbers = true,
    symbols = false
  } = options;

  let charset = '';
  if (uppercase) charset += CHARSETS.uppercase;
  if (lowercase) charset += CHARSETS.lowercase;
  if (numbers) charset += CHARSETS.numbers;
  if (symbols) charset += CHARSETS.symbols;

  if (!charset) throw new Error('At least one character set required');

  return Array.from(
    { length },
    () =&amp;gt; charset[randomInt(0, charset.length)]
  ).join('');
}

// Usage
const secret = generateSecureCredential(20, {
  uppercase: true,
  lowercase: true,
  numbers: true,
  symbols: true
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
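&lt;p&gt;The entropy number auditors ask for is straightforward to compute for a generator like the utility above: each character drawn uniformly from a charset of size N contributes log2(N) bits, so a credential carries length × log2(N) bits. A small helper, assuming uniform selection:&lt;/p&gt;

```javascript
// Entropy of a uniformly generated credential: length * log2(charsetSize).
function entropyBits(length, charsetSize) {
  return length * Math.log2(charsetSize);
}

// With all four sets enabled, the utility's charset has
// 24 + 23 + 8 + 8 = 63 characters after ambiguous ones are removed.
const bits = entropyBits(20, 63);
console.log(bits.toFixed(1)); // "119.5"
```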

&lt;p&gt;This gives you cryptographic security but not compliance documentation. NIST 800-63B audit trails, entropy calculation, and machine-readable compliance metadata still need to be built on top of this foundation.&lt;/p&gt;

&lt;h2&gt;The documentation gap nobody talks about&lt;/h2&gt;

&lt;p&gt;Switching from &lt;code&gt;Math.random()&lt;/code&gt; to &lt;code&gt;crypto.randomInt()&lt;/code&gt; is the technical fix. But it does not solve the audit problem completely.&lt;/p&gt;

&lt;p&gt;Auditors do not just want secure credentials. They want documented proof that credentials are secure. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Entropy bits calculated per credential&lt;/li&gt;
&lt;li&gt;Generation method documented at the time of creation&lt;/li&gt;
&lt;li&gt;Compliance profile recorded alongside the credential&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most internal credential generation systems do not produce this documentation automatically. Engineers have to build it separately, maintain it, and make sure it stays accurate as the codebase evolves. Most teams never complete that work.&lt;/p&gt;

&lt;p&gt;This is the gap that causes audit findings even when the underlying generation is technically correct. You fixed Math.random(), but you have no proof of the fix that satisfies an auditor asking for evidence.&lt;/p&gt;

&lt;h2&gt;What shift-left credential security looks like&lt;/h2&gt;

&lt;p&gt;Shifting security left means solving security problems at the earliest possible point in the development workflow, not after auditors find them.&lt;/p&gt;

&lt;p&gt;For credential generation, that means every credential that ships should come with documented proof of how it was generated, what entropy it has, and which compliance standard it meets. That documentation should be automatic, not something an engineer produces manually six months later when someone asks for it.&lt;/p&gt;

&lt;p&gt;If you want to automate that documentation piece entirely, this is how we structured the Six Sense API to handle it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const response = await fetch('https://api.sixsensesolutions.net/v1/generate', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your_api_key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    length: 20,
    quantity: 1,
    compliance: 'NIST',
    options: {
      uppercase: true,
      lowercase: true,
      numbers: true,
      symbols: true,
      exclude_ambiguous: true
    }
  })
});

const { passwords, meta } = await response.json();

console.log(meta);
// {
//   length: 20,
//   entropy_bits: 120.4,  // &amp;lt;-- This is your audit evidence. 120+ bits exceeds NIST minimum.
//   generated_at: "2026-04-10T14:57:35Z",
//   compliance_profile: "NIST",
//   calls_remaining: 49999
// }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Every response includes &lt;code&gt;entropy_bits&lt;/code&gt; and &lt;code&gt;compliance_profile&lt;/code&gt; in the metadata. That metadata is the audit documentation. Your auditor asks for proof that credentials meet NIST 800-63B. You show them the response metadata. The finding closes.&lt;/p&gt;

&lt;h2&gt;The free tier&lt;/h2&gt;

&lt;p&gt;If you want to test this in your own codebase, there is a free tier at sixsensesolutions.net with 500 calls per month and no credit card required. Signup generates a real API key instantly.&lt;/p&gt;

&lt;p&gt;The NIST and SOC2 compliance profiles enforce minimum lengths, character requirements, and ambiguous character exclusion automatically. The strong profile respects whatever options you pass directly.&lt;/p&gt;

&lt;h2&gt;The takeaway&lt;/h2&gt;

&lt;p&gt;If your codebase contains Math.random() in any credential generation path, you have a finding waiting to happen. The technical fix is one line, and the copy-paste utility above gives you that today for free. The documentation fix is what most teams skip and what auditors actually ask for.&lt;/p&gt;

&lt;p&gt;Both need to be in place before your next audit, not after.&lt;/p&gt;

</description>
      <category>security</category>
      <category>javascript</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
