<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ujjawal Tyagi</title>
    <description>The latest articles on DEV Community by Ujjawal Tyagi (@ujjawal_tyagi_c5a84255da4).</description>
    <link>https://dev.to/ujjawal_tyagi_c5a84255da4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895717%2F736edd7f-31cd-4b8b-9c6d-05f4f0042c58.png</url>
      <title>DEV Community: Ujjawal Tyagi</title>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ujjawal_tyagi_c5a84255da4"/>
    <language>en</language>
    <item>
      <title>Postgres at Scale: Lessons from Running 30+ D2C Platforms on RDS</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Wed, 06 May 2026 14:50:29 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/postgres-at-scale-lessons-from-running-30-d2c-platforms-on-rds-1c7b</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/postgres-at-scale-lessons-from-running-30-d2c-platforms-on-rds-1c7b</guid>
      <description>&lt;p&gt;PostgreSQL is the database we reach for first at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt;. We've shipped 30+ platforms on it across D2C dairy (Veda Milk), service marketplaces (Cremaster, Housecare), insurance survey workflows (ClaimsMitra), legal-tech (Legal Owl), and more. None of those projects ran into a wall where Postgres couldn't keep up. But several of them ran into walls where we couldn't keep up with Postgres — not knowing how to use the indexes, the connection pool, the query planner, or the vacuum settings. Here are the lessons that landed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson 1: connection pooling is non-optional
&lt;/h2&gt;

&lt;p&gt;On RDS, Postgres &lt;code&gt;max_connections&lt;/code&gt; is derived from instance memory; on the small instances most products launch on, it works out to roughly 100. A Node.js app server with 4 worker processes easily opens 4 connections per process, or 16 per server. Deploy 8 servers and you're at 128 connections, past the entire default pool.&lt;/p&gt;

&lt;p&gt;Fix: PgBouncer in transaction-pooling mode in front of every cluster. Each app server now holds a small pool of cheap PgBouncer connections, and PgBouncer multiplexes them onto a much smaller pool of real Postgres connections. We run with 200–1000 client connections and 20–40 actual Postgres connections.&lt;/p&gt;

&lt;p&gt;Note: transaction pooling means you can't use session-level features like &lt;code&gt;SET LOCAL&lt;/code&gt; in a way that survives across statements, prepared statements get tricky, and &lt;code&gt;LISTEN/NOTIFY&lt;/code&gt; doesn't work well. Plan around this from day one.&lt;/p&gt;
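A back-of-envelope check makes the sizing concrete. This is a hypothetical helper, not something from our codebase; the names and numbers are illustrative and mirror the 8-server example above:

```javascript
// Hypothetical sanity check for a deployment plan: total client-side
// connections versus the server's max_connections.
function connectionBudget({ servers, workersPerServer, poolPerWorker, maxConnections }) {
  const demanded = servers * workersPerServer * poolPerWorker;
  return { demanded, fits: maxConnections > demanded };
}

const plan = connectionBudget({
  servers: 8,
  workersPerServer: 4,
  poolPerWorker: 4,
  maxConnections: 100, // typical small-instance value
});
// plan.demanded is 128 and plan.fits is false: PgBouncer time
```

When `fits` comes back false, the fix is not raising `max_connections`; it's putting PgBouncer in front so the demanded count lands on cheap pooler connections instead.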

&lt;h2&gt;
  
  
  Lesson 2: indexes that don't get used are a tax
&lt;/h2&gt;

&lt;p&gt;Every index speeds up some reads and slows down every write. We've found unused indexes that were costing us 15% of write throughput while contributing nothing to reads. Find them and drop them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;schemaname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indexrelname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;idx_scan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pg_size_pretty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pg_relation_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;indexrelid&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;size&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_stat_user_indexes&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;idx_scan&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;indexrelname&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;LIKE&lt;/span&gt; &lt;span class="s1"&gt;'pg_toast%'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;pg_relation_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;indexrelid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it monthly. Drop anything that's been zero scans for 90+ days and isn't unique-constraint enforcement.&lt;/p&gt;
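That policy is worth codifying so it survives team turnover. A hypothetical sketch; the field names are illustrative and only loosely mirror `pg_stat_user_indexes`:

```javascript
// Hypothetical encoding of the drop policy above: zero scans for 90+
// days, and not backing a unique or primary-key constraint.
function shouldDropIndex({ idxScan, daysObserved, isUnique, isPrimary }) {
  if (isUnique || isPrimary) return false; // constraint enforcement stays
  return idxScan === 0 && daysObserved >= 90;
}
```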

&lt;h2&gt;
  
  
  Lesson 3: partial indexes for the hot path
&lt;/h2&gt;

&lt;p&gt;Our Veda Milk subscription engine queries "all subscriptions where the next delivery is tomorrow." Indexing every subscription on &lt;code&gt;next_delivery_date&lt;/code&gt; works but is wasteful — 95% of subscriptions don't have tomorrow as their next delivery.&lt;/p&gt;

&lt;p&gt;A partial index that covers only the rows matching the hot predicate gives us a 10x smaller index:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;subs_due_tomorrow_idx&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;subscriptions&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;next_delivery_date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'active'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;next_delivery_date&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="k"&gt;CURRENT_DATE&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The planner picks this index automatically when our nightly job filters on &lt;code&gt;status = 'active'&lt;/code&gt; plus a date range on &lt;code&gt;next_delivery_date&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson 4: vacuum tuning is where you find dragons
&lt;/h2&gt;

&lt;p&gt;Default autovacuum settings are fine for normal tables. They're terrible for high-churn tables like wallet ledgers and order tables in subscription commerce.&lt;/p&gt;

&lt;p&gt;For those, we tune per-table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;wallet_ledger&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;autovacuum_vacuum_scale_factor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;autovacuum_analyze_scale_factor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default scale factor is 0.2: vacuum when dead tuples reach 20% of the table. On a million-row table, that means waiting for roughly 200k dead tuples before autovacuum runs. Bloat builds. Indexes degrade. Reads slow.&lt;/p&gt;
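Autovacuum's trigger point is `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples` (defaults 50 and 0.2). A quick sketch of what the per-table tuning buys:

```javascript
// Dead-tuple count at which autovacuum fires for a table.
// Formula from the Postgres docs:
//   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
// Result rounded to a whole tuple count for display.
function vacuumTrigger(reltuples, scaleFactor = 0.2, threshold = 50) {
  return Math.round(threshold + scaleFactor * reltuples);
}

vacuumTrigger(1_000_000);       // stock settings: 200050 dead tuples
vacuumTrigger(1_000_000, 0.05); // tuned:           50050 dead tuples
```

Dropping the scale factor to 0.05 makes vacuum four times more frequent on the hot table, which is exactly what a high-churn ledger needs.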

&lt;h2&gt;
  
  
  Lesson 5: read replicas for analytics, never for primary reads
&lt;/h2&gt;

&lt;p&gt;We used to route some "non-critical" reads to a read replica. It bit us when replica lag spiked during a heavy write burst and customers saw stale balances.&lt;/p&gt;

&lt;p&gt;Now the rule: read replicas are for offline-style analytics queries (BI dashboards, reports, ML feature pipelines). Customer-facing reads always come from the primary, with caching in Redis or the application layer if latency matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson 6: JSONB is not a free lunch
&lt;/h2&gt;

&lt;p&gt;We've seen teams treat JSONB like a schemaless escape hatch. "We don't know the shape yet, let's just store JSON." 18 months later, every query has 4 JSONB extractions and 2 GIN indexes that are 5 GB each.&lt;/p&gt;

&lt;p&gt;Use JSONB for genuinely sparse, nested, or polymorphic data — audit logs, event payloads, third-party API responses. For business entities with predictable schemas, use real columns. Future you will thank present you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson 7: backups are easy, restores are hard
&lt;/h2&gt;

&lt;p&gt;RDS automated backups are great until you need to restore one. We test restores quarterly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spin up a new cluster from the latest snapshot&lt;/li&gt;
&lt;li&gt;Connect a non-prod copy of the app&lt;/li&gt;
&lt;li&gt;Run the smoke-test suite&lt;/li&gt;
&lt;li&gt;Time how long the entire process took&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first time we did this, restore took 4 hours and the smoke tests revealed two missing migrations in the snapshot. Now restore takes 45 min and we know the process works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson 8: migrations need a rollout playbook, not just a tool
&lt;/h2&gt;

&lt;p&gt;Long-running migrations on tables with millions of rows can lock the table for minutes. We use a playbook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema-only changes (CREATE INDEX CONCURRENTLY, ADD COLUMN with default null): safe at any time&lt;/li&gt;
&lt;li&gt;Data backfills: run in batches via a one-off worker, monitor lag, never block the primary connection pool&lt;/li&gt;
&lt;li&gt;Type changes: use a multi-step pattern — add new column, dual-write from app, backfill, switch reads, drop old column over multiple deploys&lt;/li&gt;
&lt;/ul&gt;
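The backfill step above can be sketched as a batched loop. `runBatch` is a stand-in for whatever updates one batch and returns the number of rows touched; this is illustrative, not our actual worker:

```javascript
// Batched backfill sketch: run until a batch comes back short, pausing
// between batches so replicas can catch up and the primary pool breathes.
async function backfill(runBatch, { batchSize = 1000, pauseMs = 100 } = {}) {
  let total = 0;
  for (;;) {
    const updated = await runBatch(batchSize);
    total += updated;
    // a short batch means we've reached the end of the table
    if (updated !== batchSize) return total;
    await new Promise((r) => setTimeout(r, pauseMs));
  }
}
```

In practice `runBatch` would be an `UPDATE ... WHERE id IN (SELECT ... LIMIT n)` style statement, and the pause can be widened dynamically when replica lag grows.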

&lt;p&gt;Never use a migration tool's "apply" button on a table that has more than 1M rows without checking what it actually does first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson 9: pg_stat_statements is your friend
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pg_stat_statements&lt;/code&gt; is enabled on every cluster we operate. The queries at the top of &lt;code&gt;total_exec_time&lt;/code&gt; (&lt;code&gt;total_time&lt;/code&gt; on Postgres 12 and earlier) after a week of production usage are exactly the queries that need indexing, batching, or rewriting. Read it weekly and you'll out-pace anyone debugging in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack we ship with PostgreSQL
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js (with pgbouncer in front)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Migrations:&lt;/strong&gt; node-pg-migrate or knex with peer-reviewed migration scripts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection pool:&lt;/strong&gt; pgbouncer (transaction mode), node-postgres pool per service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backups:&lt;/strong&gt; RDS automated + monthly logical dumps to S3&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; CloudWatch + pg_stat_statements dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replicas:&lt;/strong&gt; RDS read replica for analytics workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a product where Postgres has to scale?
&lt;/h2&gt;

&lt;p&gt;We've shipped 30+ Postgres-backed products without hitting a wall. Most teams do hit one, usually because they don't know what to look for. If you're building one, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has the playbook for setup, scaling, and surviving the next round of growth. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>performance</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Build vs Buy: Authentication for Indian D2C Apps</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Wed, 06 May 2026 14:47:57 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/build-vs-buy-authentication-for-indian-d2c-apps-25ha</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/build-vs-buy-authentication-for-indian-d2c-apps-25ha</guid>
      <description>&lt;p&gt;Auth0 costs ~$0.023 per active user per month at the volumes most Indian D2C startups operate at. For a brand at 100,000 MAU that's ~$2,300/month. For a 1M MAU app, it's ~$23,000/month. Which is fine if your gross margin per user is $5+. It's not fine if your average customer spends ₹300 a month on milk subscriptions.&lt;/p&gt;

&lt;p&gt;This is the calculation that drives most of our &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; backend decisions. Build the auth system, save the recurring fee. Buy the auth system, save 4 weeks of engineering. The right answer changes by company, and we've made both calls. Here's the framework we use.&lt;/p&gt;

&lt;h2&gt;
  
  
  What auth actually has to do for an Indian D2C app
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Phone-OTP login (the default for India, not email/password)&lt;/li&gt;
&lt;li&gt;Email/password as a fallback for older / NRI users&lt;/li&gt;
&lt;li&gt;Social login (Google primarily; Apple for iOS users with iCloud accounts)&lt;/li&gt;
&lt;li&gt;Magic link via WhatsApp or email&lt;/li&gt;
&lt;li&gt;Session management with refresh tokens&lt;/li&gt;
&lt;li&gt;Multi-device login + remote logout&lt;/li&gt;
&lt;li&gt;Account recovery via the original phone OR email&lt;/li&gt;
&lt;li&gt;Rate limiting on OTP requests (a single phone shouldn't be OTP-bombed by attackers)&lt;/li&gt;
&lt;li&gt;DPDP Act compliance (consent storage, data retention, deletion requests)&lt;/li&gt;
&lt;li&gt;Audit logging for compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your auth provider doesn't natively support all of those, you're paying their monthly fee AND building chunks of auth on top. Worst of both worlds.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to buy (Auth0, Clerk, Supabase Auth)
&lt;/h2&gt;

&lt;p&gt;Buy when one of these is true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're under 50,000 MAU and time-to-launch beats running cost&lt;/li&gt;
&lt;li&gt;Your team has zero security engineering experience and you need someone else to take responsibility for the basics&lt;/li&gt;
&lt;li&gt;You need SSO/SAML for B2B customers (rolling SAML yourself is an unforced error)&lt;/li&gt;
&lt;li&gt;You're integrating with HRIS, IDPs, or other enterprise identity systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The real reason to buy isn't "auth is hard." It's that &lt;em&gt;good&lt;/em&gt; auth is hard — password-spray rate-limiting, breach-list checks, device fingerprinting, ATO detection. A modest team will not match what Auth0 or Clerk does on those.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to build
&lt;/h2&gt;

&lt;p&gt;Build when one of these is true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your unit economics can't absorb the per-MAU fee at your projected scale&lt;/li&gt;
&lt;li&gt;You need OTP delivery via Indian SMS gateways (Auth0's global pricing models are wrong for Indian SMS volumes)&lt;/li&gt;
&lt;li&gt;You need WhatsApp OTP, which is often cheaper than SMS in India and which most non-Indian providers don't natively support&lt;/li&gt;
&lt;li&gt;You need consent flows specific to DPDP Act / state regulations&lt;/li&gt;
&lt;li&gt;You're operating at 1M+ MAU and your fee is a six-figure-per-month line item&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What "build" actually means
&lt;/h2&gt;

&lt;p&gt;It does not mean writing JWT signing from scratch. It means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a battle-tested library for password hashing (Argon2 via your language's standard wrapper)&lt;/li&gt;
&lt;li&gt;Use jsonwebtoken or your stack's standard JWT library for token issuance&lt;/li&gt;
&lt;li&gt;Use a vetted OTP provider (MSG91, AWS SNS, or WhatsApp Cloud API for Indian volumes)&lt;/li&gt;
&lt;li&gt;Build the orchestration: registration flow, login flow, OTP issuance/verification, session management, refresh tokens, password reset, social login OAuth dance&lt;/li&gt;
&lt;li&gt;Build the security perimeter: rate limits, lockouts, account-takeover detection, breach-list checks (haveibeenpwned k-anonymity API)&lt;/li&gt;
&lt;li&gt;Build the compliance layer: consent timestamps, deletion workflows, audit logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a senior engineer this is 4–6 weeks of focused work. For a junior team it's 12+ weeks and you'll miss things. The rule we use: if your bench has a senior engineer who has previously shipped auth in production, build. If not, buy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture we keep using when we build
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client → API Gateway → auth-service
                          ↓
                  ┌───────┼─────────┐
                  ↓                  ↓
             OTP provider     PostgreSQL (users, sessions, audit_log)
                  ↓                  ↓
            Redis (rate limit + OTP TTL)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Stateless JWT access tokens (15-min TTL)&lt;/li&gt;
&lt;li&gt;Stateful refresh tokens (90-day TTL, stored hashed in PostgreSQL with a one-time rotation rule)&lt;/li&gt;
&lt;li&gt;Per-phone OTP rate limit (max 5/hour, exponential backoff thereafter)&lt;/li&gt;
&lt;li&gt;Breach-list check on every password set/change (haveibeenpwned k-anonymity)&lt;/li&gt;
&lt;li&gt;Argon2id for password hashing with reasonable cost parameters&lt;/li&gt;
&lt;li&gt;Audit log entry for every auth event (login, logout, password change, MFA enroll, etc.)&lt;/li&gt;
&lt;/ul&gt;
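The per-phone OTP limit can be sketched as follows. In production the counters live in Redis with TTLs; this in-memory version only shows the decision logic, and the names are illustrative:

```javascript
// Sliding-window OTP limiter sketch: max N sends per phone per hour.
// `now` is injectable so the logic is testable without real clocks.
const HOUR_MS = 60 * 60 * 1000;

function makeOtpLimiter({ maxPerHour = 5, now = Date.now } = {}) {
  const sends = new Map(); // phone -> timestamps of recent sends

  return function allowOtp(phone) {
    const t = now();
    const recent = (sends.get(phone) || []).filter((ts) => ts > t - HOUR_MS);
    if (recent.length >= maxPerHour) {
      sends.set(phone, recent);
      return false; // caller applies exponential backoff to the client
    }
    recent.push(t);
    sends.set(phone, recent);
    return true;
  };
}
```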

&lt;h2&gt;
  
  
  Common mistakes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storing OTPs in plaintext.&lt;/strong&gt; Hash them like passwords with a short TTL.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generating tokens without rotation.&lt;/strong&gt; A leaked refresh token should self-revoke when used twice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forgetting to invalidate sessions on password change.&lt;/strong&gt; Otherwise an attacker who held a session continues to hold it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using v4 UUIDs as session IDs.&lt;/strong&gt; The format isn't the problem; the generator often is, because many UUID libraries don't draw from a cryptographically secure RNG. Use 128 bits from a CSPRNG (in Node, &lt;code&gt;crypto.randomBytes(16)&lt;/code&gt;), base64-encoded.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Letting phone numbers be the only identifier.&lt;/strong&gt; Phone numbers get recycled in India. Bind sessions to a stable user_id, not the phone.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building auth (or any other piece of D2C infrastructure)?
&lt;/h2&gt;

&lt;p&gt;When the unit economics force you to build vs buy your auth, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped both kinds. We've integrated Auth0, Clerk, Supabase Auth, and built custom phone-OTP systems for Indian-volume apps. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>security</category>
      <category>startup</category>
    </item>
    <item>
      <title>Building an AI Tutor for Rural India: What Works at 2G Speed</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Wed, 06 May 2026 14:31:07 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/building-an-ai-tutor-for-rural-india-what-works-at-2g-speed-16d5</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/building-an-ai-tutor-for-rural-india-what-works-at-2g-speed-16d5</guid>
      <description>&lt;p&gt;Most coverage of "AI for India" treats the subject the way Silicon Valley treats emerging markets — translate the product, localize the UI, and you're done. Six months of production deployment of &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;7S Samiti&lt;/a&gt;, our adaptive AI tutor for rural Indian students, taught us that this approach gets you maybe 5% of the way to a working product.&lt;/p&gt;

&lt;p&gt;The other 95% is engineering for the actual constraints of rural India: a ₹1,500 phone, 32 GB of total storage shared with WhatsApp and the camera, 2G most of the day with bursts of 4G when the family travels to the nearest town, and a primary user who is either bilingual in Hindi-English Roman script or wants to interact entirely in voice.&lt;/p&gt;

&lt;p&gt;Here's what we learned at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraint 1: storage is the gatekeeper
&lt;/h2&gt;

&lt;p&gt;The phones our users own do not have room to download a 600 MB app. They barely have room for our 80 MB app. We hit storage limits constantly.&lt;/p&gt;

&lt;p&gt;Fix: split the app into a tiny installer (~20 MB) plus on-demand content packs that the user can opt into per subject. When a student finishes Class 8 Mathematics, the next time they have Wi-Fi at school, the Class 8 Science pack downloads. When they finish Science, Math is auto-evicted.&lt;/p&gt;

&lt;p&gt;This is uncomfortable engineering. You have to track which content is on which device, which device has been seen recently, and which packs the student is most likely to need next. We log usage telemetry (locally first, synced when possible) to drive eviction policy intelligently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraint 2: 2G changes everything
&lt;/h2&gt;

&lt;p&gt;A 2G connection is ~50 KB/s on a good day. A 1 MB image takes 20 seconds. A 5 MB video takes 100 seconds.&lt;/p&gt;

&lt;p&gt;We stopped using images for anything that could be expressed in HTML/CSS. Math equations: KaTeX, not screenshots. Diagrams: SVG, not raster. Animations: pre-rendered Lottie JSON files (smaller than GIFs), often under 50 KB.&lt;/p&gt;

&lt;p&gt;Videos for lessons are streamed at 240p with adaptive bitrate. Each lesson has a "text-only" fallback the student can toggle on a slow day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraint 3: voice over text
&lt;/h2&gt;

&lt;p&gt;Our primary users are 11- to 14-year-olds who are still building literacy. Typing English on a touchscreen is slow. Typing Hindi via transliteration is even slower. Voice is faster, more natural, and more accessible.&lt;/p&gt;

&lt;p&gt;We use on-device speech-to-text where possible (Android's offline STT works surprisingly well for Hindi-English code-switching), with a server fallback when local fails. The student speaks their question, the AI tutor responds with both audio and text. The text is there for re-reading; the audio is there for first comprehension.&lt;/p&gt;

&lt;p&gt;Voice has a free side-effect: it's the only way the app works for partially-literate users. We didn't plan for that audience initially. They became 12% of monthly active users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraint 4: the LLM in the middle
&lt;/h2&gt;

&lt;p&gt;The AI tutor generates personalized quizzes, explanations, and study notes from the student's question. Standard LLM territory. The complications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Latency.&lt;/strong&gt; A 4-second LLM response feels fine on Wi-Fi and unbearable on 2G. We stream the response token-by-token, even on 2G. The student sees the first words within ~1 second; the rest fills in as the network allows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost.&lt;/strong&gt; A 7B-class self-hosted model handles 60% of queries; we route only the hard ones to a frontier model. Per-user daily token budget capped at the level a self-supporting student would tolerate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Curriculum alignment.&lt;/strong&gt; The tutor must stay aligned with NCERT (or equivalent state board) curriculum. We retrieval-augment every prompt with the relevant chapter context from a vector store of textbook content the student has selected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hallucination is a child-safety issue.&lt;/strong&gt; A wrong math answer is bad. A wrong history fact is worse if a child memorizes it. We never let the LLM answer factual questions without retrieved context, and we surface an "I'm not sure" UI when confidence is low.&lt;/li&gt;
&lt;/ul&gt;
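The cost-routing rule can be sketched as a tiny function. The 0.6 difficulty cutoff and the on-device fallback at budget exhaustion are illustrative assumptions, with `difficulty` assumed to come from a cheap upstream classifier:

```javascript
// Route a student query to a model tier. Easy queries go to the
// self-hosted 7B, hard ones to the frontier model, and a student who
// has exhausted the daily token budget degrades to the on-device
// distilled model rather than being blocked.
function routeQuery({ difficulty, tokensUsedToday, dailyBudget }) {
  if (tokensUsedToday >= dailyBudget) return "on-device";
  return difficulty > 0.6 ? "frontier" : "self-hosted-7b";
}
```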

&lt;h2&gt;
  
  
  Constraint 5: offline is the default
&lt;/h2&gt;

&lt;p&gt;The app must work entirely offline for the first 3 days. Otherwise, families with limited mobile data won't enroll their kids.&lt;/p&gt;

&lt;p&gt;When the student installs the app, the installer downloads the first 50 lessons of the chosen subject and the on-device classifier model. From there, the AI tutor can generate quizzes, score them, and explain answers entirely on-device using a small distilled model.&lt;/p&gt;

&lt;p&gt;The more capable LLM kicks in when the network is available. The student doesn't notice the boundary; lessons feel continuous regardless.&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraint 6: trust is built in person
&lt;/h2&gt;

&lt;p&gt;You cannot acquire users in rural India through Instagram ads. The trust gap is too wide. We work with local school teachers, NGOs, and panchayat-level community members. Our "onboarding" is a 30-minute session at the school where a teacher walks 10 students through their first lesson together.&lt;/p&gt;

&lt;p&gt;We ship features for those teachers: a teacher dashboard (works on a basic Android), bulk-enroll flows, classroom-mode that mirrors a student's screen so the teacher can help with a stuck question. These features have nothing to do with AI; they're 100% of why the AI works in the field.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobile:&lt;/strong&gt; Flutter (offline-first, ~80 MB base + on-demand content packs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web:&lt;/strong&gt; Next.js (teacher and admin dashboards)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js + PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Layer:&lt;/strong&gt; mix of on-device distilled model + self-hosted 7B for routine + frontier model for hard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speech-to-text:&lt;/strong&gt; Android offline STT primary, server fallback&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content:&lt;/strong&gt; SVGs, KaTeX, Lottie, 240p adaptive video&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture:&lt;/strong&gt; Microservices (auth, content, tutor, telemetry)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; AWS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing:&lt;/strong&gt; Unit → Integration → Production + airplane-mode QA + 2G emulation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we'd tell other teams building for emerging markets
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage budget is your #1 design constraint.&lt;/strong&gt; Build for 100 MB total or you're not in the game.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice is a feature, not a nice-to-have.&lt;/strong&gt; Often, it's the entire UX.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache aggressively, evict gracefully.&lt;/strong&gt; Show the student progress on what's downloading.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test on the actual phone.&lt;/strong&gt; Borrow a friend's old Redmi 7. Open the app there. Cry. Fix.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build for offline-first. Sync is second.&lt;/strong&gt; Reverse this and you'll burn months.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local trust is the acquisition channel.&lt;/strong&gt; Engineer for the teachers who'll evangelize you.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building for emerging markets, rural India, or low-resource environments?
&lt;/h2&gt;

&lt;p&gt;The playbook for premium-market apps is different from the playbook for budget-phone, low-bandwidth, partially-literate audiences. If you're building in this space, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped Flutter apps across rural education, dairy delivery, and field-work apps. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>mobile</category>
      <category>performance</category>
    </item>
    <item>
      <title>Building a LegalTech Super-App: Mapping 7 Personas in Figma Before Writing Code</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Wed, 06 May 2026 14:28:18 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/building-a-legaltech-super-app-mapping-7-personas-in-figma-before-writing-code-2ncc</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/building-a-legaltech-super-app-mapping-7-personas-in-figma-before-writing-code-2ncc</guid>
      <description>&lt;p&gt;Most LegalTech apps are course platforms with a chat bolted on, or chat platforms with a forum bolted on. We built &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Legal Owl&lt;/a&gt; at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; as a real legal-education super-app: structured courses, a community forum, legal journals, and an Advisor Hub where users talk to lawyers via in-app voice or scheduled appointment. Seven distinct user personas, one product.&lt;/p&gt;

&lt;p&gt;The biggest decision we made on Legal Owl wasn't an architectural one. It was a Figma one: we spent three full weeks mapping all seven personas in Figma before any engineering started. Here's why, and the architecture that followed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The seven personas
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Law student&lt;/strong&gt; — wants courses, study notes, exam prep, peer community&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Practicing junior lawyer&lt;/strong&gt; — wants advanced courses, case-law journals, mentorship&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Senior lawyer offering paid time&lt;/strong&gt; — wants a clean booking system, payouts, calendar control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volunteer lawyer answering free questions&lt;/strong&gt; — wants moderation tools, batched responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End-user with a legal question&lt;/strong&gt; — wants quick anonymous answers + paid escalation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Course author&lt;/strong&gt; — wants authoring tools, royalty reports, student feedback&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform admin&lt;/strong&gt; — wants moderation queues, payout management, analytics&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Notice: a single "user" can occupy multiple personas at once. A practicing lawyer can also be a course author and a senior lawyer offering paid time. The persona is a &lt;em&gt;role&lt;/em&gt;, not an identity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why three weeks in Figma
&lt;/h2&gt;

&lt;p&gt;If we'd started building immediately, we'd have shipped an MVP for personas 1 and 5 (the easiest two), then spent 6 months retrofitting the rest. By mapping all seven up front, we caught dozens of cross-persona conflicts before they became code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Course authors and senior lawyers both need a "my earnings" view — one source of truth, two entry points&lt;/li&gt;
&lt;li&gt;Volunteer lawyers shouldn't see "paid escalation" prompts; the same question UI must hide one button based on context&lt;/li&gt;
&lt;li&gt;Admins need to see &lt;em&gt;everything&lt;/em&gt; but never become a bottleneck for routine moderation — community moderators handle it day-to-day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These conflicts cost hours to resolve in Figma. They're painfully expensive to refactor in code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The role system
&lt;/h2&gt;

&lt;p&gt;After Figma, we modeled the data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users
  id, name, email, phone, ...

user_roles
  user_id, role_type (student | junior_lawyer | senior_lawyer | volunteer | end_user | course_author | admin)
  granted_at, granted_by, status, ...

role_capabilities
  role_type, capability (e.g. 'create_course', 'moderate_forum', 'accept_paid_call')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A user can hold multiple roles. The UI checks &lt;code&gt;role_capabilities&lt;/code&gt;, never &lt;code&gt;role_type&lt;/code&gt; directly. "Can this user create a course?" is &lt;code&gt;user has any role with capability 'create_course'&lt;/code&gt;. Adding a new role tomorrow is data, not code.&lt;/p&gt;
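&lt;p&gt;A minimal sketch of that check, with illustrative role and capability names and the capability table held in memory (in production this is a join against &lt;code&gt;role_capabilities&lt;/code&gt;):&lt;/p&gt;

```python
# Illustrative capability table; the real one lives in role_capabilities rows.
ROLE_CAPABILITIES = {
    "course_author": {"create_course"},
    "senior_lawyer": {"accept_paid_call"},
    "volunteer": {"moderate_forum"},
    "admin": {"create_course", "moderate_forum", "accept_paid_call"},
}

def has_capability(user_roles, capability):
    """True if any of the user's active roles grants the capability."""
    return any(capability in ROLE_CAPABILITIES.get(role, set()) for role in user_roles)

# A lawyer who is also a course author:
print(has_capability(["senior_lawyer", "course_author"], "create_course"))  # → True
print(has_capability(["volunteer"], "create_course"))                       # → False
```

&lt;p&gt;The UI only ever asks the capability question, so new roles are a data migration, not a code change.&lt;/p&gt;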

&lt;h2&gt;
  
  
  The Advisor Hub: real-time voice + scheduled calls
&lt;/h2&gt;

&lt;p&gt;The in-app voice call between a user and a lawyer is two-way audio with on-screen call controls and post-call notes. We use WebRTC for the audio path and a signaling service over WebSockets for setup.&lt;/p&gt;

&lt;p&gt;The complications nobody warns you about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Call quality on flaky networks.&lt;/strong&gt; A lawyer on Wi-Fi, a user on 3G in a Tier-2 city. We layer adaptive bitrate + reconnect-on-drop into the client.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lawful recording (where required).&lt;/strong&gt; Some calls must be recorded for compliance. Recording is server-side, not client-side, with explicit consent UI before the call starts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Billing precision.&lt;/strong&gt; Charges are per-minute, but the user expects to pay for the actual duration on their screen, not what the server logs. We reconcile both client and server timestamps and bill on the lower of the two.&lt;/li&gt;
&lt;/ul&gt;
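&lt;p&gt;The billing rule is small enough to show. A hedged sketch: the article only specifies billing on the lower measurement; rounding the final partial minute up is an assumption here.&lt;/p&gt;

```python
import math

def billable_minutes(client_seconds, server_seconds):
    """Bill on the lower of the two measured durations, so the user
    never pays for more than what their screen showed."""
    agreed = min(client_seconds, server_seconds)
    return math.ceil(agreed / 60)  # partial-minute rounding is an assumption

# Server logged 605 s, the user's screen showed 590 s: bill the lower one.
print(billable_minutes(590, 605))  # → 10
```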

&lt;p&gt;Scheduled appointments are simpler: a calendar UI, time-zone-aware booking, automatic reminder notifications, a join-call button that becomes active 5 minutes before start time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Course delivery
&lt;/h2&gt;

&lt;p&gt;Our course module is a custom build, not a Moodle wrapper. Why: we needed to deeply integrate course completion with role progression ("complete this course to unlock the volunteer-lawyer application") and with the Advisor Hub ("after this course, book a 15-min consultation with the author").&lt;/p&gt;

&lt;p&gt;Video is hosted on AWS S3 + CloudFront with signed URLs (24h expiry) so course content can't be hot-linked. Video progress is tracked at 5-second granularity for resume-where-you-left-off.&lt;/p&gt;
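&lt;p&gt;One simple way to implement 5-second granularity is to quantize the reported playback position (a sketch, not our exact tracker):&lt;/p&gt;

```python
def checkpoint(position_seconds, granularity=5):
    """Quantize playback position to the last completed 5-second boundary.
    The server stores only checkpoints, so resume restarts at most
    `granularity` seconds early."""
    return (position_seconds // granularity) * granularity

print(checkpoint(123))  # → 120
print(checkpoint(4))    # → 0
```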

&lt;h2&gt;
  
  
  The forum and journals
&lt;/h2&gt;

&lt;p&gt;The forum is a standard threaded structure (post, comment, reply) with role-aware moderation. Journals are long-form articles authored by senior lawyers with peer review; we built a lightweight "editor + reviewer + publish" workflow.&lt;/p&gt;

&lt;p&gt;Both surface as separate screens in the app but share a unified search index, so a query like "contract breach" returns courses, forum threads, and journal articles together.&lt;/p&gt;

&lt;h2&gt;
  
  
  The admin panel
&lt;/h2&gt;

&lt;p&gt;The admin panel has the longest changelog of any screen we've built. Operations live here: payout reconciliation, dispute resolution, user verification, course approval, journal publication, community moderation escalation, analytics. We built it as a Next.js app with role-based section visibility — every admin sees only the modules their role allows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobile:&lt;/strong&gt; Flutter (iOS + Android)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web:&lt;/strong&gt; Next.js (course web client + admin)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js microservices + PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time:&lt;/strong&gt; WebSockets (signaling, chat, presence) + WebRTC (audio)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background jobs:&lt;/strong&gt; RabbitMQ (course reminders, payout batches, appointment notifications)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture:&lt;/strong&gt; Microservices (auth, courses, forum, advisor, payouts, notifications)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design:&lt;/strong&gt; Figma (7 persona maps, full design system) → Production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; AWS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing:&lt;/strong&gt; Unit → Integration → Production&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we'd tell other LegalTech teams
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start in Figma. Stay in Figma.&lt;/strong&gt; Map every persona before a single line of code. The cost of refactoring an architecture is 50x the cost of redrawing a flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Roles, not types.&lt;/strong&gt; Don't hardcode &lt;code&gt;if user.is_lawyer&lt;/code&gt; checks. Build a role + capability system from day one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recording is a compliance feature, not an engineering feature.&lt;/strong&gt; Loop legal counsel in early; some jurisdictions require explicit signed consent before each recording.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't build your own video hosting.&lt;/strong&gt; Use S3 + CloudFront + signed URLs. The bandwidth math will surprise you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build the admin panel as a first-class product.&lt;/strong&gt; Operations teams use it 8 hours a day. If it's slow or messy, your operating costs balloon.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a LegalTech, EdTech, or multi-persona platform?
&lt;/h2&gt;

&lt;p&gt;Whether it's legal, medical, financial, or any domain with regulated personas — the work is in the role mapping and the cross-persona flows, not the individual screens. If you're building one, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped this archetype across legal education, healthcare delivery, and edtech. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>design</category>
      <category>product</category>
      <category>ux</category>
    </item>
    <item>
      <title>From Figma to Flutter: Designing a System That Scales Across 30 Apps</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:38:46 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/from-figma-to-flutter-designing-a-system-that-scales-across-30-apps-3431</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/from-figma-to-flutter-designing-a-system-that-scales-across-30-apps-3431</guid>
      <description>&lt;p&gt;We've shipped 30+ Flutter apps at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; across D2C commerce, real-time sports, edtech, healthtech, legaltech, marketplaces, and more. Each project starts the same way: Figma file, design system, component library, then code.&lt;/p&gt;

&lt;p&gt;The non-obvious insight from doing this 30 times: the Figma design system and the Flutter component library should be the same artifact, conceptually. Tokens, components, layouts, type ramps — designed once, expressed in both Figma and Dart, kept in sync mechanically. When they drift, your designers and engineers stop trusting each other, and "Figma to production" becomes a punch line.&lt;/p&gt;

&lt;p&gt;Here's the workflow we've converged on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The single source of truth: design tokens
&lt;/h2&gt;

&lt;p&gt;Design tokens are the atomic units. Colors, type sizes, spacing, radii, elevations, motion durations. We define them once in a JSON-like format and &lt;em&gt;generate&lt;/em&gt; both the Figma library and the Flutter &lt;code&gt;ThemeData&lt;/code&gt; from that single file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"primary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#3F51FF"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"surface"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"#FFFFFF"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"space"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"xs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sm"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"radius"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"card"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A build script generates a Flutter &lt;code&gt;tokens.dart&lt;/code&gt; file with strongly-typed constants. Designers import the same JSON into Figma via the Tokens Studio plugin. When a designer adjusts &lt;code&gt;color.primary&lt;/code&gt;, both Figma and the Flutter app pick up the change automatically.&lt;/p&gt;
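&lt;p&gt;A toy version of that build script (the &lt;code&gt;to_dart&lt;/code&gt; name and the flat constant layout are illustrative; our real generator emits strongly-typed Dart, this one just flattens to constants):&lt;/p&gt;

```python
import json

# Abridged token file matching the JSON above.
TOKENS = json.loads("""
{
  "color": {"primary": {"value": "#3F51FF"}, "surface": {"value": "#FFFFFF"}},
  "space": {"xs": {"value": 4}, "sm": {"value": 8}}
}
""")

def to_dart(tokens):
    """Flatten {group: {name: {value}}} into Dart const declarations."""
    lines = ["// GENERATED - do not edit. Source: tokens.json"]
    for group, entries in tokens.items():
        for name, spec in entries.items():
            value = spec["value"]
            rendered = f"'{value}'" if isinstance(value, str) else str(value)
            lines.append(f"const {group}_{name} = {rendered};")
    return "\n".join(lines)

print(to_dart(TOKENS))
```

&lt;p&gt;The point is mechanical sync: the same JSON drives both this script and the Tokens Studio import, so neither side can drift on its own.&lt;/p&gt;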

&lt;p&gt;With this in place, there is no gap between "the design says" and "the app implements". They can't disagree. They share a parent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The component library
&lt;/h2&gt;

&lt;p&gt;On top of tokens sit components. Buttons, inputs, cards, list items, modals, tab bars, snackbars. We build each component twice: once as a Figma component with variants and properties, once as a Flutter widget with named parameters that mirror those properties.&lt;/p&gt;

&lt;p&gt;The Flutter widget always uses tokens, never raw values. &lt;code&gt;padding: EdgeInsets.all(tokens.space.sm)&lt;/code&gt;, never &lt;code&gt;padding: EdgeInsets.all(8)&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The layout primitives
&lt;/h2&gt;

&lt;p&gt;Most projects re-implement the same layouts in slightly different ways: a screen with an app bar, a body, a primary action at the bottom. A modal sheet with a title, body, and dismiss button. A list page with a search bar and infinite scroll.&lt;/p&gt;

&lt;p&gt;We pre-built these as &lt;code&gt;Scaffold&lt;/code&gt;-style layout widgets in our internal package:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;XScaffold(title, body, primaryAction)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;XBottomSheet(title, body, dismissAction, confirmAction)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;XListPage(searchBar, items, onLoadMore, emptyState)&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;New projects start at the layout level, not the widget level. A login screen is two widgets, not twenty.&lt;/p&gt;

&lt;h2&gt;
  
  
  The package structure
&lt;/h2&gt;

&lt;p&gt;One shared package, multiple apps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xenotix_design/
  lib/
    tokens/         (generated from JSON)
    components/     (XButton, XCard, XInput, ...)
    layouts/        (XScaffold, XListPage, ...)
    icons/          (custom icons + lucide passthroughs)
  test/
  pubspec.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every app pulls &lt;code&gt;xenotix_design&lt;/code&gt; as a path or git dependency. Updates to the package propagate to every app on next pubspec update.&lt;/p&gt;

&lt;p&gt;We version the package strictly. Breaking changes go in major versions. Minor versions add components or non-breaking improvements. Apps pin to a major version and update minor versions on their own cadence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The handoff workflow
&lt;/h2&gt;

&lt;p&gt;Figma to Flutter handoff is the friction point on most teams. Ours:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Designer designs in Figma using the shared library (built on shared tokens)&lt;/li&gt;
&lt;li&gt;Designer publishes a Figma frame with notes (interaction states, copy, edge cases)&lt;/li&gt;
&lt;li&gt;Engineer opens the frame, identifies which existing components are used, and which new ones are needed&lt;/li&gt;
&lt;li&gt;If a new component is needed, designer + engineer co-design it in the shared library &lt;em&gt;first&lt;/em&gt;, then both the Figma and Flutter implementations are updated&lt;/li&gt;
&lt;li&gt;Engineer implements the screen by composing existing components&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No handoff document. No "can you make this padding 14 instead of 16?" because padding is a token, and tokens are shared.&lt;/p&gt;

&lt;h2&gt;
  
  
  The design system as a Storybook
&lt;/h2&gt;

&lt;p&gt;We maintain a Flutter implementation of the design system as a runnable Storybook app: every component, every variant, every state, on a single navigable surface. Designers can scroll through it on a phone. Engineers can show "yes, this exact button in this exact state already exists, here's how to use it."&lt;/p&gt;

&lt;p&gt;Storybook also doubles as the regression-test surface. Visual snapshot tests on every component, run on every PR.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we'd tell a team starting their first Flutter design system
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokens first, components second.&lt;/strong&gt; A component built without tokens is a tax you'll pay later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One package, not three.&lt;/strong&gt; Don't split tokens, components, and layouts into separate packages until you have a real reason. Premature splitting creates dependency-graph pain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storybook from week one.&lt;/strong&gt; It's the fastest way to catch component drift.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual diff tests in CI.&lt;/strong&gt; Catches "the button is 1 px taller in this PR" before a designer notices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't customize per app.&lt;/strong&gt; Resist the urge to fork the design system per project. Push customization through tokens (color overrides, spacing adjustments) rather than forking widgets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Stack summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokens:&lt;/strong&gt; JSON, generated to Dart and synced to Figma via Tokens Studio&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Component library:&lt;/strong&gt; custom Flutter package&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layouts:&lt;/strong&gt; custom Flutter package&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storybook:&lt;/strong&gt; runnable Flutter app per package&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution:&lt;/strong&gt; internal git monorepo, pinned versions per app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests:&lt;/strong&gt; flutter_test + alchemist or golden_toolkit for visual snapshots&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a multi-app product family?
&lt;/h2&gt;

&lt;p&gt;One design system across many apps is the difference between a coherent brand and 30 disconnected products. If you're scaling across multiple apps and need them to feel like one product, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has the playbook. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>design</category>
      <category>flutter</category>
      <category>ui</category>
    </item>
    <item>
      <title>Building a Real-Time Opinion-Trading Engine: An Anatomy</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:35:28 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/building-a-real-time-opinion-trading-engine-an-anatomy-2k4p</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/building-a-real-time-opinion-trading-engine-an-anatomy-2k4p</guid>
      <description>&lt;p&gt;If you've used Probo or any "opinion trading" app during an IPL match, you know the experience: the next over hasn't even started and you're buying YES at ₹3 that India will hit a six. Three balls later, your YES is worth ₹7 because the bowler has just been hit for two boundaries. You sell. You make ₹4 in 90 seconds.&lt;/p&gt;

&lt;p&gt;This is a real-time prediction market. Underneath the breezy UX is one of the harder engineering problems in consumer fintech. At &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; we built the trading engine for Cricket Winner. Here's the architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The model
&lt;/h2&gt;

&lt;p&gt;A market is a binary question that will resolve to YES or NO at a specific moment. "Will India win the toss?" "Will Kohli score a fifty in this innings?" "Will the next ball be a wide?"&lt;/p&gt;

&lt;p&gt;Users buy YES or NO contracts. Prices are in rupees and always sum to ₹10 (because exactly one side will pay out ₹10 on resolution). If YES is ₹7, NO is ₹3. As opinion shifts, prices move.&lt;/p&gt;

&lt;p&gt;When the market resolves, holders of the winning side get ₹10 each. Holders of the losing side get ₹0.&lt;/p&gt;
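&lt;p&gt;The arithmetic of the model fits in a few lines (a sketch; prices in whole rupees for simplicity):&lt;/p&gt;

```python
PAYOUT = 10  # the winning side pays out ₹10 per contract

def no_price(yes_price):
    """Prices are complements: YES + NO always sum to the payout."""
    return PAYOUT - yes_price

def pnl(side, entry_price, outcome):
    """Profit or loss per contract held to resolution."""
    return (PAYOUT - entry_price) if side == outcome else -entry_price

print(no_price(7))            # → 3
print(pnl("YES", 3, "YES"))   # → 7  (bought at ₹3, paid ₹10)
print(pnl("YES", 3, "NO"))    # → -3
```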

&lt;h2&gt;
  
  
  What's hard
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Order books are real-time.&lt;/strong&gt; Every buy or sell shifts the price; clients need updates within ~200 ms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Settlement is binary and final.&lt;/strong&gt; When India wins the toss, every YES holder needs ₹10 in their wallet within seconds, deterministically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markets resolve fast.&lt;/strong&gt; A "next ball" market opens for ~30 seconds. Tens of thousands of orders may flow through in that window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Money is involved.&lt;/strong&gt; No skipped writes. No double-payouts. No drift. Wallet ledgers must reconcile down to the paise.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The pipeline
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client → REST place_order → Order Service → Kafka (trades-topic, partitioned by market_id)
                                                               ↓
                                            Matching Engine consumer (one per partition)
                                                               ↓
                                            Order book updates + matched trades
                                                               ↓
                                            Postgres write + Wallet debit/credit
                                                               ↓
                                            Redis pub/sub for price updates
                                                               ↓
                                            WebSocket gateways → Clients
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key constraint: per-market ordering must be strict. If two orders arrive at the same millisecond, only one of them can match the standing best bid; the other goes into the book or matches the next best.&lt;/p&gt;

&lt;p&gt;We enforce this by partitioning Kafka by &lt;code&gt;market_id&lt;/code&gt;, with one matching-engine consumer per partition. Within a partition, Kafka guarantees total ordering, so the matching engine processes orders one at a time, deterministically.&lt;/p&gt;
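&lt;p&gt;The routing rule can be sketched in a few lines (CRC32 stands in for Kafka's default murmur2 partitioner, and the partition count is illustrative):&lt;/p&gt;

```python
import zlib

NUM_PARTITIONS = 12  # illustrative; the real count is a topic config

def partition_for(market_id):
    """Stable hash of the key, so every order for a market lands on the
    same partition and is processed in strict arrival order by that
    partition's single matching-engine consumer."""
    return zlib.crc32(market_id.encode()) % NUM_PARTITIONS

# The same market always maps to the same partition:
print(partition_for("toss-ind-aus") == partition_for("toss-ind-aus"))  # → True
```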

&lt;h2&gt;
  
  
  Why one matching engine per market
&lt;/h2&gt;

&lt;p&gt;A matching engine is a state machine: order book in, trades out. If two engines act on the same market simultaneously, you get races. So we run one engine per market — single-threaded, in-process, with the order book held entirely in memory.&lt;/p&gt;

&lt;p&gt;This sounds risky. "In memory" implies "lost on restart." The mitigation: every event is durably written to Kafka before the engine processes it. On restart, the engine replays all events from the beginning of the partition (or from a snapshot) and reconstructs the order book exactly.&lt;/p&gt;

&lt;p&gt;We also snapshot the order book every 30 seconds to a Postgres &lt;code&gt;order_book_snapshots&lt;/code&gt; table to bound replay time.&lt;/p&gt;
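&lt;p&gt;Snapshot-plus-replay in miniature (a sketch; the real book tracks full orders, not just open quantity per price level):&lt;/p&gt;

```python
import copy

def apply(book, event):
    """Fold one durable log event into the in-memory book.
    Negative qty models a fill or cancel."""
    key = (event["side"], event["price"])
    book[key] = book.get(key, 0) + event["qty"]
    return book

def restore(snapshot, snapshot_offset, log):
    """Rebuild the book: start from the snapshot, replay only the
    events the snapshot hasn't already seen."""
    book = copy.deepcopy(snapshot["book"])
    for event in log[snapshot_offset:]:
        apply(book, event)
    return book

log = [{"side": "YES", "price": 7, "qty": 100},
       {"side": "YES", "price": 7, "qty": -40},
       {"side": "NO", "price": 3, "qty": 50}]
snap = {"book": {("YES", 7): 60}}  # taken after the first two events
print(restore(snap, 2, log))       # → {('YES', 7): 60, ('NO', 3): 50}
```

&lt;p&gt;Replaying from offset 0 with an empty snapshot yields the same book, which is exactly the determinism the engine relies on after a restart.&lt;/p&gt;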

&lt;h2&gt;
  
  
  The wallet integration
&lt;/h2&gt;

&lt;p&gt;Every trade involves two wallets: the buyer's (debited) and the seller's (credited). Both must update atomically.&lt;/p&gt;

&lt;p&gt;We never call the wallet service synchronously from the matching engine. Instead, the engine emits a &lt;code&gt;trade-executed&lt;/code&gt; event to another Kafka topic, and a wallet-update worker consumes those events and applies them as immutable rows to the wallet ledger (see &lt;a href="https://dev.to/ujjawal_tyagi_c5a84255da4/why-every-d2c-wallet-should-be-a-ledger-not-a-counter-2kok"&gt;our other post&lt;/a&gt; on why wallets are ledgers).&lt;/p&gt;

&lt;p&gt;If the wallet update fails, the trade row is marked &lt;code&gt;pending_settlement&lt;/code&gt;. A reconciliation worker retries every minute until success or hard failure. We've never lost money this way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Settlement
&lt;/h2&gt;

&lt;p&gt;When a market resolves (the official source says "India won the toss"), an admin endpoint marks the market as &lt;code&gt;settled&lt;/code&gt; with the outcome. A settlement worker reads the order book + position table, generates one payout row per holder, and pushes the payouts through the same wallet-update pipeline.&lt;/p&gt;

&lt;p&gt;Settlement is also idempotent: every payout is keyed by &lt;code&gt;(market_id, user_id)&lt;/code&gt;, so reruns don't double-pay.&lt;/p&gt;
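&lt;p&gt;Idempotent settlement in miniature (a sketch; the real payouts flow through the wallet ledger, not an in-memory dict):&lt;/p&gt;

```python
def settle(market_id, outcome, positions, ledger):
    """Generate one ₹10-per-contract payout per winning holder, keyed by
    (market_id, user_id) so a rerun is a no-op."""
    for (user_id, side), qty in positions.items():
        if side == outcome:
            ledger.setdefault((market_id, user_id), 10 * qty)

positions = {("alice", "YES"): 3, ("bob", "NO"): 2}
ledger = {}
settle("toss-42", "YES", positions, ledger)
settle("toss-42", "YES", positions, ledger)  # rerun: no double-pay
print(ledger)  # → {('toss-42', 'alice'): 30}
```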

&lt;h2&gt;
  
  
  The prices
&lt;/h2&gt;

&lt;p&gt;Prices in this model are derived from the order book. The "current price" of YES is the midpoint of the best bid and best ask in the YES order book. As the book shifts, the price shifts.&lt;/p&gt;

&lt;p&gt;We push price updates to clients via WebSocket every time the midpoint changes (throttled to ~10 Hz max, to avoid flooding mobile clients on volatile markets).&lt;/p&gt;
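&lt;p&gt;The pricing rule as a sketch (the ~10 Hz rate limit is layered on top and not shown here):&lt;/p&gt;

```python
def midpoint(best_bid, best_ask):
    """Displayed YES price = midpoint of the YES book's best bid and ask."""
    return (best_bid + best_ask) / 2

def maybe_publish(last_sent, best_bid, best_ask):
    """Return a new price to push only when the midpoint actually moved."""
    price = midpoint(best_bid, best_ask)
    return price if price != last_sent else None

print(maybe_publish(7.0, 6.5, 7.5))  # → None (midpoint unchanged)
print(maybe_publish(7.0, 7.0, 8.0))  # → 7.5
```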

&lt;h2&gt;
  
  
  What's hard about real-time UX
&lt;/h2&gt;

&lt;p&gt;The trading screen has to feel instant. The user taps "Buy YES at ₹7" and the price was ₹7 &lt;em&gt;when they tapped&lt;/em&gt;. By the time the request reaches the server, it might be ₹7.50.&lt;/p&gt;

&lt;p&gt;We handle this with limit orders + slippage protection. The user's request includes the price they saw. If the actual matched price exceeds it by more than the user's chosen slippage tolerance (default 5%), the order is rejected and the user is shown the new price. They re-confirm or back off.&lt;/p&gt;
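&lt;p&gt;The acceptance check itself is one comparison (a sketch with the 5% default):&lt;/p&gt;

```python
def accept_order(seen_price, matched_price, slippage=0.05):
    """Accept only if the matched price doesn't exceed what the user saw
    by more than their slippage tolerance."""
    return matched_price <= seen_price * (1 + slippage)

print(accept_order(7.0, 7.3))  # → True  (within 5% of ₹7)
print(accept_order(7.0, 7.5))  # → False (rejected; user re-confirms)
```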

&lt;p&gt;This is how real exchanges handle the same problem. It's table stakes for fairness.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we'd do differently
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Snapshot more aggressively.&lt;/strong&gt; 30 seconds is fine; 5 seconds is better. Replay time matters during incident recovery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use a separate Kafka cluster for the trade pipeline.&lt;/strong&gt; Don't share with general application events. Trade volume is bursty and you don't want it competing for broker resources during match days.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-warm matching engines for upcoming markets.&lt;/strong&gt; When a market opens 30 seconds before the first ball, the engine should already be ready, not cold-starting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build a dedicated reconciliation dashboard from day one.&lt;/strong&gt; When something goes wrong, you need a UI to see exactly which trades didn't settle, why, and a single-click "retry" button.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Stack summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobile:&lt;/strong&gt; Flutter&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web:&lt;/strong&gt; Next.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API gateway:&lt;/strong&gt; Node.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matching engine:&lt;/strong&gt; Node.js single-threaded worker per market partition&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event bus:&lt;/strong&gt; Kafka, partitioned by market_id&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time:&lt;/strong&gt; WebSockets + Redis pub/sub&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wallet:&lt;/strong&gt; PostgreSQL ledger&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snapshots / reconciliation:&lt;/strong&gt; PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; AWS MSK + ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a prediction market or trading product?
&lt;/h2&gt;

&lt;p&gt;Real-time markets are unforgiving — every drift between client price, server price, and settlement value erodes trust. If you're building one, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped the full stack from Flutter UX to Kafka matching engine to settlement reconciliation. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>showdev</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>AWS Deployment Pipeline for Indian Startups: Our GitHub Actions + ECS Fargate Setup</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:34:54 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/aws-deployment-pipeline-for-indian-startups-our-github-actions-ecs-fargate-setup-2p1d</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/aws-deployment-pipeline-for-indian-startups-our-github-actions-ecs-fargate-setup-2p1d</guid>
      <description>&lt;p&gt;We deploy 30+ products from one CI/CD playbook at Xenotix Labs (&lt;a href="https://www.xenotixlabs.com" rel="noopener noreferrer"&gt;https://www.xenotixlabs.com&lt;/a&gt;). Indian startups—DPDPA-compliant, cost-efficient, fast-rollback. Here's the exact stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pipeline
&lt;/h2&gt;

&lt;p&gt;GitHub Actions for CI. Docker for packaging. AWS ECS Fargate for runtime. RDS Postgres for data. CloudFront + S3 for static assets. Sentry for errors. UptimeRobot for pings. That's it. We deliberately skip Kubernetes for startups under $10K MRR; the operational overhead doesn't pay off.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branch strategy
&lt;/h2&gt;

&lt;p&gt;main = production, develop = staging, feature branches = preview environments. Every PR gets a unique preview URL on a Cloudflare Pages-style serverless deployment of the frontend, plus a dedicated ECS task definition for the backend. Reviewers click the URL, test, approve. No "works on my machine" debates.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Actions workflow
&lt;/h2&gt;

&lt;p&gt;Four steps. (1) Lint and type-check on PR. (2) Run Playwright tests against the preview environment. (3) Build Docker image, push to ECR with git SHA + branch tag. (4) Update ECS service with the new image tag, wait for healthy targets, drain old ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rollback in 30 seconds
&lt;/h2&gt;

&lt;p&gt;The single-click rollback button in our internal dashboard re-deploys the previous git SHA's Docker image to ECS. We've used it twice in the last year, both times because a third-party API change broke our integration. 28 seconds from button click to traffic on the old version.&lt;/p&gt;
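&lt;p&gt;The interesting part of the rollback is just "find the previous image tag"; the deploy-history source and the final &lt;code&gt;update_service&lt;/code&gt; call are assumptions here, sketched as a stub:&lt;/p&gt;

```python
def previous_image(deploy_history):
    """deploy_history: newest-first list of deployed image tags (git SHAs)."""
    if len(deploy_history) > 1:
        return deploy_history[1]
    raise RuntimeError("nothing to roll back to")

history = ["repo/app:9f3c2ab", "repo/app:1d4e8cc", "repo/app:77b0a11"]
print(previous_image(history))  # → repo/app:1d4e8cc

# The real button then does roughly (hypothetical names, via boto3):
#   ecs.update_service(cluster="prod", service="api",
#                      taskDefinition=register_revision(previous_image(history)))
```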

&lt;h2&gt;
  
  
  DPDPA compliance
&lt;/h2&gt;

&lt;p&gt;India's DPDPA pushes strongly toward keeping sensitive PII in-country. We use ap-south-1 (Mumbai) for all customer data. Backups stay in-region. Logs that touch PII are redacted at write-time, not read-time. Encryption at rest via KMS; TLS 1.3 enforced in transit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets management
&lt;/h2&gt;

&lt;p&gt;GitHub Actions secrets for build-time, AWS Secrets Manager for runtime. Never .env files in repo, never hardcoded API keys. Quarterly rotation enforced via a cron that creates a PR with rotated values.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost optimization
&lt;/h2&gt;

&lt;p&gt;Fargate Spot for non-critical workloads (cron jobs, async workers) saves ~50%. RDS reserved instances for the primary DB. CloudFront in front of static assets cuts S3 egress by ~90%. Total infra cost for a typical Veda Milk-scale product: under $300/month for the first 6 months.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apps we ship this way
&lt;/h2&gt;

&lt;p&gt;Veda Milk (D2C dairy subscription, Country Delight clone), Cricket Winner (real-time cricket on Kafka + WebSockets), Legal Owl (LegalTech super-app with 7 user personas), ClaimsMitra (insurance survey platform with 114+ REST APIs), Growara (AI WhatsApp automation), 7S Samiti (offline-first AI tutor for rural India). 30+ products shipped, same playbook.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hiring us
&lt;/h2&gt;

&lt;p&gt;If you are a founder shipping production infrastructure on AWS without DevOps headcount, we'd love to talk. Visit &lt;a href="https://www.xenotixlabs.com" rel="noopener noreferrer"&gt;https://www.xenotixlabs.com&lt;/a&gt; or email &lt;a href="mailto:leadgeneration@xenotix.co.in"&gt;leadgeneration@xenotix.co.in&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cicd</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Subscription Pause Logic Is a Week of Work. Here's How to Get It Right.</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:33:23 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/subscription-pause-logic-is-a-week-of-work-heres-how-to-get-it-right-2a6f</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/subscription-pause-logic-is-a-week-of-work-heres-how-to-get-it-right-2a6f</guid>
      <description>&lt;p&gt;The hardest feature in any subscription product isn't subscribing. It's pausing.&lt;/p&gt;

&lt;p&gt;A customer wants to pause her milk delivery from the 12th to the 20th &lt;em&gt;except&lt;/em&gt; on the 14th, because that's her son's birthday and she needs extra paneer. Resume regular delivery on the 21st. Skip Sundays as always. Pause again from the 28th to the 5th of next month for a vacation. While paused, don't bill. While paused for vacation, don't even count the days against her loyalty streak. When she resumes, push her renewal date forward by exactly the number of days paused.&lt;/p&gt;

&lt;p&gt;The UI is three taps. The backend is a week of work. At &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; we've shipped this engine for milk delivery (Veda Milk), subscription marketplaces (Prepe), snack-box subscriptions (Swaadm), and more. Here's the architecture pattern we keep reaching for.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "pause" actually means
&lt;/h2&gt;

&lt;p&gt;The naive model: a &lt;code&gt;subscriptions&lt;/code&gt; table with a &lt;code&gt;status&lt;/code&gt; column that goes &lt;code&gt;active&lt;/code&gt;, &lt;code&gt;paused&lt;/code&gt;, &lt;code&gt;cancelled&lt;/code&gt;. Then a nightly job iterates active subscriptions and generates orders. Easy.&lt;/p&gt;

&lt;p&gt;The real model: a subscription has a &lt;em&gt;recurring schedule&lt;/em&gt; (every Mon/Wed/Fri, every day except Sunday, every weekend, the 1st and 15th of each month) AND a list of &lt;em&gt;exceptions&lt;/em&gt; (skip Aug 14, skip Aug 12-20, skip Aug 28 onwards). The next delivery date is derived from both.&lt;/p&gt;

&lt;p&gt;Generating orders becomes: for each active subscription, check whether tomorrow matches the recurrence rule, check whether tomorrow is covered by a skip exception, and generate an order only if it matches and isn't skipped.&lt;/p&gt;

&lt;h2&gt;
  
  
  The schema
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;subscriptions
  id, user_id, product_id, plan_id
  recurrence_rule  (rrule string or structured: days_of_week, frequency, etc.)
  start_date, end_date
  status           (active, cancelled)
  ...

subscription_exceptions
  subscription_id
  exception_type   (skip, deliver_extra, change_quantity)
  date_or_range    (single date or date range)
  reason           (vacation, special_request, system_pause, payment_failure, ...)
  created_at, created_by, metadata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice: there's no &lt;code&gt;paused&lt;/code&gt; status on the subscription. "Pause from Aug 12 to Aug 20" is just an exception of &lt;code&gt;type = skip&lt;/code&gt; over that date range. "Cancel" is the only state change to the subscription itself.&lt;/p&gt;

&lt;p&gt;This sounds like overkill. It's not. Once you model pauses as exceptions, every customer-support tool, every analytics question, and every backfill becomes trivial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating the order schedule
&lt;/h2&gt;

&lt;p&gt;For any future date &lt;code&gt;D&lt;/code&gt;, the question "will this subscription generate an order on D?" reduces to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is &lt;code&gt;D&lt;/code&gt; between &lt;code&gt;start_date&lt;/code&gt; and &lt;code&gt;end_date&lt;/code&gt; (or no end_date)?&lt;/li&gt;
&lt;li&gt;Does &lt;code&gt;D&lt;/code&gt; match the &lt;code&gt;recurrence_rule&lt;/code&gt;?&lt;/li&gt;
&lt;li&gt;Is &lt;code&gt;D&lt;/code&gt; covered by any &lt;code&gt;subscription_exception&lt;/code&gt; of type &lt;code&gt;skip&lt;/code&gt;?&lt;/li&gt;
&lt;li&gt;If yes to (1) and (2) and no to (3), generate an order.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We encode this as a pure function: &lt;code&gt;would_generate_order(subscription, exceptions, date) -&amp;gt; boolean&lt;/code&gt;. It's pure and testable, with 200+ unit tests covering the edge cases.&lt;/p&gt;
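&lt;p&gt;A minimal Python sketch of that function, with illustrative field names rather than our production schema:&lt;/p&gt;

```python
from datetime import date

def matches_rule(rule, d):
    """Illustrative rule shape: {'days_of_week': set of weekday ints, Monday == 0}."""
    return d.weekday() in rule["days_of_week"]

def would_generate_order(sub, exceptions, d):
    """Pure: (subscription, exceptions, date) -> bool. No I/O, trivially testable."""
    # 1. Inside the subscription's active window?
    if d < sub["start_date"]:
        return False
    if sub["end_date"] is not None and d > sub["end_date"]:
        return False
    # 2. Does the date match the recurrence rule?
    if not matches_rule(sub["recurrence_rule"], d):
        return False
    # 3. Covered by any skip exception?
    for exc in exceptions:
        if exc["type"] == "skip" and exc["start"] <= d <= exc["end"]:
            return False
    return True

sub = {
    "start_date": date(2026, 8, 1),
    "end_date": None,
    "recurrence_rule": {"days_of_week": {0, 1, 2, 3, 4, 5}},  # every day except Sunday
}
vacation = [{"type": "skip", "start": date(2026, 8, 12), "end": date(2026, 8, 20)}]

print(would_generate_order(sub, vacation, date(2026, 8, 14)))  # False: vacation skip
print(would_generate_order(sub, vacation, date(2026, 8, 21)))  # True: back to normal
```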

&lt;h2&gt;
  
  
  The customer-support superpower
&lt;/h2&gt;

&lt;p&gt;When a customer calls saying "why didn't I get my milk on Aug 14?", support runs the function for that date with the customer's actual data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;would_generate_order&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subscription_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;date&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2026-08-14&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;false &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;matched&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt; &lt;span class="n"&gt;E45&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;vacation skip Aug 12-20&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The support agent sees exactly why, with full provenance. No mystery. No "let me escalate to engineering".&lt;/p&gt;

&lt;h2&gt;
  
  
  The vacation pause
&lt;/h2&gt;

&lt;p&gt;"Pause my subscription for a week" creates one &lt;code&gt;subscription_exception&lt;/code&gt; of type &lt;code&gt;skip&lt;/code&gt; with the date range and reason &lt;code&gt;vacation&lt;/code&gt;. Done. The recurrence rule is unchanged.&lt;/p&gt;

&lt;p&gt;When the customer un-pauses early ("actually I'm back, please resume tomorrow"), we shorten the exception's date range. The recurrence rule still hasn't changed. The subscription's status is still &lt;code&gt;active&lt;/code&gt;. The schedule for the next 30 days re-computes correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The renewal-date adjustment
&lt;/h2&gt;

&lt;p&gt;Many subscriptions are billed monthly. If a customer pauses for 7 days mid-month, you may want to push their next billing date forward by 7 days as a gesture. This is its own concern, separate from the schedule.&lt;/p&gt;

&lt;p&gt;We track &lt;code&gt;paused_days_credited&lt;/code&gt; on the subscription. Each &lt;code&gt;skip&lt;/code&gt; exception with &lt;code&gt;reason = 'vacation'&lt;/code&gt; increments the counter. The renewal worker reads the counter and pushes the renewal date forward when generating the next billing cycle.&lt;/p&gt;

&lt;p&gt;Keeping this counter separate from the schedule means the billing logic stays simple, and the schedule logic stays simple. You can debug each independently.&lt;/p&gt;
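&lt;p&gt;The counter mechanics reduce to two small pure helpers; a sketch with hypothetical names, not our production API:&lt;/p&gt;

```python
from datetime import date, timedelta

def credit_vacation_skip(paused_days_credited, skip_start, skip_end):
    """A vacation skip exception credits its inclusive day count to the counter."""
    return paused_days_credited + (skip_end - skip_start).days + 1

def next_renewal_date(base_renewal, paused_days_credited):
    """The renewal worker pushes the billing date forward by the credited days."""
    return base_renewal + timedelta(days=paused_days_credited)

credited = credit_vacation_skip(0, date(2026, 8, 12), date(2026, 8, 20))
print(credited)                                        # 9 days paused
print(next_renewal_date(date(2026, 9, 1), credited))   # renewal pushed to 2026-09-10
```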

&lt;h2&gt;
  
  
  The system-initiated pause
&lt;/h2&gt;

&lt;p&gt;Not all pauses are voluntary. If a customer's payment fails, we may auto-pause until they update their card. This is also just an exception with &lt;code&gt;reason = 'payment_failure'&lt;/code&gt;. When the payment succeeds, the worker shortens or removes the exception.&lt;/p&gt;

&lt;p&gt;Differentiating system pauses from customer pauses by &lt;code&gt;reason&lt;/code&gt; lets us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Show different UI to the customer ("please update your card" vs. "on vacation")&lt;/li&gt;
&lt;li&gt;Avoid double-counting payment-failure days as vacation credits&lt;/li&gt;
&lt;li&gt;Run analytics on involuntary churn&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we'd tell our past selves
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model schedule + exceptions, not status transitions.&lt;/strong&gt; Resist the urge to add a &lt;code&gt;paused&lt;/code&gt; boolean. It looks simpler; it isn't.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make &lt;code&gt;would_generate_order&lt;/code&gt; a pure function.&lt;/strong&gt; Test it exhaustively. It's the heart of the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag every exception with a reason.&lt;/strong&gt; "Skip" is not enough; you need to know &lt;em&gt;why&lt;/em&gt; later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't auto-cancel on long pauses.&lt;/strong&gt; Customers come back; cancellation churn is forever. If a customer hasn't unpaused in 90 days, send a reminder, don't cancel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Show the customer their next 4 dates, computed in real time.&lt;/strong&gt; Not the recurrence rule. The actual dates. This is the single most important UX element of a subscription product — the customer needs to &lt;em&gt;know&lt;/em&gt; when their next delivery is.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a subscription product?
&lt;/h2&gt;

&lt;p&gt;Whether it's milk, meals, content, or services — subscription commerce has dozens of these subtleties that compound over 12 months. If you're building one, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has the scars from shipping subscription engines across multiple verticals. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>saas</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Why Every D2C Wallet Should Be a Ledger, Not a Counter</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:30:54 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/why-every-d2c-wallet-should-be-a-ledger-not-a-counter-2kok</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/why-every-d2c-wallet-should-be-a-ledger-not-a-counter-2kok</guid>
      <description>&lt;p&gt;Friday post-mortem: when we deleted 30,000 customer wallets by accident.&lt;/p&gt;

&lt;p&gt;Then realized we didn't.&lt;/p&gt;

&lt;p&gt;Because we'd built the wallet as a ledger, not a counter.&lt;/p&gt;

&lt;p&gt;This is one of those engineering choices that feels overcautious in week one and saves your business in month nine. At &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; we've shipped wallet systems for D2C dairy commerce (Veda Milk), subscription marketplaces (Prepe), service marketplaces (Cremaster, Housecare), insurance survey payouts (ClaimsMitra), and crypto MLM (BullBot). Different industries, same wallet architecture pattern. Here's why.&lt;/p&gt;

&lt;h2&gt;
  
  
  The two ways to model a wallet
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The counter approach.&lt;/strong&gt; A &lt;code&gt;users&lt;/code&gt; table has a &lt;code&gt;wallet_balance&lt;/code&gt; column. Every credit and debit updates the column with &lt;code&gt;UPDATE users SET wallet_balance = wallet_balance + ? WHERE id = ?&lt;/code&gt;. Simple, fast, easy to query.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ledger approach.&lt;/strong&gt; A &lt;code&gt;wallet_ledger&lt;/code&gt; table records every credit and debit as an immutable row. The user's "balance" is computed at read time as &lt;code&gt;SUM(amount)&lt;/code&gt; over their ledger entries. Slightly more storage, slightly more compute on read, but with a critical property: history is preserved.&lt;/p&gt;

&lt;p&gt;Most teams ship the counter approach because it looks simpler. Then they spend the next two years answering customer-support tickets like "why is my balance off by ₹12?" with no way to answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the ledger gives you
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Auditability.&lt;/strong&gt; Every change is a row with a timestamp, a reason code (&lt;code&gt;signup_credit&lt;/code&gt;, &lt;code&gt;order_debit&lt;/code&gt;, &lt;code&gt;refund&lt;/code&gt;, &lt;code&gt;manual_adjustment&lt;/code&gt;), an actor (user, system, admin), and a reference (which order, which subscription, which support ticket). When a customer disputes a balance, you have the receipts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reversibility.&lt;/strong&gt; When a bug double-charges customers, you don't fix it by manually editing balances. You insert reverse entries with &lt;code&gt;reason_code = 'reversal_of_X'&lt;/code&gt; linking to the bad rows. The reversal itself is now an audit-trail entry. You can prove what happened to anyone who asks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-derivability.&lt;/strong&gt; If your &lt;code&gt;wallet_balance&lt;/code&gt; cache (yes, you can still cache the computed balance) gets corrupted by a bad migration, you re-derive it from the ledger in one query. We've done this in production. It's a non-event when the ledger exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrency safety.&lt;/strong&gt; Two simultaneous debits from the same user can't race when each is its own row. With a counter, you're relying on database-level locking, which is fragile across multiple services.&lt;/p&gt;

&lt;h2&gt;
  
  
  The schema
&lt;/h2&gt;

&lt;p&gt;Here's a stripped-down version of what we ship:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wallet_ledger
  id              (uuid, primary key)
  user_id         (foreign key)
  amount          (integer, in paise; positive = credit, negative = debit)
  reason_code     (enum: signup_credit, order_debit, refund, ...)
  reference_type  (string: 'order', 'subscription', 'support_ticket', ...)
  reference_id    (uuid, points at the entity that caused this entry)
  idempotency_key (uuid, prevents duplicate inserts)
  created_at      (timestamp)
  metadata        (jsonb, free-form for analytics)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;updated_at&lt;/code&gt;. Rows are never updated, only inserted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The balance query
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;COALESCE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;balance_in_paise&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;wallet_ledger&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stays fast on an indexed &lt;code&gt;user_id&lt;/code&gt; even with millions of rows. If it gets slow at scale, materialize a &lt;code&gt;wallet_balance_cache&lt;/code&gt; table that stores the computed balance per user and gets updated by an after-insert trigger. The ledger remains the source of truth; the cache is just an optimization.&lt;/p&gt;
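&lt;p&gt;A runnable sketch of the trigger-maintained cache, using SQLite in Python as a stand-in for Postgres (names are illustrative, and the Postgres trigger syntax differs):&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wallet_ledger (
  user_id INTEGER NOT NULL,
  amount  INTEGER NOT NULL   -- paise; positive = credit, negative = debit
);
CREATE TABLE wallet_balance_cache (
  user_id INTEGER PRIMARY KEY,
  balance INTEGER NOT NULL
);
-- Keep the cache in sync on every insert; the ledger stays the source of truth.
CREATE TRIGGER ledger_to_cache AFTER INSERT ON wallet_ledger
BEGIN
  INSERT OR IGNORE INTO wallet_balance_cache (user_id, balance) VALUES (NEW.user_id, 0);
  UPDATE wallet_balance_cache SET balance = balance + NEW.amount
   WHERE user_id = NEW.user_id;
END;
""")

con.execute("INSERT INTO wallet_ledger VALUES (1, 5000)")   # signup credit
con.execute("INSERT INTO wallet_ledger VALUES (1, -1200)")  # order debit
print(con.execute(
    "SELECT balance FROM wallet_balance_cache WHERE user_id = 1").fetchone()[0])  # 3800
```

&lt;p&gt;If the cache is ever suspect, drop it and re-derive every row from the ledger's SUM.&lt;/p&gt;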

&lt;h2&gt;
  
  
  Idempotency, always
&lt;/h2&gt;

&lt;p&gt;Every wallet write must be idempotent. Networks fail. Workers retry. If the same &lt;code&gt;idempotency_key&lt;/code&gt; is inserted twice, the second insert is a no-op (we use &lt;code&gt;INSERT ... ON CONFLICT (idempotency_key) DO NOTHING&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This costs one indexed column. It saves you from being the engineer at 2 a.m. who has to figure out whether the customer was double-charged.&lt;/p&gt;
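&lt;p&gt;A minimal demonstration of the no-op, with SQLite in Python standing in for Postgres (SQLite accepts the same &lt;code&gt;ON CONFLICT ... DO NOTHING&lt;/code&gt; clause):&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE wallet_ledger (
  idempotency_key TEXT PRIMARY KEY,  -- the uniqueness that makes retries safe
  user_id INTEGER NOT NULL,
  amount  INTEGER NOT NULL
)""")

def credit(key, user_id, amount):
    # A retried write with the same key inserts nothing.
    con.execute(
        "INSERT INTO wallet_ledger VALUES (?, ?, ?) "
        "ON CONFLICT (idempotency_key) DO NOTHING",
        (key, user_id, amount),
    )

credit("signup-user1", 1, 5000)
credit("signup-user1", 1, 5000)  # worker retry after a network timeout
print(con.execute(
    "SELECT COALESCE(SUM(amount), 0) FROM wallet_ledger WHERE user_id = 1"
).fetchone()[0])  # 5000, not 10000
```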

&lt;h2&gt;
  
  
  The transactional wrapper
&lt;/h2&gt;

&lt;p&gt;Wallet debits never live alone. They're paired with the operation they pay for: an order placement, a subscription renewal, a service booking. We always wrap both in a single Postgres transaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="p"&gt;(...)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(...);&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;wallet_ledger&lt;/span&gt; &lt;span class="p"&gt;(...,&lt;/span&gt; &lt;span class="n"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...);&lt;/span&gt;
&lt;span class="k"&gt;COMMIT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If either insert fails, both roll back. There's no state where the order exists but the wallet wasn't charged, or vice versa.&lt;/p&gt;

&lt;p&gt;For cross-service flows (order service writes the order; wallet service writes the ledger), we use the outbox pattern: the order service writes the order + an outbox row in the same transaction, and a worker picks up the outbox row and tells the wallet service to debit. Eventually consistent, never inconsistent.&lt;/p&gt;
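&lt;p&gt;A toy version of the outbox flow, compressed into one SQLite database in Python for illustration; the &lt;code&gt;publish&lt;/code&gt; callback stands in for the message broker:&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER NOT NULL);
CREATE TABLE outbox (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  event TEXT NOT NULL, payload TEXT NOT NULL, published INTEGER DEFAULT 0
);
""")

def place_order(order_id, amount):
    # Order row and outbox row commit in ONE transaction: no dual-write gap.
    with con:
        con.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        con.execute(
            "INSERT INTO outbox (event, payload) VALUES (?, ?)",
            ("wallet.debit", f"order={order_id};amount={amount}"),
        )

def relay(publish):
    # The worker: deliver unpublished rows, then mark them published.
    for row_id, event, payload in con.execute(
        "SELECT id, event, payload FROM outbox WHERE published = 0"
    ).fetchall():
        publish(event, payload)
        con.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))

place_order(42, 9900)
sent = []
relay(lambda event, payload: sent.append((event, payload)))
print(sent)  # [('wallet.debit', 'order=42;amount=9900')]
```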

&lt;h2&gt;
  
  
  Refunds
&lt;/h2&gt;

&lt;p&gt;A refund is a positive ledger entry with &lt;code&gt;reason_code = 'refund'&lt;/code&gt; and &lt;code&gt;reference_id&lt;/code&gt; pointing at the original debit. We never "reverse" a debit by editing it. We compensate with a new entry. The customer's balance updates correctly and the audit trail shows exactly what happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reporting
&lt;/h2&gt;

&lt;p&gt;With a ledger, financial reporting is trivial. "How much did we credit users last month?" is &lt;code&gt;SUM(amount) WHERE amount &amp;gt; 0 AND reason_code = 'signup_credit' AND created_at BETWEEN ... AND ...&lt;/code&gt;. Counter-based wallets can't answer that without a separate analytics system you forgot to build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons from production
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use an integer paise/cent column, never a float.&lt;/strong&gt; Floating-point arithmetic in money columns is how you get ₹0.0000001 errors that compound.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snapshot balances daily.&lt;/strong&gt; Even with a fast SUM query, a daily &lt;code&gt;wallet_balance_snapshot&lt;/code&gt; table lets you do historical analytics ("what was the balance on March 1?") without scanning the whole ledger.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate-limit &lt;code&gt;manual_adjustment&lt;/code&gt; writes.&lt;/strong&gt; This is the only path for human-initiated balance changes to enter the ledger. Audit it heavily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't delete ledger entries, ever.&lt;/strong&gt; If a row was inserted by mistake, insert a compensating reversal. Deletion breaks the audit trail forever.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a wallet, points system, or money-handling product?
&lt;/h2&gt;

&lt;p&gt;Whether it's subscription wallets, marketplace earnings, escrow, points, or refunds, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped ledgers that survive real customer load and real edge cases. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>database</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Building a B2B Marketplace at the Speed of a B2C App</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:28:51 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/building-a-b2b-marketplace-at-the-speed-of-a-b2c-app-30ib</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/building-a-b2b-marketplace-at-the-speed-of-a-b2c-app-30ib</guid>
      <description>&lt;p&gt;B2B marketplaces have a reputation: clunky UX, multi-day onboarding, KYC stuck in PDFs, dashboards that look like 2005, and a UX gulf between the buyer side and the seller side. Most of that is not because B2B is harder — it's because B2B teams build for the procurement officer's checklist instead of the user's experience.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; we've shipped marketplaces across home services (Cremaster, Housecare Solutions), insurance surveys (ClaimsMitra), franchise discovery (Eazybizzy), property listings (Property Kona, Go Society), wedding planning (My Shaadi Store), and bike parts (Axmile). Some are pure B2C, some pure B2B, and some are B2B2C. Here's what we've learned about giving a B2B marketplace the feel of a B2C app without losing what makes B2B work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four expectations B2C has trained users to demand
&lt;/h2&gt;

&lt;p&gt;Whether your user is a procurement officer at a 500-person company or a homeowner ordering a plumber, they bring four expectations from B2C apps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Search returns results in under 200 ms.&lt;/strong&gt; No spinner, no "please wait while we fetch".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding is under 90 seconds.&lt;/strong&gt; Tap, OTP, in.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status updates are real-time.&lt;/strong&gt; I see what's happening as it happens, not on the next page refresh.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The app works on mobile.&lt;/strong&gt; Not "also works on mobile". Works &lt;em&gt;first&lt;/em&gt; on mobile.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most B2B marketplaces fail at all four. The teams that meet all four win the segment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How we structure a B2B marketplace
&lt;/h2&gt;

&lt;p&gt;A B2B marketplace usually has three users with very different needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Buyer&lt;/strong&gt; (browses, requests quotes, places orders, reviews)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seller / Service Provider&lt;/strong&gt; (lists, accepts, fulfills, gets paid)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt; (onboarding, dispute resolution, payouts, KYC, analytics)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We build all three as separate Flutter or Next.js apps that talk to a shared microservices backend. The buyer app is mobile-first. The seller app is mobile-first (sellers are usually on a phone in the field). The admin panel is web-first (operations teams live in dashboards).&lt;/p&gt;

&lt;p&gt;Never the same UI. Never one app with role toggles. Each persona deserves a UI built for them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The sub-second search
&lt;/h2&gt;

&lt;p&gt;The procurement officer's first impression of your marketplace is your search bar. If it lags, you lose.&lt;/p&gt;

&lt;p&gt;Our stack: Postgres for canonical data, an indexed search service for query latency, Redis for caching popular query results, and a CDN-fronted Next.js or Flutter client that prefetches likely-next searches.&lt;/p&gt;

&lt;p&gt;For very large catalogs, we add typeahead with requests debounced at 150 ms, server-side typo tolerance, and synonym expansion (the buyer searching "plumber" should also find listings tagged "sanitary").&lt;/p&gt;
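&lt;p&gt;Synonym expansion can start as a lookup table; a sketch (the mapping below is illustrative, not a real taxonomy):&lt;/p&gt;

```python
SYNONYMS = {
    "plumber": ["plumber", "sanitary", "plumbing"],
    "electrician": ["electrician", "wiring"],
}

def expand_query(q):
    """Expand each term, dedupe while preserving order, OR them together."""
    terms = []
    for word in q.lower().split():
        terms.extend(SYNONYMS.get(word, [word]))
    return " OR ".join(dict.fromkeys(terms))

print(expand_query("plumber near me"))  # plumber OR sanitary OR plumbing OR near OR me
```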

&lt;h2&gt;
  
  
  Onboarding under 90 seconds
&lt;/h2&gt;

&lt;p&gt;The biggest mistake B2B onboarding makes: asking for everything upfront. GST number, PAN, bank details, ID proof, business proof, address proof, three references, and a verification call — all before the user can see a single listing.&lt;/p&gt;

&lt;p&gt;Fix: progressive KYC. The buyer signs up with phone + OTP and gets immediate access to browse and shortlist. Higher-trust actions (placing an order over a threshold, accepting payouts as a seller) trigger the next KYC step contextually, when the value of completing it is obvious to the user.&lt;/p&gt;

&lt;p&gt;We also pre-fill what we can. PAN is usually inferable from GST. Address can be auto-detected. The user types fewer characters than they think.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-time everything (the parts that matter)
&lt;/h2&gt;

&lt;p&gt;Not every screen needs to be real-time. The ones that do, religiously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Order status&lt;/strong&gt; — "out for delivery", "arrived", "completed" updates as they happen&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quote responses&lt;/strong&gt; — when a seller accepts a quote, the buyer sees it instantly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inventory levels&lt;/strong&gt; — if a seller is running low, the buyer should know before placing an order&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pricing changes&lt;/strong&gt; — if a seller updates pricing, the buyer's open carts reflect it (with a clear notice)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We use WebSockets backed by Redis pub/sub. Sellers are notified the same way. Both apps converge on the same state in under a second.&lt;/p&gt;

&lt;h2&gt;
  
  
  The seller side is harder than the buyer side
&lt;/h2&gt;

&lt;p&gt;Most teams under-invest in the seller experience. That's a mistake. Sellers are a much smaller user base than buyers, but they generate the supply that the entire marketplace runs on. If the seller app is bad, supply dries up and the marketplace dies — no matter how good the buyer experience is.&lt;/p&gt;

&lt;p&gt;The seller app has to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast on cheap phones (most field sellers use mid-range Androids)&lt;/li&gt;
&lt;li&gt;Offline-tolerant (delivery boys, surveyors, and service providers have spotty networks)&lt;/li&gt;
&lt;li&gt;Notification-rich without being annoying (a seller who misses a job loses revenue)&lt;/li&gt;
&lt;li&gt;Built around the actual seller workflow, not a buyer workflow with seller "toggles"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We usually ship a separate Flutter app for sellers, with offline-first storage, geo-fenced check-ins, and a job-acceptance flow optimized for one-tap action.&lt;/p&gt;

&lt;h2&gt;
  
  
  The admin panel
&lt;/h2&gt;

&lt;p&gt;Admin panels are where B2B marketplaces actually live or die. The operations team uses it 8 hours a day to onboard sellers, resolve disputes, manage payouts, run promotions, and answer support tickets. If it's slow or messy, the marketplace's operational cost balloons.&lt;/p&gt;

&lt;p&gt;We build admin panels as Next.js apps with role-based access, audit logs on every write, server-side filtering and pagination (admin pages often display thousands of rows), and tight integration with our background-job system for batch operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech stack we keep reaching for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobile (buyer + seller):&lt;/strong&gt; Flutter&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web buyer (when needed):&lt;/strong&gt; Next.js (SSR for SEO)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admin:&lt;/strong&gt; Next.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js microservices + PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background jobs:&lt;/strong&gt; RabbitMQ for orchestration, cron for scheduled tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time:&lt;/strong&gt; WebSockets + Redis pub/sub&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search:&lt;/strong&gt; Postgres FTS for small catalogs, dedicated search service for large ones&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; S3 + CloudFront for media&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; AWS ECS, RDS, ElastiCache&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a marketplace?
&lt;/h2&gt;

&lt;p&gt;Whether B2B, B2C, or hybrid — the same engineering principles apply, with subtle adjustments per persona. If you're building one, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped marketplaces in plumbing, insurance, real estate, weddings, food delivery, plant nurseries, and more. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>product</category>
      <category>showdev</category>
      <category>startup</category>
      <category>ux</category>
    </item>
    <item>
      <title>Shipping AI WhatsApp Automation to Production: Lessons from Growara</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:26:29 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/shipping-ai-whatsapp-automation-to-production-lessons-from-growara-3gm8</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/shipping-ai-whatsapp-automation-to-production-lessons-from-growara-3gm8</guid>
      <description>&lt;p&gt;Most AI-on-WhatsApp demos fall apart in production. The customer asks the same question three different ways. Sentiment shifts mid-conversation. The AI answers something it doesn't actually know — confidently and wrong.&lt;/p&gt;

&lt;p&gt;Building Growara at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; — an AI-powered WhatsApp automation platform — taught us that you don't deploy an LLM to WhatsApp. You deploy a &lt;em&gt;system&lt;/em&gt; that contains an LLM.&lt;/p&gt;

&lt;p&gt;Here's the architecture we settled on after shipping to production for businesses handling 100k+ messages a month.&lt;/p&gt;

&lt;h2&gt;
  
  
  The system, not the model
&lt;/h2&gt;

&lt;p&gt;A naive WhatsApp AI bot looks like this: send the message to the LLM, return whatever it says. Works for the first hundred messages. Then a customer asks "kya tum refund de doge?" ("will you give me a refund?") and the LLM hallucinates a refund policy that doesn't exist.&lt;/p&gt;

&lt;p&gt;The production system looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;WhatsApp → Webhook&lt;/li&gt;
&lt;li&gt;Intent Classifier (small fast model)&lt;/li&gt;
&lt;li&gt;Knowledge Retrieval (vector store of business policies)&lt;/li&gt;
&lt;li&gt;LLM with retrieved context&lt;/li&gt;
&lt;li&gt;Confidence Check&lt;/li&gt;
&lt;li&gt;Reply / Escalate to Human / Ask Clarifying Question&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each layer is doing one job, and the LLM is just one component — not the whole brain.&lt;/p&gt;
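&lt;p&gt;The dispatch skeleton looks like this; every component below is a stub (in production the classifier, vector store, and LLM client are each real services):&lt;/p&gt;

```python
def classify_intent(msg):
    # Stand-in for the small fine-tuned classifier.
    return "informational" if "policy" in msg.lower() else "conversational"

def retrieve_context(msg):
    # Stand-in for the vector-store lookup.
    return ["Returns accepted within 7 days of delivery."]

def llm_answer(msg, context):
    # Stand-in for the LLM call with retrieved context.
    return context[0] if context else "I don't know."

def confidence(answer, context):
    # Stand-in for the second-pass scoring model.
    return 1.0 if answer in context else 0.2

def handle(msg, threshold=0.5):
    intent = classify_intent(msg)                 # cheap classifier first
    if intent == "conversational":
        return ("reply", "Hi! How can we help?")  # templated, no LLM tokens spent
    context = retrieve_context(msg)               # retrieval before generation
    answer = llm_answer(msg, context)
    if confidence(answer, context) < threshold:   # low confidence: human takes over
        return ("escalate", msg)
    return ("reply", answer)

print(handle("what is your return policy?"))
print(handle("hello"))
```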

&lt;h2&gt;
  
  
  Layer 1: Intent classification before LLM
&lt;/h2&gt;

&lt;p&gt;Every inbound message gets classified before it touches an LLM. Categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transactional intent&lt;/strong&gt; (refund, cancel, change address) → high-stakes path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Informational&lt;/strong&gt; (store hours, return policy, product specs) → retrieval path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversational&lt;/strong&gt; (greeting, smalltalk) → templated reply&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Out-of-scope&lt;/strong&gt; (medical, legal, anything unrelated) → polite decline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We use a small fine-tuned classifier model running locally — not the main LLM. Fast, cheap, deterministic. Don't pay LLM token costs for a routing decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layer 2: Retrieval-augmented context
&lt;/h2&gt;

&lt;p&gt;For informational intents, we never let the LLM "freestyle." We retrieve relevant content from a vector store of the business's actual policies, product catalog, FAQs, and shipping rules.&lt;/p&gt;

&lt;p&gt;The retrieved chunks are passed into the LLM as context with a hard system prompt: &lt;em&gt;"Answer ONLY using the retrieved context below. If the answer is not in the context, say you don't know and offer to escalate."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the difference between hallucinated refund policies and accurate ones.&lt;/p&gt;
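&lt;p&gt;Assembling that prompt is mundane string work, which is exactly why it belongs in tested code rather than scattered template literals. A minimal sketch (the wording is paraphrased, not our production prompt):&lt;/p&gt;

```typescript
// Build the RAG prompt: numbered context chunks plus the hard system
// instruction that pins the model to the retrieved context.
function buildPrompt(chunks: string[], question: string): string {
  const context = chunks.map((c, i) => "[" + (i + 1) + "] " + c).join("\n");
  return [
    "Answer ONLY using the retrieved context below.",
    "If the answer is not in the context, say you don't know and offer to escalate.",
    "",
    "Context:",
    context,
    "",
    "Customer question: " + question,
  ].join("\n");
}
```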

&lt;h2&gt;
  
  
  Layer 3: Confidence scoring
&lt;/h2&gt;

&lt;p&gt;Every LLM response goes through a second pass that scores confidence (does the response actually match the retrieved context?), sentiment (is the customer frustrated, happy, neutral?), and action commitment (does the response promise something the system can't actually do?).&lt;/p&gt;

&lt;p&gt;If confidence is low, or the response promises an action we can't fulfill, we escalate to a human.&lt;/p&gt;
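&lt;p&gt;The confidence pass itself is a model call, but its cheapest component can be computed without one: what fraction of the answer's content words actually appear in the retrieved context. A sketch of that signal's shape (not the production scorer, which uses a second model pass):&lt;/p&gt;

```typescript
// Crude, dependency-free grounding score: the fraction of the answer's
// content words (4+ letters) that occur somewhere in the retrieved context.
// 1.0 means fully grounded, 0.0 means nothing overlaps.
function groundingScore(answer: string, chunks: string[]): number {
  const contextWords = new Set(
    chunks.join(" ").toLowerCase().match(/[a-z]{4,}/g) ?? []
  );
  const answerWords = answer.toLowerCase().match(/[a-z]{4,}/g) ?? [];
  if (answerWords.length === 0) return 0;
  const hits = answerWords.filter((w) => contextWords.has(w)).length;
  return hits / answerWords.length;
}
```

&lt;p&gt;A low score here is a cheap early warning that routes the response into the more expensive second-pass check or straight to a human.&lt;/p&gt;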

&lt;h2&gt;
  
  
  Layer 4: Hard escalation triggers
&lt;/h2&gt;

&lt;p&gt;Some things never go to the LLM. Hard-coded triggers immediately route to a human:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mentions of "complaint", "refund over [X]", "lawyer", "consumer court"&lt;/li&gt;
&lt;li&gt;Sentiment sharply negative for 2+ messages in a row&lt;/li&gt;
&lt;li&gt;Customer routed through AI 5+ times without resolution&lt;/li&gt;
&lt;li&gt;Any payment, refund, or monetary commitment over a threshold&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Better to over-escalate than to let an AI commit to a refund the business can't honor.&lt;/p&gt;
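&lt;p&gt;Because these triggers are plain data and comparisons, not prompts, they're trivially testable. A sketch with hypothetical thresholds:&lt;/p&gt;

```typescript
// Hard escalation triggers. Matching any of these bypasses the LLM
// entirely and routes the conversation to a human.
interface ConversationState {
  lastMessage: string;
  negativeStreak: number;          // consecutive sharply-negative messages
  aiTurnsWithoutResolution: number;
  pendingAmount: number;           // money involved in the request, 0 if none
}

const REFUND_THRESHOLD = 5000;     // hypothetical per-business threshold
const KEYWORDS = /\b(complaint|lawyer|consumer court)\b/i;

function mustEscalate(s: ConversationState): boolean {
  if (KEYWORDS.test(s.lastMessage)) return true;
  if (s.negativeStreak >= 2) return true;
  if (s.aiTurnsWithoutResolution >= 5) return true;
  if (s.pendingAmount > REFUND_THRESHOLD) return true;
  return false;
}
```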

&lt;h2&gt;
  
  
  The infrastructure
&lt;/h2&gt;

&lt;p&gt;The Meta WhatsApp Business API has its own rules — message templates for outbound, 24-hour customer-service window, opt-in management, rate limits per phone number.&lt;/p&gt;

&lt;p&gt;Our stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhook receiver&lt;/strong&gt; — Node.js handling Meta webhooks, signature verification, dedup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message queue&lt;/strong&gt; — RabbitMQ for inbound buffering and outbound rate limiting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI orchestration&lt;/strong&gt; — Node.js running classify → retrieve → LLM → confidence pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector store&lt;/strong&gt; — per-tenant knowledge bases (each business has its own)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversation store&lt;/strong&gt; — PostgreSQL for canonical history + audit log&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human handoff&lt;/strong&gt; — Next.js dashboard with pre-drafted AI responses for human review&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Template manager&lt;/strong&gt; — for managing Meta-approved message templates&lt;/li&gt;
&lt;/ul&gt;
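&lt;p&gt;One piece of that webhook receiver is worth showing: Meta signs every webhook delivery with an HMAC-SHA256 of your app secret over the raw request body, sent in the &lt;code&gt;X-Hub-Signature-256&lt;/code&gt; header as &lt;code&gt;sha256=&lt;/code&gt; plus a hex digest. Verify against the raw bytes before parsing anything; a minimal Node sketch:&lt;/p&gt;

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Meta webhook signature. Must run on the RAW body string,
// before any JSON parsing; middleware that re-serializes the body
// will change the bytes and break the signature.
function verifyMetaSignature(rawBody: string, header: string, appSecret: string): boolean {
  const expected = "sha256=" +
    createHmac("sha256", appSecret).update(rawBody, "utf8").digest("hex");
  const a = Buffer.from(header);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length ? timingSafeEqual(a, b) : false;
}
```

&lt;p&gt;Reject anything that fails this check before it ever reaches the queue; it's also your first line of dedup-adjacent hygiene.&lt;/p&gt;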

&lt;h2&gt;
  
  
  Cost guardrails
&lt;/h2&gt;

&lt;p&gt;LLMs bill per token. Without ceilings, one chatty user with a long conversation can cost more than they're worth.&lt;/p&gt;

&lt;p&gt;Our rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Per-conversation token budget (10k input + 2k output max)&lt;/li&gt;
&lt;li&gt;Per-user daily cap (50k tokens/day max per phone)&lt;/li&gt;
&lt;li&gt;Per-tenant monthly cap (configurable; alerts at 80%)&lt;/li&gt;
&lt;li&gt;Cheaper models for non-critical paths (classification, summarization use smaller models)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a cap is hit, degrade gracefully: hand off to a human or send a templated "we'll get back to you".&lt;/p&gt;
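&lt;p&gt;The budget check itself is a handful of comparisons over a usage record. A sketch using the numbers above (the tenant default here is hypothetical, since that cap is configurable per tenant):&lt;/p&gt;

```typescript
// Layered token-budget check. Any exceeded ceiling means "degrade":
// hand off to a human or fall back to a templated reply.
interface Usage {
  convoTokens: number;        // tokens used in this conversation
  userTokensToday: number;    // tokens used by this phone number today
  tenantTokensMonth: number;  // tokens used by this business this month
}

const LIMITS = {
  perConversation: 12000,     // combined ceiling for the 10k-in / 2k-out rule
  perUserDaily: 50000,
  perTenantMonthly: 2000000,  // hypothetical default; configurable per tenant
};

type BudgetDecision = "proceed" | "degrade";

function checkBudget(u: Usage): BudgetDecision {
  if (u.convoTokens >= LIMITS.perConversation) return "degrade";
  if (u.userTokensToday >= LIMITS.perUserDaily) return "degrade";
  if (u.tenantTokensMonth >= LIMITS.perTenantMonthly) return "degrade";
  return "proceed";
}
```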

&lt;h2&gt;
  
  
  Evals before deployment
&lt;/h2&gt;

&lt;p&gt;Every prompt change, every retrieval tweak, every model swap goes through an evaluation suite of ~500 representative conversations. Eval scores are tracked in CI like test coverage: a regression below threshold blocks the merge.&lt;/p&gt;

&lt;p&gt;Without evals, "improvements" silently break edge cases. With evals, you ship with confidence.&lt;/p&gt;
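&lt;p&gt;The CI gate itself can be tiny. A sketch assuming a pass/fail eval suite and a small regression tolerance (both the scoring scheme and the 2-point tolerance are hypothetical choices):&lt;/p&gt;

```typescript
// Eval gate in miniature: score the suite, compare against the last
// accepted baseline, and block the merge on a regression.
interface EvalResult { name: string; passed: boolean; }

function score(results: EvalResult[]): number {
  const passed = results.filter((r) => r.passed).length;
  return (100 * passed) / results.length;  // percentage passed
}

function gate(results: EvalResult[], baselineScore: number): "merge" | "block" {
  // Allow a 2-point tolerance for flaky cases; anything worse blocks.
  return score(results) >= baselineScore - 2 ? "merge" : "block";
}
```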

&lt;h2&gt;
  
  
  What we'd do differently
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build evals on day one.&lt;/strong&gt; Reverse-engineering an eval suite after 6 months of production is brutal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't trust LLM JSON output.&lt;/strong&gt; Use a smaller model or schema validator before acting on it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-translate, then prompt.&lt;/strong&gt; For multilingual (Hindi/English/Tamil), translating to English before the main LLM was more reliable than mixed-script prompts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log everything.&lt;/strong&gt; Every classification, every retrieved chunk, every score, every prompt version. When something goes wrong in production, you'll need all of it.&lt;/li&gt;
&lt;/ul&gt;
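&lt;p&gt;The "don't trust LLM JSON" point deserves code: parse and validate every field before acting on a model's structured output. A hand-rolled sketch (in production you'd likely reach for a schema library, and the &lt;code&gt;RefundAction&lt;/code&gt; shape here is hypothetical):&lt;/p&gt;

```typescript
// Validate model-produced JSON before acting on it. Anything that
// fails a check returns null instead of an object the system trusts.
interface RefundAction { action: "refund"; orderId: string; amount: number; }

function parseRefundAction(raw: string): RefundAction | null {
  let data: unknown;
  try { data = JSON.parse(raw); } catch { return null; } // model emitted prose, not JSON
  if (typeof data !== "object" || data === null) return null;
  const d = data as { [key: string]: unknown };
  if (d.action !== "refund") return null;
  if (typeof d.orderId !== "string") return null;
  if (typeof d.amount !== "number" || Number.isNaN(d.amount)) return null;
  return { action: "refund", orderId: d.orderId, amount: d.amount };
}
```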

&lt;h2&gt;
  
  
  Building AI-powered customer engagement?
&lt;/h2&gt;

&lt;p&gt;Whether it's WhatsApp, in-app chat, voice, or email — production AI is a discipline. The model is 10% of the work; the rest is the system around it. If you're building in this space, &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;Xenotix Labs&lt;/a&gt; has shipped AI customer-engagement products that survive real customer load. Reach out at &lt;a href="https://xenotixlabs.com" rel="noopener noreferrer"&gt;https://xenotixlabs.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
      <category>llm</category>
    </item>
    <item>
      <title>Smart Contracts in Production: What We Learned Building a Tokenized Loyalty Platform</title>
      <dc:creator>Ujjawal Tyagi</dc:creator>
      <pubDate>Tue, 28 Apr 2026 07:23:46 +0000</pubDate>
      <link>https://dev.to/ujjawal_tyagi_c5a84255da4/smart-contracts-in-production-what-we-learned-building-a-tokenized-loyalty-platform-4pho</link>
      <guid>https://dev.to/ujjawal_tyagi_c5a84255da4/smart-contracts-in-production-what-we-learned-building-a-tokenized-loyalty-platform-4pho</guid>
      <description>&lt;p&gt;Most blockchain projects we see in India are theatre. Tokens with no utility, NFTs that nobody actually owns, smart contracts copy-pasted from a tutorial. When a client at Xenotix Labs (&lt;a href="https://www.xenotixlabs.com" rel="noopener noreferrer"&gt;https://www.xenotixlabs.com&lt;/a&gt;) asked us to build a tokenized loyalty platform for a real B2C brand, we had to make blockchain actually pay for itself. Here is what worked and what didn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The constraints
&lt;/h2&gt;

&lt;p&gt;The brand wanted: customers earn loyalty tokens for purchases, can spend tokens on cashback or merchandise, can transfer tokens peer-to-peer, and (this was the hard one) can NEVER lose tokens because of a wallet failure or seed phrase loss. Most consumers in India don't understand wallets. They just want the points to work like points.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custodial-first, with optional self-custody
&lt;/h2&gt;

&lt;p&gt;We built it custodial-first. The platform owns the wallets, the customer signs in with email + OTP, and the tokens live on a private ERC-20 deployment on Polygon. Customers can later "graduate" their wallet to self-custody if they want to, but 99% don't bother, which is fine. We use OpenZeppelin's MinimalForwarder for gasless transactions so customers never see gas fees.&lt;/p&gt;

&lt;p&gt;This got pushback from blockchain purists ("not your keys, not your coins"). We don't care. Mainstream adoption requires custodial UX. The blockchain is an implementation detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  The contract architecture
&lt;/h2&gt;

&lt;p&gt;Three contracts. (1) ERC-20 token contract with mint/burn limited to the platform multisig. (2) Loyalty engine contract that records earn and burn events with merchant ID and category, so we can do reporting on-chain. (3) P2P transfer contract with built-in compliance hooks (we throttle large transfers and require KYC above a threshold).&lt;/p&gt;

&lt;p&gt;We deliberately did NOT make the contracts upgradeable on day one. Upgradeable proxies introduce admin keys that are basically a backdoor; security auditors hate them. Instead we wrote a migration-via-snapshot pattern: when we need to upgrade, we deploy new contracts, snapshot the old state, and bulk-mint into the new contract. Manual, but auditable.&lt;/p&gt;
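&lt;p&gt;The off-chain half of that migration is mostly batching. A sketch of slicing a holder snapshot into bulk-mint batches (the contract calls themselves are omitted, and the batch size is a gas-tuning choice, not a fixed rule):&lt;/p&gt;

```typescript
// Migration-via-snapshot, off-chain half: slice the snapshot of old
// contract state into fixed-size batches for bulk-mint transactions
// into the new contract. The mint calls themselves are not shown.
interface Holder { wallet: string; balance: string; } // base units as decimal string

function batchHolders(snapshot: Holder[], batchSize: number): Holder[][] {
  const batches: Holder[][] = [];
  for (let i = 0; snapshot.length > i; i += batchSize) {
    batches.push(snapshot.slice(i, i + batchSize));
  }
  return batches;
}
```

&lt;p&gt;Balances as decimal strings rather than JS numbers is deliberate: uint256 values overflow a double long before they overflow a string.&lt;/p&gt;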

&lt;h2&gt;
  
  
  Off-chain state matters more than on-chain
&lt;/h2&gt;

&lt;p&gt;Here's the unglamorous truth: 90% of our backend code is off-chain. The chain is the source of truth for token balances and transfers, but everything else (merchant catalogs, redemption rules, fraud detection, customer support) lives in PostgreSQL. The blockchain is the ledger. The database is the application.&lt;/p&gt;

&lt;p&gt;We sync between the two using an indexer (we use Goldsky on top of Polygon RPC) and a reconciliation cron that flags any drift.&lt;/p&gt;
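&lt;p&gt;The reconciliation job reduces to a balance diff between the two systems. A sketch with stubbed data sources (in production the inputs come from the indexer and from Postgres, and flagged wallets page an on-call engineer rather than just returning from a function):&lt;/p&gt;

```typescript
// Reconciliation in miniature: compare indexed on-chain balances
// against the database view and return every wallet that drifted.
// A wallet missing from the database also counts as drift.
interface BalanceRow { wallet: string; balance: string; } // base units as decimal string

function findDrift(onChain: BalanceRow[], offChain: BalanceRow[]): string[] {
  const db = new Map(offChain.map((r) => [r.wallet, r.balance] as [string, string]));
  const drifted: string[] = [];
  for (const row of onChain) {
    if (db.get(row.wallet) !== row.balance) drifted.push(row.wallet);
  }
  return drifted;
}
```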

&lt;h2&gt;
  
  
  What we'd do differently
&lt;/h2&gt;

&lt;p&gt;We'd skip Solidity for v1 and use a programmable database with cryptographic audit logs. The marginal value of public-chain immutability for a closed-loop loyalty program is low. We learned this 4 months in. The next iteration of the product is moving in that direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other apps we've shipped at Xenotix Labs
&lt;/h2&gt;

&lt;p&gt;Veda Milk (D2C dairy subscription, Country Delight clone), Cricket Winner (real-time cricket on Kafka + WebSockets), Legal Owl (LegalTech super-app with 7 user personas and live lawyer calls), ClaimsMitra (insurance survey platform with 114+ REST APIs), Growara (AI WhatsApp automation), 7S Samiti (offline-first AI tutor for rural India). Stack: Flutter, Next.js, Node.js on AWS plus blockchain integrations when actually needed. 30+ products shipped.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hiring us
&lt;/h2&gt;

&lt;p&gt;If you are building a loyalty product, a tokenized rewards platform, or something that needs blockchain plumbing without blockchain theatre, we would love to talk. Visit &lt;a href="https://www.xenotixlabs.com" rel="noopener noreferrer"&gt;https://www.xenotixlabs.com&lt;/a&gt; or email &lt;a href="mailto:leadgeneration@xenotix.co.in"&gt;leadgeneration@xenotix.co.in&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>blockchain</category>
      <category>softwareengineering</category>
      <category>web3</category>
    </item>
  </channel>
</rss>
