<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karthik Gundu</title>
    <description>The latest articles on DEV Community by Karthik Gundu (@karrrthik7).</description>
    <link>https://dev.to/karrrthik7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3780856%2Fdac0cb66-f45b-4db0-ada1-58c59480a8d4.png</url>
      <title>DEV Community: Karthik Gundu</title>
      <link>https://dev.to/karrrthik7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karrrthik7"/>
    <language>en</language>
    <item>
      <title>Building an A/B Testing Prototype in RUXAILAB (Without Breaking the System)</title>
      <dc:creator>Karthik Gundu</dc:creator>
      <pubDate>Fri, 20 Mar 2026 11:20:49 +0000</pubDate>
      <link>https://dev.to/karrrthik7/adding-ab-testing-to-ruxailab-building-a-prototype-without-breaking-everything-137h</link>
      <guid>https://dev.to/karrrthik7/adding-ab-testing-to-ruxailab-building-a-prototype-without-breaking-everything-137h</guid>
      <description>&lt;p&gt;There’s a big difference between building a feature in isolation and adding one to a living product.&lt;/p&gt;

&lt;p&gt;This A/B testing system started as a fairly straightforward idea: let researchers define experiments, assign participants to variants deterministically, log behavior, and visualize basic results. On paper, that sounds clean. In practice, I was integrating it into an existing Vue 3 + Vuex + Firebase application called RUXAILAB, with its own routing conventions, store patterns, Firebase setup, and a very real history of “this already works, don’t accidentally break it.”&lt;/p&gt;

&lt;p&gt;That changed the nature of the task completely.&lt;/p&gt;

&lt;p&gt;The goal wasn’t just to make an A/B testing demo. It was to build an MVP that felt native to the current system, respected the architecture already in place, and could survive the usual realities of frontend reactivity, Firebase emulators, callable functions, and local development drift. By the end, the feature worked end to end: create experiment, assign user, log event, view dashboard. But getting there was much more about integration and debugging than just writing new code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfa98gw5vtg5ynjk1dhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfa98gw5vtg5ynjk1dhf.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Understanding the Existing System&lt;/h2&gt;

&lt;p&gt;Before building anything, I spent time reading the codebase to understand how RUXAILAB was structured.&lt;/p&gt;

&lt;p&gt;The application already had a fairly established shape. The frontend used Vue 3 with Vuex for state management, Vue Router for navigation, and Firebase for backend infrastructure. On the backend side, Firebase wasn’t just a database. It was already doing a lot of platform work: Firestore for persistence, Cloud Functions for server-side logic, and Auth for identity.&lt;/p&gt;

&lt;p&gt;The first interesting thing I noticed was that the project already had an &lt;code&gt;experiments&lt;/code&gt; area in the codebase. It wasn’t fully aligned with the MVP I needed to build, but it was enough to tell me that the right move was not to invent a second parallel system. That’s always a trap in mature codebases. You think you’re moving fast by starting fresh, but what you’re actually doing is creating a future cleanup problem.&lt;/p&gt;

&lt;p&gt;So instead of replacing everything, I treated the existing project structure as a constraint and an advantage. I kept the experiments feature inside the established frontend pattern, reused the Vuex registration approach already present in the root store, and integrated with the project’s existing Firebase callable function style instead of introducing a separate API layer.&lt;/p&gt;

&lt;p&gt;That early decision saved a lot of pain later. The work became less about “how do I build A/B testing?” and more about “how do I make A/B testing feel like it belongs here?”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1d9jgfi9n40j463sx4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1d9jgfi9n40j463sx4l.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Designing the A/B Testing System&lt;/h2&gt;

&lt;p&gt;At a high level, the system had four responsibilities.&lt;/p&gt;

&lt;p&gt;First, experiments needed to be configurable. A researcher should be able to define a study ID, specify variants, set allocation weights, and save the experiment.&lt;/p&gt;

&lt;p&gt;Second, assignment needed to be deterministic. If the same participant revisited the same experiment, they should land in the same variant every time. That immediately ruled out anything random at render time. Variant selection had to be stable and backend-backed.&lt;/p&gt;

&lt;p&gt;Third, events needed to be recorded in a way that could support analytics later. For the MVP, that meant a simple event collection with variant, metric, value, and timestamp.&lt;/p&gt;

&lt;p&gt;Fourth, the system needed a minimal analytics surface. Not a full statistical engine yet, but enough to verify that the experiment was alive and behaving correctly: how many users got each variant, and how many events each variant generated.&lt;/p&gt;

&lt;p&gt;That led to a design with a fairly clean split:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A frontend experiments module to handle UI, routing, and Vuex state&lt;/li&gt;
&lt;li&gt;A Firestore service layer for persistence concerns&lt;/li&gt;
&lt;li&gt;A controller layer to coordinate Firestore and callable functions&lt;/li&gt;
&lt;li&gt;Firebase callable functions for assignment and aggregation&lt;/li&gt;
&lt;li&gt;A small Python analysis stub to leave room for future statistical work&lt;/li&gt;
&lt;/ul&gt;
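&lt;p&gt;The configuration piece from that list can be sketched as a plain document builder with a weight check. Field names here are illustrative, not the actual RUXAILAB schema:&lt;/p&gt;

```javascript
// Hypothetical shape of an experiment document. Weights are treated
// as percentages that must sum to 100 before the experiment is saved.
function makeExperiment(studyId, variants) {
  const totalWeight = variants.reduce((sum, v) => sum + v.weight, 0);
  if (totalWeight !== 100) {
    throw new Error("allocation weights must sum to 100");
  }
  return {
    studyId,
    variants,            // e.g. [{ id: "A", weight: 50 }, { id: "B", weight: 50 }]
    status: "active",
    createdAt: Date.now(),
  };
}

const exp = makeExperiment("study-123", [
  { id: "A", weight: 50 },
  { id: "B", weight: 50 },
]);
```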

&lt;p&gt;One design decision that mattered a lot was keeping the assignment logic server driven. It would have been easy to hash on the client and just write the result to Firestore, but that would have made the experiment contract much weaker. By putting assignment in a callable function, I kept the logic centralized and deterministic from one source of truth.&lt;/p&gt;

&lt;p&gt;At the same time, I added a Firestore fallback path on the client for local development. That was not the original plan, but it became necessary after hitting emulator issues. In the end, it made the system more resilient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y15knvhl628gvp6r2x1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y15knvhl628gvp6r2x1.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Implementation Details&lt;/h2&gt;

&lt;h3&gt;The Experiment Module&lt;/h3&gt;

&lt;p&gt;I created a dedicated feature module under &lt;code&gt;src/features/experiments/&lt;/code&gt; and split it into the usual layers: components, views, controllers, store, and services.&lt;/p&gt;

&lt;p&gt;The UI was intentionally minimal. One view handled experiment creation and dashboard display. Another view acted as the experiment study route, where a participant is assigned a variant and sees variant specific content. I didn’t want to overdesign the interface because the point of this MVP was the experimentation flow, not a polished analytics product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnznoy9cytg9ebpcs7toz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnznoy9cytg9ebpcs7toz.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Vuex module became the backbone of the feature. I modeled three pieces of state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;experiments&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;assignments&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;summaries&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That was a meaningful improvement over the earlier shape because it matched the actual domain more closely. Instead of a generic “current assignment” or a single shared metrics blob, the store now tracked assignments per experiment and summaries keyed by experiment ID. That made the feature far easier to reason about once multiple experiments were in play.&lt;/p&gt;
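&lt;p&gt;A minimal stand-in for that store shape, assuming assignments and summaries are keyed by experiment ID (a plain object here, not the actual Vuex module):&lt;/p&gt;

```javascript
// Illustrative state shape: keying by experiment ID is what makes
// multiple concurrent experiments easy to reason about.
const state = {
  experiments: [],   // experiment documents loaded from Firestore
  assignments: {},   // experimentId -> { userId, variantId }
  summaries: {},     // experimentId -> per-variant rollup
};

// mutation-style helper: record an assignment under its experiment
function setAssignment(st, experimentId, assignment) {
  st.assignments[experimentId] = assignment;
}

setAssignment(state, "exp-1", { userId: "u1", variantId: "A" });
console.log(state.assignments["exp-1"].variantId); // "A"
```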

&lt;h3&gt;Deterministic Assignment&lt;/h3&gt;

&lt;p&gt;The assignment flow was implemented with a Firebase callable function named &lt;code&gt;assignVariant&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The logic was straightforward conceptually: if an assignment already existed for a user and experiment, return it. Otherwise, hash the combination of &lt;code&gt;userId&lt;/code&gt; and &lt;code&gt;experimentId&lt;/code&gt;, convert that hash into a bucket, and walk through the experiment’s allocation distribution until a variant is selected.&lt;/p&gt;

&lt;p&gt;What mattered here was not the math itself, but the guarantee: the same input pair always yields the same output.&lt;/p&gt;
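&lt;p&gt;As a hedged sketch, here is one way the hash-and-walk logic could look in plain JavaScript. The hash function and names are illustrative; the real &lt;code&gt;assignVariant&lt;/code&gt; callable also returns any existing assignment before computing a new one:&lt;/p&gt;

```javascript
// FNV-1a hash of "userId:experimentId", reduced to a bucket in [0, 100).
// Any stable hash works; determinism, not the math, is the point.
function hashToBucket(userId, experimentId) {
  const key = userId + ":" + experimentId;
  let hash = 2166136261; // FNV-1a offset basis
  for (const ch of key) {
    hash = hash ^ ch.charCodeAt(0);
    hash = Math.imul(hash, 16777619) >>> 0; // FNV prime, kept unsigned
  }
  return hash % 100;
}

// Walk the allocation distribution until the bucket falls inside a variant.
function pickVariant(userId, experimentId, variants) {
  const bucket = hashToBucket(userId, experimentId);
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (cumulative > bucket) return v.id;
  }
  return variants[variants.length - 1].id; // guard against rounding gaps
}
```

The same `(userId, experimentId)` pair always hashes to the same bucket, so returning participants always land in the same variant.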

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5kynladi6wvup1b55e9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5kynladi6wvup1b55e9.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkhluyiyi67s1uj1ntr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkhluyiyi67s1uj1ntr6.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That guarantee is what makes A/B testing trustworthy. Without it, returning participants can drift between variants, and the experiment stops being an experiment.&lt;/p&gt;

&lt;p&gt;For the participant side of the prototype, I used a lightweight mock user ID persisted in local storage. That kept the flow simple while still preserving deterministic assignment behavior.&lt;/p&gt;

&lt;h3&gt;Event Logging&lt;/h3&gt;

&lt;p&gt;Event logging was built as another callable-backed operation: &lt;code&gt;logEvent&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The event model was intentionally small. Each event stores the experiment ID, variant ID, metric name, value, and timestamp. For the MVP, that was enough to track things like study start, CTA interaction, and task completion.&lt;/p&gt;
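&lt;p&gt;A minimal sketch of that event shape. The field names mirror the model described above but are illustrative, not the exact &lt;code&gt;logEvent&lt;/code&gt; payload:&lt;/p&gt;

```javascript
// Boring on purpose: flat fields, one write, easy to query later.
function buildEvent(experimentId, variantId, metric, value) {
  return {
    experimentId,
    variantId,
    metric,               // e.g. "study_start", "cta_click", "task_complete"
    value,
    timestamp: Date.now(),
  };
}

const ev = buildEvent("exp-1", "A", "cta_click", 1);
```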

&lt;p&gt;I wanted event logging to be extremely boring from an engineering perspective. That’s a compliment. Analytics systems get dangerous when they become too clever too early. For an MVP, boring is good. Predictable writes, simple fields, easy querying.&lt;/p&gt;

&lt;p&gt;The participant study route logs key interactions, and the dashboard rehydrates the aggregated view from stored events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5prejgxeoxozp8jfpq4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5prejgxeoxozp8jfpq4b.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Dashboard&lt;/h3&gt;

&lt;p&gt;The dashboard was designed to answer the immediate “is this experiment doing what I think it’s doing?” questions.&lt;/p&gt;

&lt;p&gt;For each variant, it shows traffic allocation, assignment counts, total event counts, and metric breakdowns. I also added a simple chart so you can see assignment and event distribution at a glance.&lt;/p&gt;
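&lt;p&gt;The numbers behind that view come down to a per-variant rollup over stored events. A minimal sketch, assuming events shaped like &lt;code&gt;{ variantId, metric, value }&lt;/code&gt; (this is an illustration, not the actual aggregation callable):&lt;/p&gt;

```javascript
// Reduce raw events into { variantId: { total, metrics: { name: count } } }.
function summarize(events) {
  const byVariant = {};
  for (const ev of events) {
    const entry = byVariant[ev.variantId] || { total: 0, metrics: {} };
    entry.total += 1;
    entry.metrics[ev.metric] = (entry.metrics[ev.metric] || 0) + 1;
    byVariant[ev.variantId] = entry;
  }
  return byVariant;
}

const summary = summarize([
  { variantId: "A", metric: "cta_click", value: 1 },
  { variantId: "A", metric: "task_complete", value: 1 },
  { variantId: "B", metric: "cta_click", value: 1 },
]);
console.log(summary.A.total); // 2
```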

&lt;p&gt;This wasn’t meant to be a statistical significance engine yet. It was meant to be an operational dashboard for verifying the experiment loop. When you create an experiment, open the study route, trigger events, and then refresh the dashboard, you can see the full flow reflected back.&lt;/p&gt;

&lt;p&gt;That’s the point where the system stops being abstract architecture and starts feeling real.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wbp6sgpb7kl5rai07ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wbp6sgpb7kl5rai07ao.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqhaa9n05c1uq9mrg0ly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqhaa9n05c1uq9mrg0ly.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Challenges Faced and How They Were Solved&lt;/h2&gt;

&lt;p&gt;This part was the most interesting, because most of the actual engineering effort went into fixing integration problems rather than writing greenfield code.&lt;/p&gt;

&lt;p&gt;The first major issue was a Vue recursive update bug. The experiment creation form used &lt;code&gt;v-model&lt;/code&gt; with a variant editor component, and the syncing logic between parent and child looked harmless at first. But it was emitting fresh arrays and immediately re-consuming them in a watcher cycle, which triggered Vue’s “Maximum recursive updates exceeded” error inside VForm.&lt;/p&gt;

&lt;p&gt;That kind of bug is classic Vue pain—nothing looks obviously wrong until the reactivity graph starts feeding itself. The fix was to normalize and compare the variant payload before emitting updates. In other words, only emit when the value had actually changed. Once the loop was broken, the form stopped crashing.&lt;/p&gt;
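&lt;p&gt;The shape of that fix can be sketched as a compare-before-emit guard. The JSON comparison here is a simplification of the actual normalization; any stable deep-equal would do:&lt;/p&gt;

```javascript
// Only emit when the normalized payload actually differs from the
// last one sent, which breaks the parent/child feedback loop.
let lastEmitted = null;

function emitIfChanged(emit, variants) {
  const normalized = variants.map((v) => ({ id: v.id, weight: v.weight }));
  const serialized = JSON.stringify(normalized);
  if (serialized !== lastEmitted) {
    lastEmitted = serialized;
    emit("update:variants", normalized);
  }
}

// identical payloads emit once; a changed payload emits again
let calls = 0;
emitIfChanged((name, payload) => { calls += 1; }, [{ id: "A", weight: 50 }]);
emitIfChanged((name, payload) => { calls += 1; }, [{ id: "A", weight: 50 }]);
console.log(calls); // 1
```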

&lt;p&gt;The second major issue was the Firebase internal error, and this one turned out to be trickier than it looked. Initially it seemed like the experiments callable itself might be broken. But after tracing the logs, the real problem was broader: the Functions emulator wasn’t booting the codebase correctly at all.&lt;/p&gt;

&lt;p&gt;The root cause was an unrelated eager import of &lt;code&gt;nodemailer&lt;/code&gt; in the email function module. Since the local &lt;code&gt;functions/node_modules&lt;/code&gt; dependencies weren’t installed correctly, the emulator failed while loading the functions bundle. That meant the experiment callables never really came online, and from the frontend everything surfaced as a generic internal error.&lt;/p&gt;

&lt;p&gt;This is the kind of issue that reminds you why local dev can be deceptively hard. The failure wasn’t in the experiment logic—it was in the boot path of a different part of the backend. I fixed it by making the nodemailer import lazy so the email feature only loads that dependency when it’s actually used. That allowed unrelated functions, including the experiment endpoints, to load normally.&lt;/p&gt;
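&lt;p&gt;The general pattern behind that fix applies to any heavy dependency in a functions bundle. This is a generic sketch, not the actual RUXAILAB code; the &lt;code&gt;loader&lt;/code&gt; callback stands in for &lt;code&gt;require('nodemailer')&lt;/code&gt;:&lt;/p&gt;

```javascript
// Lazy loading: the dependency is resolved on first use, so a broken
// or missing module cannot take down unrelated function loading.
function makeLazy(loader) {
  let cached = null;
  return function () {
    if (cached === null) {
      cached = loader(); // first call pays the cost; later calls reuse it
    }
    return cached;
  };
}

let loads = 0;
const getMailer = makeLazy(() => {
  loads += 1;
  return { send: (msg) => "sent:" + msg }; // stand-in for a transport
});

console.log(loads); // 0 -- nothing loaded at module init
getMailer().send("hello");
getMailer().send("again");
console.log(loads); // 1 -- loader ran exactly once
```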

&lt;p&gt;There was also a region alignment issue. The frontend Firebase Functions client was being initialized without specifying the region the backend functions were deployed to. That mismatch can be subtle because the code still looks valid, but requests silently go to the wrong place. I updated the frontend initialization so the client and backend regions were aligned.&lt;/p&gt;

&lt;p&gt;The emulator setup had its own configuration drift too. The local &lt;code&gt;.env&lt;/code&gt; file pointed the Functions emulator to one port while the project’s Firebase config expected another. That kind of mismatch is easy to miss and frustrating to debug because nothing in the application code looks wrong. Fixing the &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.example&lt;/code&gt; values brought the local environment back into alignment.&lt;/p&gt;

&lt;p&gt;I also spent time making the frontend fail more gracefully. At one point, backend failures were bubbling up as uncaught runtime overlays in the browser. That’s a rough developer experience—and an even worse user experience. I replaced those crash paths with proper error handling and toast notifications so failures become visible but not destructive.&lt;/p&gt;

&lt;p&gt;Finally, I had to address a product-level integration issue: A/B testing was still marked as “Coming Soon” in the method selection UI. That meant I had a working experiments module behind the scenes, but the main study creation flow was still routing users away from it. I updated the method selector so A/B testing was enabled and routed into the dedicated experiments area instead of the generic test creation wizard.&lt;/p&gt;

&lt;p&gt;That was an important reminder that “the feature works” and “the feature is reachable” are not the same thing.&lt;/p&gt;

&lt;h2&gt;Final Working Flow&lt;/h2&gt;

&lt;p&gt;By the end of the work, the feature loop was complete.&lt;/p&gt;

&lt;p&gt;A researcher can create an experiment with variants and allocation weights. A participant entering the study route gets assigned deterministically to a variant. Their interactions generate experiment events. The dashboard then reflects assignment distribution and event counts by variant.&lt;/p&gt;

&lt;p&gt;That flow sounds simple when written in one paragraph, but getting it stable required touching state management, routing, Firebase Functions, Firestore access patterns, local emulator behavior, and frontend reactivity.&lt;/p&gt;

&lt;p&gt;That’s what made the project satisfying. It wasn’t just about making the happy path run once. It was about making the entire path hold together.&lt;/p&gt;

&lt;h2&gt;Key Learnings&lt;/h2&gt;

&lt;p&gt;One of the biggest takeaways from this work is that integration complexity is often more important than feature complexity.&lt;/p&gt;

&lt;p&gt;None of the individual parts of this system were especially exotic. A form, a hash function, a few Firestore collections, a dashboard, some callable functions. But once those pieces had to live inside an existing product with existing assumptions, every seam mattered.&lt;/p&gt;

&lt;p&gt;I also came away reminded that debugging is architecture work. It’s easy to think of debugging as “cleanup after implementation,” but in reality it often reveals whether the system boundaries make sense. The Firestore fallback for local reliability, the cleaner separation between controller and service layers, and the improved error handling all came directly from problems encountered during debugging.&lt;/p&gt;

&lt;p&gt;And maybe most importantly, I was reminded that a working MVP is not the same as a disposable prototype. Even when building something minimal, if it’s integrated into a real system, it deserves the same respect you’d give any production feature: clear boundaries, understandable state, graceful failure modes, and a path for future extension.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Building this A/B testing MVP inside RUXAILAB ended up being a lot more than adding an experiments screen.&lt;/p&gt;

&lt;p&gt;It was an exercise in reading an existing architecture carefully, extending it without fighting it, and solving the kind of real-world issues that never show up in idealized system diagrams. The final result is a working experimentation flow with deterministic assignment, event tracking, dashboard visibility, and a foundation for future statistical analysis.&lt;/p&gt;

&lt;p&gt;But more than that, it now feels like part of the product rather than a feature bolted onto the side.&lt;/p&gt;

&lt;p&gt;And to me, that’s usually the difference between code that merely runs and engineering that actually lands.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>frontend</category>
      <category>testing</category>
      <category>vue</category>
    </item>
    <item>
      <title>Understanding Before Building: My Deep Dive into RUXAILAB’s Architecture</title>
      <dc:creator>Karthik Gundu</dc:creator>
      <pubDate>Fri, 20 Mar 2026 10:04:00 +0000</pubDate>
      <link>https://dev.to/karrrthik7/understanding-before-building-my-deep-dive-into-ruxailabs-architecture-2p5o</link>
      <guid>https://dev.to/karrrthik7/understanding-before-building-my-deep-dive-into-ruxailabs-architecture-2p5o</guid>
<description>&lt;h2&gt;Codebase Study — Learning the System Before Changing It&lt;/h2&gt;

&lt;p&gt;Before writing a single line of code, I made a conscious decision:&lt;br&gt;
&lt;em&gt;I wouldn’t treat RUXAILAB as just another repository to explore—I would treat it as a system to understand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Because adding A/B testing is not a small feature. It touches everything: study flow, user interaction, data storage, and analytics. And if I didn’t understand those pieces deeply, any implementation I wrote would be either fragile or disconnected from how the platform actually works.&lt;/p&gt;

&lt;p&gt;So I started from the ground up.&lt;/p&gt;

&lt;p&gt;I traced how a study is created, how participants interact with it, and how their responses move through the system. Instead of reading files one by one, I focused on &lt;strong&gt;flow&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How Vue modules manage state and drive the UI&lt;/li&gt;
&lt;li&gt;How Firebase (Firestore, Auth, Cloud Functions) handles data and logic&lt;/li&gt;
&lt;li&gt;How collections like &lt;code&gt;tests&lt;/code&gt;, &lt;code&gt;answers&lt;/code&gt;, and &lt;code&gt;users&lt;/code&gt; interact&lt;/li&gt;
&lt;li&gt;How the platform maintains consistency across the entire lifecycle of a study&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At some point, the codebase stopped feeling like scattered files and started feeling like a &lt;strong&gt;coherent pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That shift was important.&lt;/p&gt;

&lt;p&gt;Because the real goal of this phase wasn’t just understanding—it was answering a much more critical question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Where does A/B testing naturally belong in this system?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Through this exploration, I identified clear and practical integration points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;study workflow&lt;/strong&gt; as the anchor for attaching experiments&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;router layer&lt;/strong&gt; for triggering participant assignment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Functions&lt;/strong&gt; for deterministic, secure variant allocation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Firestore&lt;/strong&gt; as a scalable event logging layer for experiment metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These weren’t assumptions—they came from observing how the system already behaves and aligning with it.&lt;/p&gt;

&lt;p&gt;This is exactly reflected in my proposal design, where the A/B testing layer integrates cleanly into the existing stack without disrupting current workflows.&lt;/p&gt;

&lt;h3&gt;What This Phase Really Means&lt;/h3&gt;

&lt;p&gt;This wasn’t just a “Completed” task on a list.&lt;/p&gt;

&lt;p&gt;This was the phase where I:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built a &lt;strong&gt;mental model of the entire system&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Identified &lt;strong&gt;real integration constraints&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Started thinking in terms of &lt;strong&gt;implementation, not theory&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also went a step further—while studying, I began sketching early ideas for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Experiment schemas&lt;/li&gt;
&lt;li&gt;Deterministic assignment strategies&lt;/li&gt;
&lt;li&gt;Event logging pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So this wasn’t passive learning. It was &lt;strong&gt;active system design thinking&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;Why This Matters for the Project&lt;/h3&gt;

&lt;p&gt;A lot of contributions fail not because of poor coding—but because they don’t align with the architecture.&lt;/p&gt;

&lt;p&gt;By investing deeply in this stage, I’ve reduced that risk significantly.&lt;/p&gt;

&lt;p&gt;Now, when I move into implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I’m not guessing where things fit&lt;/li&gt;
&lt;li&gt;I’m not forcing new logic into the system&lt;/li&gt;
&lt;li&gt;I’m building in a way that is &lt;strong&gt;consistent, modular, and scalable&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And most importantly, I can confidently say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I understand how this system works—and I know how to extend it without breaking it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Closing Thought&lt;/h3&gt;

&lt;p&gt;For me, this step reflects how I approach open source.&lt;/p&gt;

&lt;p&gt;Not by rushing into contributions,&lt;br&gt;
but by first understanding the intent behind the system—and then building in a way that respects and evolves it.&lt;/p&gt;

&lt;p&gt;That’s what makes this project not just achievable for me, but something I can execute &lt;strong&gt;efficiently, cleanly, and at production quality&lt;/strong&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deep Dive into the RUXAILAB Codebase: Understanding Architecture for A/B Testing Integration</title>
      <dc:creator>Karthik Gundu</dc:creator>
      <pubDate>Thu, 19 Mar 2026 14:24:59 +0000</pubDate>
      <link>https://dev.to/karrrthik7/deep-dive-into-the-ruxailab-codebase-understanding-architecture-for-ab-testing-integration-20ek</link>
      <guid>https://dev.to/karrrthik7/deep-dive-into-the-ruxailab-codebase-understanding-architecture-for-ab-testing-integration-20ek</guid>
      <description>&lt;p&gt;When I started exploring RUXAILAB, my goal was not just to understand how the platform works, but to figure out how a new system—A/B testing—could be integrated without disrupting its existing workflow.&lt;/p&gt;

&lt;p&gt;Instead of approaching the repository as a collection of files, I treated it as a system. I focused on understanding how data flows through the platform, how different layers communicate, and where extensibility points exist.&lt;/p&gt;

&lt;h2&gt;Understanding the Core Workflow&lt;/h2&gt;

&lt;p&gt;The first step was to map the lifecycle of a study inside RUXAILAB.&lt;/p&gt;

&lt;p&gt;From my analysis, the workflow can be summarized as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A researcher creates a study&lt;/li&gt;
&lt;li&gt;Participants interact with the study interface&lt;/li&gt;
&lt;li&gt;Responses are stored in Firestore&lt;/li&gt;
&lt;li&gt;Analytics are generated based on collected data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This flow is already well-structured and modular. What stood out to me was that RUXAILAB is not just a frontend-heavy application—it is a coordinated system involving Vue modules, Firebase services, and backend logic working together.&lt;/p&gt;

&lt;p&gt;Understanding this flow was critical because A/B testing is not a standalone feature—it needs to plug directly into this lifecycle.&lt;/p&gt;

&lt;h2&gt;Exploring the Architecture&lt;/h2&gt;

&lt;p&gt;RUXAILAB follows a clean separation of concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend (Vue 3 + Vuex):&lt;/strong&gt; Handles UI, state management, and user interaction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend (Firebase):&lt;/strong&gt; Manages authentication, Firestore data, and Cloud Functions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analysis Layer (Python):&lt;/strong&gt; Processes data for insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layered architecture (also described in the proposal’s system design on page 5) makes the system highly extensible.&lt;/p&gt;

&lt;p&gt;Rather than modifying existing components, new functionality can be introduced as a separate module that integrates with these layers.&lt;/p&gt;

&lt;h2&gt;Identifying Integration Points&lt;/h2&gt;

&lt;p&gt;A key part of my study was identifying where A/B testing naturally fits into the system.&lt;/p&gt;

&lt;p&gt;I found three critical integration points:&lt;/p&gt;

&lt;h3&gt;1. Study Entry (Router Layer)&lt;/h3&gt;

&lt;p&gt;When a participant enters a study, this is the ideal moment to assign them to a variant.&lt;/p&gt;

&lt;p&gt;This aligns with the existing router-based flow described in the proposal (page 9), where logic can be injected without affecting UI components.&lt;/p&gt;
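&lt;p&gt;Since Vue Router guards are plain functions, the hook described above can be sketched without the router itself. Everything here is hypothetical; &lt;code&gt;assignFn&lt;/code&gt; stands in for the deterministic assignment call:&lt;/p&gt;

```javascript
// Hypothetical route-level hook: resolve the variant before the study
// view renders, without touching any UI component.
function makeExperimentGuard(assignFn) {
  return function (to, from, next) {
    if (to.params.experimentId) {
      to.meta.variant = assignFn(to.params.userId, to.params.experimentId);
    }
    next();
  };
}

// usage with fake route objects in place of the real router
const guard = makeExperimentGuard((userId, expId) => "A");
const to = { params: { userId: "u1", experimentId: "e1" }, meta: {} };
let proceeded = false;
guard(to, null, () => { proceeded = true; });
console.log(to.meta.variant, proceeded); // "A" true
```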

&lt;h3&gt;2. Data Layer (Firestore)&lt;/h3&gt;

&lt;p&gt;The existing collections such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;tests&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;answers&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;users&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;can be extended with an &lt;code&gt;experiments&lt;/code&gt; collection (as proposed on page 6).&lt;/p&gt;

&lt;p&gt;This allows experiment data to coexist with current study data without breaking compatibility.&lt;/p&gt;

&lt;h3&gt;3. Interaction Layer (Event Logging)&lt;/h3&gt;

&lt;p&gt;User interactions during studies already generate meaningful data.&lt;/p&gt;

&lt;p&gt;By introducing an event logging system, interactions the platform already captures, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;task completion&lt;/li&gt;
&lt;li&gt;time on task&lt;/li&gt;
&lt;li&gt;error count&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;can be reused as experiment metrics without redesigning the data model.&lt;/p&gt;

&lt;h2&gt;Key Insight: Modular Extension Over Modification&lt;/h2&gt;

&lt;p&gt;One of the most important realizations during this study was:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The best way to integrate A/B testing into RUXAILAB is not by changing the existing system, but by extending it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This aligns directly with the architecture goal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keep the current workflow intact&lt;/li&gt;
&lt;li&gt;introduce experimentation as a modular layer&lt;/li&gt;
&lt;li&gt;reuse existing data pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Challenges While Studying the Codebase&lt;/h2&gt;

&lt;p&gt;While exploring the repository, I encountered a few practical challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding how Vuex modules interact across features&lt;/li&gt;
&lt;li&gt;Tracing Firestore data flow between frontend and backend&lt;/li&gt;
&lt;li&gt;Identifying where business logic resides (frontend vs Cloud Functions)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To overcome this, I:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;followed data instead of files&lt;/li&gt;
&lt;li&gt;traced end-to-end flows instead of isolated components&lt;/li&gt;
&lt;li&gt;mapped interactions between frontend, backend, and database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach made it easier to understand not just &lt;em&gt;what the code does&lt;/em&gt;, but &lt;em&gt;why it is structured that way&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;From Understanding to Implementation&lt;/h2&gt;

&lt;p&gt;This codebase study was not just theoretical.&lt;/p&gt;

&lt;p&gt;It directly influenced how I designed and implemented the A/B testing prototype:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a modular &lt;code&gt;experiments&lt;/code&gt; feature&lt;/li&gt;
&lt;li&gt;Used Cloud Functions for deterministic assignment&lt;/li&gt;
&lt;li&gt;Designed Firestore schema aligned with existing collections&lt;/li&gt;
&lt;li&gt;Integrated event logging into study interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of this groundwork, the prototype fits naturally into RUXAILAB’s architecture instead of feeling like an external addition.&lt;/p&gt;

&lt;h2&gt;Final Thoughts&lt;/h2&gt;

&lt;p&gt;Studying the RUXAILAB codebase gave me a clear understanding of how a real-world UX research platform is structured and how new features can be introduced without breaking existing systems.&lt;/p&gt;

&lt;p&gt;More importantly, it helped me move from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reading code → reasoning about systems&lt;/li&gt;
&lt;li&gt;understanding features → designing integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This experience gave me confidence that I can not only implement the A/B testing framework, but do so in a way that aligns with RUXAILAB’s architecture, maintains code quality, and supports future extensibility.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>softwareengineering</category>
      <category>systemdesign</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
