<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roman Tsegelskyi</title>
    <description>The latest articles on DEV Community by Roman Tsegelskyi (@romantseg).</description>
    <link>https://dev.to/romantseg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2360786%2F61374052-4b13-4e02-85a5-e94d61af0013.JPG</url>
      <title>DEV Community: Roman Tsegelskyi</title>
      <link>https://dev.to/romantseg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/romantseg"/>
    <language>en</language>
    <item>
      <title>Building &amp; shipping joinstems.com with GenAI</title>
      <dc:creator>Roman Tsegelskyi</dc:creator>
      <pubDate>Thu, 21 Nov 2024 13:25:45 +0000</pubDate>
      <link>https://dev.to/romantseg/building-shipping-joinstemscom-with-genai-5649</link>
      <guid>https://dev.to/romantseg/building-shipping-joinstemscom-with-genai-5649</guid>
      <description>&lt;p&gt;In September 2024, we launched the beta of joinstems.com - a platform where music enthusiasts and producers can access official stems from well-known tracks and share remixes. From concept to launch took less than 5 months with a team of just 1.5 developers (well, more like 1.2 if I'm being honest), reaching 10K users and generating nearly 1K remixes in our first weeks.&lt;/p&gt;

&lt;p&gt;What makes this project fascinating is that it paralleled the rapid evolution of GenAI development tools. I started working on this project in May with GitHub Copilot for code completion, experimented with Supermaven, and ultimately found my stride with Claude and Cursor. Throughout this journey, I watched GenAI transform from a simple code completion tool into something that felt more like a collaborative development partner.&lt;/p&gt;

&lt;p&gt;Despite the endless stream of AI demos and tutorials flooding the internet, I've noticed a lack of practical accounts about using GenAI to ship actual products. After a decade of launching various products, I wanted to share real insights from building joinstems.com - both the wins and the "well, that didn't work" moments. Let's dive into what actually worked, what didn't, and what surprised me along the way.&lt;/p&gt;

&lt;h3&gt;Brief Stack Description&lt;/h3&gt;

&lt;p&gt;First, let's talk stack. We went with T3 as the foundation - a modern TypeScript stack I knew well and trusted for its type safety and great DX. Here's what we're running with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next.js + Tailwind CSS + NextAuth.js as base&lt;/li&gt;
&lt;li&gt;tRPC + tanstack-query for type-safe APIs (type safety becomes even more crucial when co-piloting with AI)&lt;/li&gt;
&lt;li&gt;Postgres + Prisma + Neon for database&lt;/li&gt;
&lt;li&gt;Mux + WavesurferJS for audio streaming &amp;amp; visualization (the trickiest part for AI to handle, more on that later)&lt;/li&gt;
&lt;li&gt;react-admin with ra-data-simple-prisma for admin panel (where GenAI really showed its muscles)&lt;/li&gt;
&lt;li&gt;Vercel for hosting&lt;/li&gt;
&lt;li&gt;Digital Ocean Spaces for file storage (after a fun S3 cost surprise 😅)&lt;/li&gt;
&lt;li&gt;Twilio + Resend for communications&lt;/li&gt;
&lt;li&gt;Sentry for error tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This stack turned out to be an ideal foundation for AI-assisted development, though not always in ways I expected. Type safety especially proved crucial - it helped catch those occasional AI hallucinations before they became production issues.&lt;/p&gt;
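&lt;p&gt;To make that concrete, here's a minimal sketch (the &lt;code&gt;Remix&lt;/code&gt; shape and its field names are invented for illustration) of how a typed helper surfaces a hallucinated field name at compile time rather than in production:&lt;/p&gt;

```typescript
// Hypothetical illustration: this Remix shape and its field names are invented.
interface Remix {
  id: string;
  title: string;
  upvoteCount: number;
}

// If the AI hallucinates a field (say `remix.upvotes`), this fails at
// compile time instead of shipping `undefined` to production.
function formatRemixLabel(remix: Remix): string {
  return `${remix.title} (${remix.upvoteCount} upvotes)`;
}
```

&lt;p&gt;The same idea scales up through tRPC and Prisma: every AI-generated call site gets checked against the schema before it ever runs.&lt;/p&gt;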

&lt;p&gt;But the real magic happened in how these tools complemented GenAI development. The combination of Cursor's code generation with Prisma's schema-first approach and react-admin's patterns created a surprisingly powerful workflow. Let me show you concrete examples where GenAI was transformative for us.&lt;/p&gt;

&lt;h2&gt;Where GenAI Shined&lt;/h2&gt;

&lt;p&gt;Let's start with the most immediate win - crushing boilerplate code.&lt;/p&gt;

&lt;h3&gt;Admin Panel Generation&lt;/h3&gt;

&lt;p&gt;We chose react-admin with ra-data-simple-prisma as our foundation, which already provides solid abstractions. Adding GenAI on top of this stack pretty much 10x'd things from there.&lt;/p&gt;

&lt;p&gt;After establishing the initial architecture, the workflow became remarkably straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define the model in the Prisma schema (even that step can be fed directly to the model as part of the instruction)&lt;/li&gt;
&lt;li&gt;Provide context to Claude/Cursor with a prompt like:&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;For admin panel, I am using react-admin and ra-data-simple-prisma&lt;br&gt;
Master file is @AdminApp.tsx, example of entity wrapper is @Track.tsx and supporting backend router @route.ts&lt;br&gt;
Generate necessary admin pages for new Notifications model that I just created.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check the model in @schema.prisma&lt;/li&gt;
&lt;li&gt;Add route.ts file for db manipulation&lt;/li&gt;
&lt;li&gt;Add @Notification.tsx file for react-admin front-end handling&lt;/li&gt;
&lt;li&gt;Update @AdminApp.tsx and @route.ts to correctly handle navigation&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;The output would include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prisma route handlers&lt;/li&gt;
&lt;li&gt;React-admin frontend components&lt;/li&gt;
&lt;li&gt;Navigation updates&lt;/li&gt;
&lt;li&gt;Type-safe implementations matching existing patterns&lt;/li&gt;
&lt;/ul&gt;
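&lt;p&gt;To give a feel for the output, here's a rough sketch of the shape of such a route handler, with a plain in-memory stub standing in for Prisma (resource and field names are illustrative, not our actual code; the real handler delegates to ra-data-simple-prisma rather than dispatching by hand):&lt;/p&gt;

```typescript
// Hypothetical sketch of a generated admin route handler. The in-memory `db`
// stub stands in for the Prisma client; real generated code delegates to
// ra-data-simple-prisma instead of dispatching by hand.
type AdminAction = "getList" | "getOne";

interface AdminRequest {
  resource: string;
  action: AdminAction;
  id?: string;
}

interface Row {
  id: string;
  [key: string]: unknown;
}

const db: { [resource: string]: Row[] } = {
  notification: [{ id: "n1", message: "New remix on your track" }],
};

// Dispatch an admin request to the right "table" in the stub database.
function handleAdminRequest(req: AdminRequest): Row[] {
  const rows = db[req.resource] ?? [];
  if (req.action === "getOne") {
    return rows.filter((row) => row.id === req.id);
  }
  return rows;
}
```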

&lt;p&gt;What's notable here is the consistency and reliability. After implementing 2-3 models, the AI became remarkably accurate at maintaining our established patterns.&lt;/p&gt;

&lt;h3&gt;Re-using Patterns Across the App&lt;/h3&gt;

&lt;p&gt;As an extension of the previous example, one of the most appreciated benefits of GenAI was how it accelerated pattern replication across different features. Think of it like having a developer who not only remembers every pattern you've established but can instantly adapt it to new contexts - without the usual "wait, how did we do this last time?" moments.&lt;/p&gt;

&lt;p&gt;My favorite example in this project shows how we handled data loading and updates, combining server-side trpc, client-side infinite scroll, and optimistic updates. The implementation flows like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetch initial set of remixes as part of server-rendering&lt;/li&gt;
&lt;li&gt;Pass those initial remixes to client-side component as initial data&lt;/li&gt;
&lt;li&gt;Load more remixes with infinite scroll&lt;/li&gt;
&lt;li&gt;Optimistically update remixes on user actions like upvote, bookmark&lt;/li&gt;
&lt;/ol&gt;
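&lt;p&gt;Step 4 is worth sketching: an optimistic update is just a pure transformation of the cached list, applied before the server responds. The field names below are illustrative; in the real app this logic runs inside a tanstack-query &lt;code&gt;onMutate&lt;/code&gt; handler:&lt;/p&gt;

```typescript
// Pure optimistic update over the cached remix list: the UI applies this
// immediately, while the caller keeps the previous array so it can roll back
// if the server mutation fails. Field names are illustrative.
interface RemixItem {
  id: string;
  upvoteCount: number;
  upvotedByMe: boolean;
}

function toggleUpvote(remixes: RemixItem[], remixId: string): RemixItem[] {
  return remixes.map((remix) => {
    if (remix.id !== remixId) {
      return remix;
    }
    const delta = remix.upvotedByMe ? -1 : 1;
    return {
      ...remix,
      upvoteCount: remix.upvoteCount + delta,
      upvotedByMe: !remix.upvotedByMe,
    };
  });
}
```

&lt;p&gt;On mutation error, the previously cached array is restored; on success, the server response replaces the optimistic values.&lt;/p&gt;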

&lt;p&gt;When adapting this pattern for comments, instead of manually rewriting everything, I simply showed the AI our remix implementation with a prompt like: "Here's how we handle data loading and updates for remixes. Can you adapt this pattern for comments, keeping in mind they're nested under remixes?"&lt;/p&gt;

&lt;p&gt;The AI not only replicated the pattern but handled the nested structure naturally - a task that would have required careful manual adaptation otherwise.&lt;/p&gt;

&lt;p&gt;This pattern replication became one of our most powerful use cases for GenAI. What started as a solution for remixes became our standard approach across multiple features, each adaptation taking much less time than before. The key was having that initial pattern well-established - once we had that, GenAI became remarkably good at maintaining consistency while handling feature-specific requirements.&lt;/p&gt;

&lt;h3&gt;Responsive Design&lt;/h3&gt;

&lt;p&gt;In my experience on this project, GenAI turned out to be particularly good at adapting layouts to various viewports, typically from the first try. It would often be enough to write a desktop version and then explain how I wanted other viewports to look, and the model would suggest a near-perfect solution on the first attempt.&lt;/p&gt;

&lt;p&gt;For example, our remix card component on desktop displays the waveform visualization prominently with track details to the right, followed by interaction buttons (like, share, comment) underneath. When I needed to adapt this for mobile, I simply explained something along the lines of: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"For mobile viewport, I want to:&lt;br&gt;
Keep waveform as the main focus but slightly reduce its height&lt;br&gt;
Stack track title and artist info below it&lt;br&gt;
Arrange action buttons in a compact row&lt;br&gt;
Maintain tap-friendly spacing for all interactive elements"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This worked perfectly on the first try - maintaining visual hierarchy while ensuring good mobile usability. The AI understood both the design goals and our established responsive patterns, saving what would typically be multiple rounds of CSS tweaking, now reduced to a single round of minor adjustments.&lt;/p&gt;

&lt;h3&gt;Getting Unstuck&lt;/h3&gt;

&lt;p&gt;Whenever I felt lazy or didn't know where to start, I would often just prompt for something, and the simple fact of having a discussion with the AI was enough to get me started. It's like having a patient collaborator who's always ready to brainstorm, even when you're not sure what you're building yet.&lt;/p&gt;

&lt;p&gt;Instead of staring at a blank editor trying to figure out where to begin, I could start with vague prompts like: "I need to build a notification system for new comments. What are the key components we should consider?" or "Looking at our remix feed component - what would be a good first step to add sorting options?"&lt;/p&gt;

&lt;p&gt;Even if I ended up not using most of the AI's suggestions, these conversations helped overcome that initial resistance to starting. The AI's responses would often trigger thoughts like "well, that's not exactly how I want to do it, but..." - and suddenly I'm actively problem-solving instead of procrastinating.&lt;/p&gt;

&lt;p&gt;This turned out to be particularly valuable for those "I'll do it later" tasks like error handling or accessibility improvements. Having an AI to bounce ideas off of made it easier to tackle these less exciting but crucial aspects of development.&lt;/p&gt;

&lt;h2&gt;Where GenAI Falls Short&lt;/h2&gt;

&lt;p&gt;As powerful as GenAI proved to be in accelerating our development, it still has areas where it falls short or remains inconsistent. Here are the main issues I encountered, and how I worked around them.&lt;/p&gt;

&lt;h3&gt;Performance Optimization&lt;/h3&gt;

&lt;p&gt;Unless explicitly prompted about performance considerations, current models often generate suboptimal code. In our case, this manifested most clearly in over-fetching and unoptimized Prisma queries. The AI would happily generate code that pulls entire records when we only needed specific fields, or create separate queries where a single join would have been more efficient.&lt;/p&gt;

&lt;p&gt;In some cases, writing a single raw SQL query would be a much better solution, but unless explicitly instructed to consider performance implications, I haven't seen LLMs suggest this approach on their own.&lt;/p&gt;
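&lt;p&gt;As an illustration (field names invented for this sketch), here's the difference in Prisma-style query arguments between what models tend to generate unprompted and what the remix feed actually needs:&lt;/p&gt;

```typescript
// Illustrative Prisma-style findMany arguments (field names invented).
// The first shape is what models tend to generate unprompted: every relation
// pulled wholesale. The second fetches only what the feed renders.
const overFetchingArgs = {
  include: { stems: true, comments: true, author: true },
};

const optimizedArgs = {
  take: 20,
  select: {
    id: true,
    title: true,
    upvoteCount: true,
    author: { select: { name: true } },
  },
};
```

&lt;p&gt;In our experience, adding an explicit instruction like "fetch only the fields the UI needs" to the prompt fixes most of this.&lt;/p&gt;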

&lt;h3&gt;Library-Level Challenges&lt;/h3&gt;

&lt;p&gt;If you are combining libraries, GenAI models are only as good as the libraries themselves. AI struggles when problems need to be solved at the library level rather than through configuration or usage patterns. A prime example was our integration with Wavesurfer.js for audio visualization: we needed the Wavesurfer.js waveform not to stop the external media element when it gets unmounted, and in the end I had to patch the library itself.&lt;/p&gt;

&lt;p&gt;While the AI could help with basic setup and common usage patterns, when it came to core functionality issues, it kept suggesting different configuration approaches instead of looking at the library code itself. Even when the solution required modifying the library's source code, none of the GenAI models I tried ever suggested this approach, instead generating variations of the same ineffective solutions.&lt;/p&gt;

&lt;h3&gt;Version-Sensitive Libraries&lt;/h3&gt;

&lt;p&gt;One particular gotcha: AI often generates code for outdated library versions. I ran into this with react-admin, headless-ui, and tanstack-query, where if the exact library version wasn't specified, outdated syntax would frequently appear. The solution was to explicitly specify versions in our prompts, but interestingly enough, just referencing package.json doesn't always work reliably.&lt;/p&gt;

&lt;h3&gt;Consistent in Error&lt;/h3&gt;

&lt;p&gt;An interesting quirk I would often encounter: once a model makes a mistake in understanding our codebase or implementation, it tends to repeat that mistake in subsequent code generation. This creates a sort of "error cascade": even after being corrected, the model falls back into the same mistake pattern at different points in the chat, or at least never fixes it consistently.&lt;/p&gt;

&lt;p&gt;For example, I noticed this frequently with syntax inconsistency. Say the model generates code using an outdated version of a library's syntax. You correct the syntax and/or instruct the model to use the specific version, yet at some point further down the chat, the outdated syntax pops up again.&lt;/p&gt;

&lt;p&gt;I experimented with various prompt qualities and approaches, yet I haven't been able to consistently resolve this behavior. The only realistic solution I have found is to restart the chat, which is unfortunate, as valuable context is often lost.&lt;/p&gt;

&lt;h2&gt;Biggest Personal Takeaway&lt;/h2&gt;

&lt;p&gt;Generated code is only as good as your prompt. I came across a quote that resonated deeply with my experience: "If you can't write it yourself, you're unlikely to be able to prompt it well." This captures the essence of working with GenAI perfectly - at least at the current stage of the technology. While models will likely evolve beyond this limitation, right now the tool primarily amplifies your existing knowledge rather than replacing it.&lt;/p&gt;

&lt;p&gt;Just as we maintain and version our codebase, I've found it crucial to store and continuously iterate on prompts. The most effective prompts often emerge through multiple refinements, and interestingly, I started using AI itself to help improve my prompts. It's a meta-learning cycle: use AI, learn what works, refine prompts, get better results.&lt;/p&gt;
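&lt;p&gt;A minimal sketch of what "storing prompts like code" can look like, assuming a simple placeholder convention (the template name, version, and placeholders are invented for illustration):&lt;/p&gt;

```typescript
// Versioned prompt templates with placeholder interpolation; the template
// and its placeholders are hypothetical examples, not our actual prompts.
interface PromptTemplate {
  name: string;
  version: number;
  template: string;
}

const adminPagePrompt: PromptTemplate = {
  name: "admin-page-generation",
  version: 3,
  template: "Generate react-admin pages for the new {{model}} model, following {{exampleFile}}.",
};

// Fill in {{placeholder}} slots from a values map; unknown keys become empty.
function renderPrompt(prompt: PromptTemplate, values: { [key: string]: string }): string {
  return prompt.template.replace(/\{\{(\w+)\}\}/g, (_match: string, key: string) => values[key] ?? "");
}
```

&lt;p&gt;Bumping &lt;code&gt;version&lt;/code&gt; whenever a refinement sticks gives you the same history and reviewability you'd expect from code.&lt;/p&gt;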

&lt;p&gt;I believe that AI prompts themselves will become an important part of intellectual property in the future. They encapsulate not just the what of development, but the how - the patterns, preferences, and accumulated knowledge that make code not just functional, but well-crafted.&lt;/p&gt;

&lt;h2&gt;Unexpected Surprise&lt;/h2&gt;

&lt;p&gt;An interesting discovery I made along the way: using Claude directly versus through Cursor yielded noticeably different results, likely due to Cursor's additional prompt optimization for cost efficiency. While both provided valuable assistance, their outputs often differed in style and approach. This wasn't necessarily a good or bad thing - more like having two different collaborators with their own strengths. I'd also recommend trying out Typing Mind and experimenting with different models through APIs to get a better understanding of how GenAI works.&lt;/p&gt;

&lt;h2&gt;Conclusions&lt;/h2&gt;

&lt;p&gt;Building joinstems.com with generative AI has been an eye-opening experience. While the technology significantly accelerated our development process, it also highlighted an important reality: we're currently in an AI-assisted software engineering phase, where AI acts more as an amplifier than a replacement. It magnifies both the strengths and weaknesses of developers, making solid engineering fundamentals more crucial than ever.&lt;/p&gt;

&lt;p&gt;Despite the impressive capabilities shown in areas like pattern replication and responsive design, it's clear that GenAI tools aren't yet ready to ship even moderately complicated products end-to-end. They excel at specific tasks - crushing boilerplate, adapting established patterns, accelerating initial implementations - but still require careful oversight and an experienced hand to guide them toward production-ready solutions.&lt;/p&gt;

&lt;p&gt;However, it's also clear that we're in a new era of software development. In just two years since the release of ChatGPT, the development workflow has changed forever. The speed of progress suggests this is just the beginning.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>genai</category>
      <category>showdev</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Using GenAI to Tackle Complex Prisma Model Migrations</title>
      <dc:creator>Roman Tsegelskyi</dc:creator>
      <pubDate>Wed, 13 Nov 2024 17:01:12 +0000</pubDate>
      <link>https://dev.to/romantseg/using-genai-to-tackle-complex-prisma-model-migrations-36ch</link>
      <guid>https://dev.to/romantseg/using-genai-to-tackle-complex-prisma-model-migrations-36ch</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Need to write complex Prisma migrations (renaming, splitting, merging models)? Instead of writing SQL by hand: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Save your old schema&lt;/li&gt;
&lt;li&gt;Make changes in &lt;code&gt;schema.prisma&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;prisma migrate dev --create-only&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Let GenAI handle the SQL data transformations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Been using this approach while building Stems - saved hours on migrations while preserving data and relationships. Works great for anything from simple renames to complex model splits.&lt;/p&gt;

&lt;p&gt;While building Stems, I needed to split our monolithic Track model into three separate models: OfficialTrack, Stem, and Remix. This better represented our domain - official tracks have stems, which users can download to create remixes. Here's how I handled this data model evolution using GenAI to save time on SQL migrations.&lt;/p&gt;

&lt;p&gt;The change involved roughly the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Before: Everything in one model
model Track {
  id          String   @id
  title       String
  artist      String
  previewPath String?
  stemPaths   String[] // All stems stored here
  isRemix     Boolean
  parentTrack Track?   @relation("RemixOf")
  remixes     Track[]  @relation("RemixOf")
}
// After: Split into domain-specific models
model OfficialTrack {
  id     String @id
  title  String
  artist String
  stems  Stem[]
  remixes Remix[]
}
model Stem {
  id        String       @id
  title     String      // e.g., "vocals", "drums"
  audioUrl  String
  track     OfficialTrack @relation(fields: [trackId], references: [id])
  trackId   String
}
model Remix {
  id        String        @id
  title     String
  artist    String
  audioUrl  String
  original  OfficialTrack @relation(fields: [trackId], references: [id])
  trackId   String
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of manually writing the data migration, here's what worked:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Save the current schema:
&lt;code&gt;cp schema.prisma schema-old.prisma&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Update schema.prisma with the new models&lt;/li&gt;
&lt;li&gt;Generate migration scaffold:
&lt;code&gt;npx prisma migrate dev --create-only&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Prompt for AI assistance:&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;I'm splitting the Track model into OfficialTrack, Stem, and Remix models.&lt;br&gt;
schema-old.prisma has the original Track model where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Non-remix tracks (isRemix=false) should become OfficialTrack records&lt;/li&gt;
&lt;li&gt;stemPaths array should be split into individual Stem records&lt;/li&gt;
&lt;li&gt;Remix tracks (isRemix=true) should become Remix records&lt;/li&gt;
&lt;li&gt;All relationships need to be preserved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please modify the SQL migration to handle this data transformation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Review the generated SQL carefully - this kind of split is complex enough that you might need to tweak the migration or ask for adjustments&lt;/li&gt;
&lt;/ol&gt;
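&lt;p&gt;To make the transformation rules reviewable, here's a TypeScript sketch of what the generated SQL migration must accomplish. The real migration runs in SQL; this just states the rules explicitly, and the IDs minted for new Stem rows are invented for illustration:&lt;/p&gt;

```typescript
// TypeScript restatement of the migration rules: split old Track rows into
// OfficialTrack, Stem, and Remix records. Shapes follow the schemas above;
// the generated Stem IDs are an invented convention for this sketch.
interface OldTrack {
  id: string;
  title: string;
  artist: string;
  stemPaths: string[];
  isRemix: boolean;
  parentTrackId?: string;
}

interface NewModels {
  officialTracks: { id: string; title: string; artist: string }[];
  stems: { id: string; title: string; audioUrl: string; trackId: string }[];
  remixes: { id: string; title: string; artist: string; audioUrl: string; trackId: string }[];
}

function splitTracks(tracks: OldTrack[]): NewModels {
  const result: NewModels = { officialTracks: [], stems: [], remixes: [] };
  for (const track of tracks) {
    if (track.isRemix) {
      // Rule: isRemix=true rows become Remix records linked to the parent.
      result.remixes.push({
        id: track.id,
        title: track.title,
        artist: track.artist,
        audioUrl: track.stemPaths[0] ?? "",
        trackId: track.parentTrackId ?? "",
      });
    } else {
      // Rule: isRemix=false rows become OfficialTrack records, and each
      // stemPaths entry becomes its own Stem record.
      result.officialTracks.push({ id: track.id, title: track.title, artist: track.artist });
      track.stemPaths.forEach((path, index) => {
        result.stems.push({
          id: `${track.id}-stem-${index}`,
          title: path,
          audioUrl: path,
          trackId: track.id,
        });
      });
    }
  }
  return result;
}
```

&lt;p&gt;Writing the rules out like this doubles as a checklist when reviewing the AI-generated SQL.&lt;/p&gt;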

&lt;h3&gt;Key Learnings&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;This approach works best when you can clearly describe the transformation rules&lt;/li&gt;
&lt;li&gt;For complex splits like this, I always test the migration on a copy of production data first&lt;/li&gt;
&lt;li&gt;Breaking down the migration into smaller steps (create new tables → migrate data → update relationships → drop old table) made it easier to verify correctness&lt;/li&gt;
&lt;li&gt;Having the old schema file is crucial for AI to understand the full context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This method significantly reduced the time I would've spent writing and debugging SQL migrations, though I still needed to review and test thoroughly given the complexity of the model split.&lt;/p&gt;

</description>
      <category>prisma</category>
      <category>genai</category>
      <category>typescript</category>
      <category>database</category>
    </item>
  </channel>
</rss>
