<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Krešimir Iličić</title>
    <description>The latest articles on DEV Community by Krešimir Iličić (@kresohr).</description>
    <link>https://dev.to/kresohr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2826932%2Fc0b87b5b-cbbc-4b4f-bf75-9657c20fe073.png</url>
      <title>DEV Community: Krešimir Iličić</title>
      <link>https://dev.to/kresohr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kresohr"/>
    <language>en</language>
    <item>
      <title>Guided Vibecoding YouTube Summary Project</title>
      <dc:creator>Krešimir Iličić</dc:creator>
      <pubDate>Sun, 15 Feb 2026 15:08:04 +0000</pubDate>
      <link>https://dev.to/kresohr/guided-vibecoding-youtube-summary-project-50p5</link>
      <guid>https://dev.to/kresohr/guided-vibecoding-youtube-summary-project-50p5</guid>
      <description>&lt;p&gt;Right now we are in a phase where everyone creates their own "AI SAAS". LinkedIn is flooded with these "AI will replace all devs" dumb posts and I can't wait for it all to fade away.&lt;/p&gt;

&lt;p&gt;In the meantime, I've decided to take AI and use it as it should be used, &lt;strong&gt;as a tool&lt;/strong&gt;. However, this time I wanted to go all-in and see how good the code written by the &lt;strong&gt;best AI models&lt;/strong&gt; could be if I guided them on the architecture and reviewed what they were doing.&lt;/p&gt;

&lt;p&gt;The majority of the code was written by &lt;strong&gt;Claude Opus 4.6&lt;/strong&gt;, while minor tasks were handled by Claude Sonnet 4.5, with GPT-5 mini taking the super simple ones.&lt;/p&gt;

&lt;p&gt;But first, let's start from the beginning.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem origin
&lt;/h2&gt;

&lt;p&gt;It's super hard to keep up with all the newsletters, YouTube channels, and documentation, and &lt;strong&gt;I simply don't have enough time to go through it all&lt;/strong&gt;. So whenever I ran into an interesting YouTube video that was too long, I'd use Perplexity or another AI platform to summarize it for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual workload
&lt;/h3&gt;

&lt;p&gt;Even though that process saves time, it is still manual: I'd need to browse through YouTube to find what I want to summarize in the first place.&lt;/p&gt;

&lt;p&gt;I knew there was room for further improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;The first idea that came to my mind was: &lt;strong&gt;what if&lt;/strong&gt; a scheduled job could summarize videos from a curated list of YouTube channels I handpick, every day? Sure, I'd need to spend some time initially choosing which channels to put there, but that's still better than the previous process.&lt;/p&gt;

&lt;p&gt;It's a one-hour one-time investment versus a one-hour daily investment.&lt;/p&gt;

&lt;p&gt;That's when the idea was born.&lt;/p&gt;

&lt;p&gt;I wanted to have some kind of a "dashboard" which I could access from multiple devices hosted on a server and available 24/7. The design had to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responsive&lt;/li&gt;
&lt;li&gt;Simple&lt;/li&gt;
&lt;li&gt;Intuitive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The channels would be added through an "Admin panel" where I could add or remove those I want summarized. The trick is, I didn't want to summarize all available videos from each channel as some of them might have more than a thousand videos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkwzidouxnt50bxoa7gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkwzidouxnt50bxoa7gm.png" alt="Admin panel" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's when I decided to set up a scheduled (cron) job to fetch videos from the channels on the list every day at a certain time. If a channel published any new videos in the last 24 hours, they would get summarized and shown on the dashboard.&lt;/p&gt;
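&lt;p&gt;The daily trigger can be sketched without any external dependency. In the real project a cron library (or system cron) would own this, and the 07:00 run time below is just an illustrative choice, not the project's actual schedule:&lt;/p&gt;

```javascript
// Minimal dependency-free daily scheduler sketch.
// In practice a cron library or system cron is the better fit;
// the 07:00 run time here is an illustrative assumption.

// Milliseconds from `now` until the next occurrence of `hour`:00 local time.
function msUntilNextRun(hour, now = new Date()) {
  const next = new Date(now);
  next.setHours(hour, 0, 0, 0);
  if (!(next.getTime() > now.getTime())) {
    // Already past today's slot, so schedule for tomorrow.
    next.setDate(next.getDate() + 1);
  }
  return next.getTime() - now.getTime();
}

// Run `job` once per day at `hour`:00, re-arming after each run.
function scheduleDaily(hour, job) {
  setTimeout(() => {
    job();
    scheduleDaily(hour, job);
  }, msUntilNextRun(hour));
}
```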

&lt;p&gt;Since the page would be publicly available, I had to protect it with some kind of a login for the "admin panel" as I didn't want other people messing with it. The stack I chose was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VueJS for the frontend&lt;/li&gt;
&lt;li&gt;ExpressJS for the backend&lt;/li&gt;
&lt;li&gt;PostgreSQL for DB&lt;/li&gt;
&lt;li&gt;... and a few more external services visible in the diagram&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh1cfbtr923dprupxcfy.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh1cfbtr923dprupxcfy.webp" alt="Architectural Diagram" width="800" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Fetching the videos
&lt;/h3&gt;

&lt;p&gt;The first piece of the puzzle was to fetch the YouTube videos of a given channel. To do that, we can use the &lt;a href="https://developers.google.com/youtube/v3" rel="noopener noreferrer"&gt;YouTube Data API&lt;/a&gt;, which allows us to fetch videos sorted by date.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1okx0h07vdvw2dmqntw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1okx0h07vdvw2dmqntw.png" alt="Youtube Data Api Fetch" width="513" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We only want videos published in the last 24 hours, so we pass the "publishedAfter" parameter to get those.&lt;/p&gt;
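&lt;p&gt;The request in the screenshot is a plain search.list call. Here's a sketch of how the URL could be assembled; the channel ID and API key are placeholders, and URLSearchParams keeps the query encoding honest:&lt;/p&gt;

```javascript
// Build a YouTube Data API v3 search URL for a channel's recent uploads.
// channelId and apiKey are placeholders, not real credentials.
function buildSearchUrl(channelId, apiKey, publishedAfter) {
  const params = new URLSearchParams({
    part: 'snippet',
    channelId,
    type: 'video',
    order: 'date',
    publishedAfter, // ISO 8601 timestamp, i.e. "24 hours ago"
    key: apiKey,
  });
  return `https://www.googleapis.com/youtube/v3/search?${params}`;
}

// The scheduled job would then do something like (not run here):
//   const res = await fetch(buildSearchUrl(id, key, since));
//   const { items } = await res.json();
```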

&lt;p&gt;After fetching, we need to double-check the database to see if a video was already processed. If it wasn't, we proceed to fetch the transcription for it.&lt;/p&gt;
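&lt;p&gt;The dedup step itself is just a lookup against the IDs we already stored. A sketch, assuming we load the known IDs from PostgreSQL into a Set first; the &lt;code&gt;id.videoId&lt;/code&gt; shape matches what search.list returns:&lt;/p&gt;

```javascript
// Given freshly fetched videos and the set of video IDs already stored
// in the database, keep only the ones we have not processed yet.
function filterUnprocessed(videos, knownIds) {
  return videos.filter((v) => !knownIds.has(v.id.videoId));
}
```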

&lt;h3&gt;
  
  
  Fetching the transcription
&lt;/h3&gt;

&lt;p&gt;Fetching the video transcription can be done through many different 3rd-party services, but &lt;strong&gt;I wanted to do it for free&lt;/strong&gt;. To do that, we use YouTube's own transcript endpoint, wrapped by an npm package called "&lt;strong&gt;youtube-transcript-plus&lt;/strong&gt;". The downside of this approach is that YouTube can change its API at any time and break the app, but since I'm building this for myself, it was a tradeoff I could live with.&lt;/p&gt;
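&lt;p&gt;Such libraries return the transcript as timed segments rather than one string, so before prompting a model the segments have to be flattened. A sketch; the &lt;code&gt;{ text }&lt;/code&gt; segment shape and the &lt;code&gt;fetchTranscript&lt;/code&gt; helper name are assumptions about the package's API:&lt;/p&gt;

```javascript
// Transcript libraries return an array of timed segments. Flatten them
// into one plain-text blob for the summarizer.
// The { text } segment shape is an assumption.
function transcriptToText(segments) {
  return segments
    .map((s) => s.text.trim())
    .filter((t) => t.length > 0)
    .join(' ');
}

// Rough usage (network call, not run here; fetchTranscript is the
// assumed youtube-transcript-plus entry point):
//   const { fetchTranscript } = require('youtube-transcript-plus');
//   const text = transcriptToText(await fetchTranscript(videoId));
```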

&lt;h3&gt;
  
  
  Summarize with AI
&lt;/h3&gt;

&lt;p&gt;Now comes the fun part. How do you pick an AI model when there are so many to choose from? Well, my goal was super simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spend the least amount of money&lt;/li&gt;
&lt;li&gt;Doesn't have to be the best model on market&lt;/li&gt;
&lt;li&gt;Gets the job done&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I decided it doesn't have to be the "best model" because the task is super simple, and generally all AI models are good enough at such simple tasks. After all, it's taking a bunch of text and summarizing it.&lt;/p&gt;

&lt;p&gt;So which one did I choose? Random &lt;a href="https://openrouter.ai/models/?q=free" rel="noopener noreferrer"&gt;free models from OpenRouter&lt;/a&gt;. What do I mean by "random"? OpenRouter is a platform aggregating different AI model providers, and those providers can stop offering a model for free at any time, so I let OpenRouter choose a model for me instead.&lt;/p&gt;

&lt;p&gt;How do we do it in code? One simple line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3y5cx22idx4jq7behdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3y5cx22idx4jq7behdv.png" alt="Model for OpenRouter" width="394" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Basically, we just pass "openrouter/free" as the model and let the platform choose an appropriate, available one for the task. We can also specify instructions for summarizing, such as this simple one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mxqz9ozbovy6mrjndep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mxqz9ozbovy6mrjndep.png" alt="Summarize prompt instructions" width="800" height="23"&gt;&lt;/a&gt;&lt;/p&gt;
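&lt;p&gt;Putting the model choice and the instructions together, the request body is a standard OpenAI-style chat completion payload. A sketch; the system prompt below is an illustrative stand-in, not the project's exact wording:&lt;/p&gt;

```javascript
// Build an OpenAI-style chat completion payload for OpenRouter.
// "openrouter/free" lets the platform pick a currently-free model.
// The system prompt here is an illustrative stand-in.
function buildSummaryRequest(transcript) {
  return {
    model: 'openrouter/free',
    messages: [
      {
        role: 'system',
        content: 'Summarize the following YouTube transcript in a few short paragraphs.',
      },
      { role: 'user', content: transcript },
    ],
  };
}

// The job would then POST it (network call, not run here):
//   await fetch('https://openrouter.ai/api/v1/chat/completions', {
//     method: 'POST',
//     headers: {
//       Authorization: `Bearer ${OPENROUTER_API_KEY}`,
//       'Content-Type': 'application/json',
//     },
//     body: JSON.stringify(buildSummaryRequest(text)),
//   });
```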

&lt;p&gt;Finally, we &lt;strong&gt;get the summary and store it in the database&lt;/strong&gt;. At that point we can access it via the dashboard, and it looks something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg91te6y4k4zqnzhgc6h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg91te6y4k4zqnzhgc6h5.png" alt="YouTube Summary Dashboard" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key notes
&lt;/h2&gt;

&lt;p&gt;The project is hosted on an Oracle Cloud VPS. It is dockerized and uses nginx as the reverse proxy. The total amount of money I spent on this project is less than $10. That's because I'm using &lt;a href="https://github.com/features/copilot/plans" rel="noopener noreferrer"&gt;GitHub Copilot Pro&lt;/a&gt; and carefully craft every prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Room for improvement
&lt;/h3&gt;

&lt;p&gt;There is always room for improvement, depending on the end goal. I started this project as a simple way of keeping up with the news and seeing which videos are interesting enough for me to watch in full. It serves its purpose at the moment.&lt;/p&gt;

&lt;p&gt;However, other people might benefit from some additional features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configurable interval when the videos are fetched&lt;/li&gt;
&lt;li&gt;Configurable models to process transcripts and summarize them&lt;/li&gt;
&lt;li&gt;Multi user support&lt;/li&gt;
&lt;li&gt;Simpler way to add channels (at the moment there is no automated mechanism to fill in the channel ID)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All that being said, it was a fun small project, and AI tools are definitely a great addition to programming when properly used. I've made myself a tool I can use each morning while I sip my coffee, and that's enough for me.&lt;/p&gt;

&lt;p&gt;Most of the issues I had while vibecoding this project were related to nginx and Docker. I really wanted to force the AI to work them out without me manually intervening.&lt;/p&gt;

&lt;p&gt;P.S. I didn't waste time optimizing anything in the code; the goal was just to get it working. This is definitely &lt;strong&gt;not a production-ready project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Github repo available at: &lt;a href="https://github.com/kresohr/youtube-summary" rel="noopener noreferrer"&gt;https://github.com/kresohr/youtube-summary&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>youtube</category>
      <category>javascript</category>
    </item>
    <item>
      <title>TypeScript to Go: Why does it really matter?</title>
      <dc:creator>Krešimir Iličić</dc:creator>
      <pubDate>Mon, 07 Jul 2025 17:57:29 +0000</pubDate>
      <link>https://dev.to/kresohr/typescript-to-go-why-does-it-really-matter-10da</link>
      <guid>https://dev.to/kresohr/typescript-to-go-why-does-it-really-matter-10da</guid>
      <description>&lt;p&gt;If you are in the web development world, you already know that TypeScript compiler will be migrated to Go (unless you've been living under a rock). But why should you care about it? Well the answer is simple, the &lt;strong&gt;typescript compiler migration to golang will improve your developer experience&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this article I'll skip the stuff you have already seen and go straight to the point. People keep claiming “oh, it's 10x faster!”, but let’s pause. What does this actually mean for the folks writing code every day, and is there a catch?&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Really Changing?
&lt;/h2&gt;

&lt;p&gt;TypeScript itself isn’t changing. You’ll still write code the same way. What’s really changing is the compiler. You may know it as that strange thing that turns your TypeScript into JavaScript so browsers and servers can run it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before&lt;/strong&gt;: The compiler was written in TypeScript and runs on Node.js.&lt;br&gt;
&lt;strong&gt;After&lt;/strong&gt;: The compiler will be written in Go instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Go, why not simply improve on TS?
&lt;/h2&gt;

&lt;p&gt;Node.js (which is great for lots of things) just isn’t built for heavy lifting in this way. It’s single-threaded, easily gets bogged down, and eats memory like a hungry raccoon.&lt;/p&gt;

&lt;p&gt;On the other hand, Go compiles straight to machine code. It has built-in concurrency (think of it as doing lots of things at once without breaking a sweat).&lt;/p&gt;

&lt;h2&gt;
  
  
  What Will Actually Happen?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Faster Builds&lt;/strong&gt;&lt;br&gt;
You change a file and the build will finish before you blink. Editors like VS Code will feel much faster. Autocomplete, error checking, all those little helpers? They’ll speed up quite a lot, even in monster-sized projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Less Memory Meltdown&lt;/strong&gt;&lt;br&gt;
The Go compiler uses less RAM. No more laptop fans sounding like a Boeing 747 when you open a big repo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Easy Installs, Fewer Headaches&lt;/strong&gt;&lt;br&gt;
Forget the “delete node_modules and pray” ritual. The Go-based compiler can be a single binary. Download, run, done. No more dependency tangles or Node version drama.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Scales Like Crazy&lt;/strong&gt;&lt;br&gt;
Got a huge codebase? Go handles it. Multi-core CPUs finally get used properly. Builds and checks can run in parallel, not stuck in a single line.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Stays the Same?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Client-Side Performance:&lt;/strong&gt;&lt;br&gt;
The JavaScript that ends up in your browser is the same as before. Browsers don’t care how fast your code was compiled, they just run the final JavaScript. So, your website or app’s speed for users won’t change because it's still JavaScript at the end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server-Side Runtime:&lt;/strong&gt;&lt;br&gt;
If your TypeScript code runs on Node.js, it also won’t run faster at runtime for the clients/requests. Only the build process is affected.&lt;/p&gt;

&lt;p&gt;You can basically think of it as an improvement for you as a developer rather than the end client. However, if we end up shipping the product faster, the clients will be happier as well 😆.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;But… Is There a Downside?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s be real. Nothing’s perfect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ecosystem Shift&lt;/strong&gt;: Some tools or plugins that relied on the old Node.js-based compiler might break or need updates. Not everything will work out of the box, at least at first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Curve for Maintainers&lt;/strong&gt;: If you want to hack on the compiler itself, you’ll need to learn Go. Most folks won’t care, but some will.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Platform Quirks&lt;/strong&gt;: Go binaries are easy to ship, but there may be new platform-specific bugs or edge cases, especially early days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not Magic for Runtime&lt;/strong&gt;: Your actual app won’t run faster in the browser or Node.js. Only the build and tooling speed up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happier Devs&lt;/strong&gt;: Less frustration, more focus. That’s worth more than raw speed.&lt;/p&gt;

&lt;p&gt;To read more about the decision behind migration, you should read the "&lt;a href="https://devblogs.microsoft.com/typescript/typescript-native-port/" rel="noopener noreferrer"&gt;A 10x Faster TypeScript&lt;/a&gt;" post from Microsoft.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>go</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Fetch API Trap: When HTTP Errors Don't Land in Catch</title>
      <dc:creator>Krešimir Iličić</dc:creator>
      <pubDate>Wed, 25 Jun 2025 17:04:11 +0000</pubDate>
      <link>https://dev.to/kresohr/the-fetch-api-trap-when-http-errors-dont-land-in-catch-40l6</link>
      <guid>https://dev.to/kresohr/the-fetch-api-trap-when-http-errors-dont-land-in-catch-40l6</guid>
      <description>&lt;p&gt;Many developers assume that if an HTTP request has an error status code like 404 or 500 then the Promise will automatically reject and flow into the catch block. It might surprise you, but that's wrong. &lt;/p&gt;

&lt;p&gt;HTTP errors remain in the try block when using the native Fetch API, unless you manually throw an error (or use something like Axios).&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually happens when we get HTTP 404 from our backend?
&lt;/h2&gt;

&lt;p&gt;The native &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API" rel="noopener noreferrer"&gt;fetch API&lt;/a&gt; is implemented in a way where it only rejects (and triggers your catch block) when there's a genuine network problem. For example, when your internet cuts out or someone misconfigures CORS.&lt;/p&gt;

&lt;p&gt;And what about those HTTP status codes like 4xx and 5xx? They come back as resolved promises, which won't land in the catch block, UNLESS you manually &lt;strong&gt;throw new Error&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Here's what happens with native fetch
async function getFetchData() {
  try {
    const response = await fetch('https://api.example.com/nonexistent');
    console.log('Response status:', response.status);

    // You've got to manually check if everything's actually okay
    if (!response.ok) {
      // This line ensures it gets picked up by the catch block.
      throw new Error(`HTTP error! Status: ${response.status}`);
    }

    const data = await response.json();
    return data;
  } catch (error) {
    console.error('Caught in catch block:', error);
    // By default only network failures or other errors end up here
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you work with the native Fetch API, you have to explicitly check the "response.ok" property to see if something went wrong.&lt;/p&gt;
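&lt;p&gt;Since that check is easy to forget, it's worth centralizing it in one small helper. A sketch; the helper only relies on the &lt;code&gt;ok&lt;/code&gt; and &lt;code&gt;status&lt;/code&gt; properties, which also makes it easy to unit test with plain objects:&lt;/p&gt;

```javascript
// Throw for non-2xx responses so HTTP errors flow into catch
// the same way network errors do.
// Works on anything response-shaped (ok, status).
function assertOk(response) {
  if (!response.ok) {
    throw new Error(`HTTP error! Status: ${response.status}`);
  }
  return response;
}

// Usage with the native Fetch API (network call, not run here):
//   const data = await fetch(url).then(assertOk).then((r) => r.json());
```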

&lt;h2&gt;
  
  
  Why Does This Confuse So Many Developers?
&lt;/h2&gt;

&lt;p&gt;No wonder so many developers get confused when you consider that popular libraries like Axios work completely differently. Axios actually does what most of us would expect: HTTP errors automatically trigger the catch block.&lt;/p&gt;

&lt;h3&gt;
  
  
  How?
&lt;/h3&gt;

&lt;p&gt;Axios is set up so that the promise gets rejected if we receive an HTTP status like 400 or 500. You can read up on the &lt;a href="https://axios-http.com/docs/intro" rel="noopener noreferrer"&gt;Axios documentation&lt;/a&gt; to see how it works behind the scenes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// With Axios, it works like you'd expect
import axios from 'axios';

async function getAxiosData() {
  try {
    // HTTP errors (4xx, 5xx) will immediately jump to catch
    const response = await axios.get('https://api.example.com/nonexistent');
    // This only executes for successful responses (2xx)
    console.log('Data:', response.data);
    return response.data;
  } catch (error) {
    // Both network errors AND HTTP errors land here
    console.error('Caught in catch block:', error.message);
    console.error('Status code:', error.response?.status);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Should You Care About This?
&lt;/h2&gt;

&lt;p&gt;This isn't just some technical trivia: it actually matters for your day-to-day coding because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Switching between libraries can introduce unexpected bugs if you're not aware of this difference&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With fetch, forgetting to check response.ok can lead to silent failures that are a nightmare to debug&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Legacy codebases often use the native Fetch API rather than something like Axios&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>http</category>
      <category>axios</category>
      <category>frontend</category>
      <category>fetch</category>
    </item>
  </channel>
</rss>
