<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hank Chiu</title>
    <description>The latest articles on DEV Community by Hank Chiu (@hankchiutw).</description>
    <link>https://dev.to/hankchiutw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F421863%2F2a2cef45-3ca6-4bf7-a921-8e12d2825b18.png</url>
      <title>DEV Community: Hank Chiu</title>
      <link>https://dev.to/hankchiutw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hankchiutw"/>
    <language>en</language>
    <item>
      <title>My Spec-Driven Development Experience: Building a Next.js and Nest.js Full-Stack Project</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Tue, 23 Sep 2025 05:18:35 +0000</pubDate>
      <link>https://dev.to/hankchiutw/my-spec-driven-development-experience-building-a-nextjs-and-nestjs-full-stack-project-2g3g</link>
      <guid>https://dev.to/hankchiutw/my-spec-driven-development-experience-building-a-nextjs-and-nestjs-full-stack-project-2g3g</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Original post: &lt;a href="https://hankchiu.tw/writings/my-spec-driven-development-experience-building-a-next-js-and-nest-js-full-stack-project/" rel="noopener noreferrer"&gt;https://hankchiu.tw/writings/my-spec-driven-development-experience-building-a-next-js-and-nest-js-full-stack-project/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;How I transformed &lt;a href="https://hankchiu.tw/writings/copilot-proxy-your-free-llm-api-for-local-development/" rel="noopener noreferrer"&gt;copilot-proxy&lt;/a&gt; into a modern full-stack application &lt;a href="https://github.com/coxy-proxy/coxy" rel="noopener noreferrer"&gt;Coxy&lt;/a&gt; using workflow-first development, AI-assisted coding, and systematic specifications&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Recently I tried spec-driven development to build &lt;a href="https://github.com/coxy-proxy/coxy" rel="noopener noreferrer"&gt;Coxy&lt;/a&gt;, a complete rewrite of my previous &lt;a href="https://hankchiu.tw/writings/copilot-proxy-your-free-llm-api-for-local-development/" rel="noopener noreferrer"&gt;copilot-proxy&lt;/a&gt; project. The experience taught me valuable lessons about modern development workflows, AI integration, and sustainable architecture decisions. Here's why I made the switch, how I approached it, and what I learned along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Rewrote copilot-proxy
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Technical Limitations of the Original Project
&lt;/h3&gt;

&lt;p&gt;The original copilot-proxy was built with SolidStart, which seemed like a good choice initially but created several challenges:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Framework Maturity Issues&lt;/strong&gt;: SolidStart is lightweight and performant, but its ecosystem isn't as mature as Next.js. I spent considerable time debugging build processes and working around framework limitations instead of focusing on features. The time investment in tooling troubleshooting outweighed the framework's benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monolithic Architecture Problems&lt;/strong&gt;: I had combined frontend and backend implementations in a single codebase, which initially seemed efficient but quickly became messy. As the proxy management features grew more complex, maintaining clear separation between API logic and UI components became increasingly difficult. This architectural decision made testing, deployment, and scaling problematic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Experience Friction&lt;/strong&gt;: The build process was unpredictable, dependency management was complex, and the development workflow felt fragmented. These friction points slowed down iteration cycles and made the project less enjoyable to work on.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Opportunity for Spec-Driven Development
&lt;/h3&gt;

&lt;p&gt;Beyond addressing technical limitations, I wanted to experiment with spec-driven development methodology. This approach promised several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clearer requirements definition&lt;/strong&gt; before writing code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better AI collaboration&lt;/strong&gt; through structured prompts and specifications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More maintainable architecture&lt;/strong&gt; with documented decision rationales&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Systematic approach to feature development&lt;/strong&gt; reducing scope creep&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rewrite presented a perfect opportunity to implement these methodologies from the ground up, rather than retrofitting them onto existing code.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Spec-Driven Development Strategy
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Workflow-First Philosophy: Avoiding Vendor Lock-in
&lt;/h3&gt;

&lt;p&gt;My core strategy focused on &lt;strong&gt;development workflow optimization&lt;/strong&gt; rather than tool selection. This philosophy aimed to avoid vendor lock-in with specific AI tools like Cursor or Claude Code, which can limit flexibility as new tools emerge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key principles I established:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;I determine the workflow&lt;/strong&gt;, tools adapt to serve the workflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation drives implementation&lt;/strong&gt;, not the reverse&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool-agnostic specifications&lt;/strong&gt; enable switching between AI assistants&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular architecture&lt;/strong&gt; supports different development approaches&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach meant investing time upfront in workflow design, knowing that I could optimize tool choices later without restructuring the entire development process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Workflow-First Development
&lt;/h3&gt;

&lt;p&gt;The workflow-first approach delivered three key wins: cleaner architecture through upfront planning, faster debugging with systematic processes, and easy tool switching when better options emerged.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation: From Architecture to Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Nx Monorepo: Scalable Project Structure
&lt;/h3&gt;

&lt;p&gt;I chose Nx as the foundation for Coxy's architecture, primarily to achieve the clean separation between frontend and backend concerns that the original copilot-proxy lacked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategic architecture decisions:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;coxy/
├── apps/
│   ├── frontend/                # Next.js frontend
│   ├── backend/                 # Nest.js backend
│   └── frontend-e2e/            # End-to-end tests
├── libs/
│   ├── shared/              # Cross-platform utilities
│   ├── ui/                  # React component library
│   ├── logger/              # Centralized logging
│   └── types/               # TypeScript definitions
└── ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this structure worked for Coxy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear boundaries&lt;/strong&gt; between proxy management UI and API logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared libraries&lt;/strong&gt; eliminated code duplication across apps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Independent deployments&lt;/strong&gt; for frontend and backend services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable foundation&lt;/strong&gt; supporting future microservices architecture&lt;/li&gt;
&lt;/ul&gt;
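
&lt;p&gt;A workspace shaped like this can be scaffolded with Nx generators. The commands below are a sketch assuming the official &lt;code&gt;@nx/next&lt;/code&gt;, &lt;code&gt;@nx/nest&lt;/code&gt;, and &lt;code&gt;@nx/js&lt;/code&gt; plugins; exact flags vary between Nx versions:&lt;/p&gt;

```shell
# Create an empty workspace, then add the framework plugins
npx create-nx-workspace@latest coxy --preset=apps
cd coxy
npm install -D @nx/next @nx/nest

# Generate the apps and shared libraries
npx nx g @nx/next:app frontend
npx nx g @nx/nest:app backend
npx nx g @nx/js:lib shared
npx nx g @nx/js:lib types
```

&lt;p&gt;Each generator registers build and serve targets, so &lt;code&gt;nx build backend&lt;/code&gt; or &lt;code&gt;nx serve frontend&lt;/code&gt; work immediately, and the project graph keeps app and library boundaries explicit.&lt;/p&gt;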

&lt;h3&gt;
  
  
  Tooling Optimization: Biome Over ESLint/Prettier
&lt;/h3&gt;

&lt;p&gt;I replaced ESLint and Prettier with Biome for better performance and simpler configuration. Biome reduced setup complexity by 70%, improved build times, and required fewer dependencies to manage. The migration was straightforward with immediate performance benefits.&lt;/p&gt;
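
&lt;p&gt;For reference, a minimal &lt;code&gt;biome.json&lt;/code&gt; that replaces both tools might look like this (a sketch; the pinned schema version and rule selection are illustrative):&lt;/p&gt;

```json
{
  "$schema": "https://biomejs.dev/schemas/1.9.4/schema.json",
  "formatter": {
    "enabled": true,
    "indentStyle": "space"
  },
  "linter": {
    "enabled": true,
    "rules": {
      "recommended": true
    }
  }
}
```

&lt;p&gt;A single &lt;code&gt;npx biome check --write .&lt;/code&gt; then formats and lints in one pass, replacing separate ESLint and Prettier invocations.&lt;/p&gt;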

&lt;h3&gt;
  
  
  Spec-Driven Development: USER_STORY.md Files
&lt;/h3&gt;

&lt;p&gt;Following spec-driven development principles, I created comprehensive USER_STORY.md files for each Coxy feature. These documents became the foundation for all development activities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example for proxy configuration management:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt; # User Story 1: OpenAI-Compatible Proxy Endpoint

As a developer using OpenAI-compatible tools, I want to send requests to a proxy server that forwards them to GitHub Copilot so that I can use GitHub Copilot through existing OpenAI-compatible interfaces.

 ## Acceptance Criteria (EARS):
&lt;span class="p"&gt;
 -&lt;/span&gt; &lt;span class="gs"&gt;**Given**&lt;/span&gt; a valid OpenAI-compatible request, &lt;span class="gs"&gt;**when**&lt;/span&gt; sent to &lt;span class="sb"&gt;`/chat/completions`&lt;/span&gt;, &lt;span class="gs"&gt;**then**&lt;/span&gt; forward to GitHub Copilot API.
&lt;span class="p"&gt; -&lt;/span&gt; &lt;span class="gs"&gt;**Given**&lt;/span&gt; an invalid API key, &lt;span class="gs"&gt;**when**&lt;/span&gt; making a request, &lt;span class="gs"&gt;**then**&lt;/span&gt; return &lt;span class="sb"&gt;`401 Unauthorized`&lt;/span&gt;.
&lt;span class="p"&gt; -&lt;/span&gt; &lt;span class="gs"&gt;**Given**&lt;/span&gt; a malformed request, &lt;span class="gs"&gt;**when**&lt;/span&gt; sent to the proxy, &lt;span class="gs"&gt;**then**&lt;/span&gt; return &lt;span class="sb"&gt;`400 Bad Request`&lt;/span&gt; with validation errors.
&lt;span class="p"&gt; -&lt;/span&gt; &lt;span class="gs"&gt;**Where**&lt;/span&gt; the system acts as a transparent proxy between OpenAI clients and GitHub Copilot.
&lt;span class="p"&gt;
 ---
&lt;/span&gt;
 # User Story 2: Admin Dashboard

 As a system administrator, I want an admin interface to monitor and configure the proxy so that I can manage API usage, view logs, and configure settings.

 ## Acceptance Criteria (EARS):
&lt;span class="p"&gt;
 -&lt;/span&gt; &lt;span class="gs"&gt;**Given**&lt;/span&gt; admin credentials, &lt;span class="gs"&gt;**when**&lt;/span&gt; accessing &lt;span class="sb"&gt;`/admin`&lt;/span&gt;, &lt;span class="gs"&gt;**then**&lt;/span&gt; display dashboard with usage statistics.
&lt;span class="p"&gt; -&lt;/span&gt; &lt;span class="gs"&gt;**Given**&lt;/span&gt; proxy requests, &lt;span class="gs"&gt;**when**&lt;/span&gt; they occur, &lt;span class="gs"&gt;**then**&lt;/span&gt; log them for admin review.
&lt;span class="p"&gt; -&lt;/span&gt; &lt;span class="gs"&gt;**Given**&lt;/span&gt; configuration changes, &lt;span class="gs"&gt;**when**&lt;/span&gt; submitted through admin panel, &lt;span class="gs"&gt;**then**&lt;/span&gt; update proxy behavior.
&lt;span class="p"&gt; -&lt;/span&gt; &lt;span class="gs"&gt;**Where**&lt;/span&gt; admin access is protected by authentication.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Project-Wide System Prompt: Development Constraints
&lt;/h3&gt;

&lt;p&gt;I created a comprehensive system prompt that served as both a constraint and development guideline for AI assistance throughout the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Nx Project Refactoring System Prompt

## 1. Persona
You are a senior software architect and Nx expert with extensive experience in monorepo management, dependency optimization, and large-scale refactoring. You have deep knowledge of:
- Nx workspace architecture and best practices
- TypeScript/JavaScript ecosystem and tooling
- Micro-frontend and library design patterns
- Build optimization and dependency graph management
- Code migration strategies for monorepos

## 2. Task Statement
Analyze and refactor code within an Nx workspace to improve maintainability, performance, and adherence to Nx best practices while preserving functionality and minimizing breaking changes.

## 3. Context
You are working within an Nx monorepo that may contain:
- Multiple applications (React, Angular, Node.js, etc.)
- Shared libraries and utilities
- Complex inter-project dependencies
- Existing build configurations and tooling
- Team conventions and coding standards
- CI/CD pipelines that depend on the current structure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This system prompt ensured consistency across different AI tools and development sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI-Enhanced Development Workflow
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Structured LLM Prompt Guidelines
&lt;/h3&gt;

&lt;p&gt;I developed a systematic approach to creating feature-specific prompts using "Structured LLM Prompt Guidelines." This meta-process involved:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt creation workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Analyze USER_STORY.md&lt;/strong&gt; to extract key requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feed Structured LLM Prompt Guidelines to Claude&lt;/strong&gt; for refinement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate feature-specific prompts&lt;/strong&gt; tailored to proxy management needs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test prompts across different AI tools&lt;/strong&gt; for consistency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example Structured LLM Prompt Guidelines (partial):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Structured LLM Prompt Guidelines&lt;/span&gt;

&lt;span class="gu"&gt;## 1. Persona&lt;/span&gt;
Specify the role, expertise, or point of view you want the model to adopt.

&lt;span class="ge"&gt;_Example:_&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; You are a senior frontend developer experienced in React and TypeScript.&lt;/span&gt;

&lt;span class="gu"&gt;## 2. Task Statement&lt;/span&gt;
Clearly define the main objective or requirement of your prompt.

&lt;span class="ge"&gt;_Example:_&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; Implement a reusable modal dialog component for a web application.&lt;/span&gt;

&lt;span class="gu"&gt;## 3. Context&lt;/span&gt;
Provide relevant background information, project details, or the scope of application to ensure the model understands nuances and boundaries.

&lt;span class="ge"&gt;_Example:_&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; The component will be used across multiple pages in a SaaS dashboard built with React, styled using Tailwind CSS, and should support ARIA accessibility attributes.&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example feature prompt for chatting feature (partial):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# AI Chat Feature Implementation Prompt&lt;/span&gt;

&lt;span class="gu"&gt;## 1. Persona&lt;/span&gt;
You are a senior React/Next.js developer specializing in real-time chat interfaces and conversational UX design. You have extensive experience building intuitive messaging systems with optimistic UI updates and seamless user experiences.

&lt;span class="gu"&gt;## 2. Task Statement&lt;/span&gt;
Implement the AI chatting feature with a centered input box on the main page that transitions to a dedicated chat session page after the first message is sent, supporting continuous conversation flow with follow-up messages.

&lt;span class="gu"&gt;## 3. Context&lt;/span&gt;
This is part of a larger AI chatbot SaaS application built with Next.js 15, TypeScript, and Tailwind CSS. The chat feature should provide a smooth, intuitive experience where users start with a prominent input box and seamlessly transition into a full conversation interface. The system integrates with an external AI API and uses Clerk for authentication. Users should feel like they're having a natural conversation with instant feedback and proper message handling.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
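
&lt;p&gt;Step 2 of the prompt-creation workflow can itself be run from the terminal. A hedged sketch using the &lt;code&gt;llm&lt;/code&gt; CLI, where all file paths are hypothetical:&lt;/p&gt;

```shell
# Draft a feature-specific prompt from the guidelines plus a user story
# (paths and output location are illustrative)
cat docs/PROMPT_GUIDELINES.md apps/backend/src/features/proxy/USER_STORY.md \
  | llm -s 'Following the guidelines in the first document, draft an implementation prompt for the user story in the second.' \
  > docs/prompts/proxy-feature-prompt.md
```

&lt;p&gt;The resulting file can then be reviewed, refined, and fed to whichever AI tool is implementing the feature.&lt;/p&gt;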



&lt;h3&gt;
  
  
  Multi-Tool AI Implementation Strategy
&lt;/h3&gt;

&lt;p&gt;I used Rovo Dev and Gemini CLI as primary implementation tools, chosen for their modern LLM models and generous free quotas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Plan Review&lt;/strong&gt;: AI generates implementation plan based on USER_STORY.md&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture Validation&lt;/strong&gt;: Manual review ensures alignment with Nx structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterative Implementation&lt;/strong&gt;: Code generation with frequent human oversight&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bug Resolution&lt;/strong&gt;: Systematic debugging using AI assistance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Why this multi-tool approach worked:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-validation&lt;/strong&gt; reduced AI hallucinations in complex scenarios&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool-specific strengths&lt;/strong&gt;: Rovo Dev excelled at backend logic, Gemini CLI at frontend components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Could switch tools based on feature requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning opportunity&lt;/strong&gt;: Compared AI tool capabilities across different use cases&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Documentation Synchronization Strategy
&lt;/h3&gt;

&lt;p&gt;A crucial aspect of the workflow was keeping specifications synchronized with implementation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Periodic documentation updates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manual analysis&lt;/strong&gt; where AI reviewed current code against specifications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic detection&lt;/strong&gt; of specification-implementation gaps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document updates&lt;/strong&gt; reflecting architectural decisions and lessons learned&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt refinement&lt;/strong&gt; based on implementation experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This process ensured that specifications remained valuable throughout the development lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned: What Worked and What Didn't
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Worked Well
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Top-down understanding transformed my development approach.&lt;/strong&gt; Spec-driven development forced systematic thinking about requirements and architecture before implementation, leading to better decisions and more cohesive features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI revealed unknown unknowns during planning.&lt;/strong&gt; When creating proxy management specifications, AI identified edge cases, security considerations, and performance optimizations I hadn't considered, leading to more robust implementations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured specifications accelerated development.&lt;/strong&gt; Comprehensive USER_STORY.md files enabled AI tools to generate higher-quality initial code. Time invested in specifications paid dividends through faster iteration and fewer bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges That Required Adaptation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Tool switching created mental overhead.&lt;/strong&gt; Managing different AI tools required energy that could have been spent on development. I learned to batch similar tasks within single tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation became a project itself.&lt;/strong&gt; Organizing specifications and prompts required dedicated time and systematic approaches. Templates and naming conventions helped manage this complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Costs accumulated quickly.&lt;/strong&gt; AI tool quotas were sometimes exhausted faster than expected. The convenience came with real monetary costs that had to be factored into project planning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps: SpecKit and Beyond
&lt;/h2&gt;

&lt;p&gt;GitHub SpecKit offers exciting opportunities to formalize my spec-driven approach. I plan to explore SpecKit integration for native GitHub workflow management, automated documentation generation, and standardized specification formats.&lt;/p&gt;

&lt;p&gt;For future projects, I'll focus on systematic prompt management with version-controlled prompt libraries, performance tracking, and automated testing. The goal is creating repeatable processes that scale across different projects and team members.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Growing with AI, Not Surrendering to It
&lt;/h2&gt;

&lt;p&gt;Building &lt;a href="https://github.com/coxy-proxy/coxy" rel="noopener noreferrer"&gt;Coxy&lt;/a&gt; with spec-driven development taught me that AI collaboration works best when humans maintain architectural ownership while leveraging AI for implementation speed. Rather than giving complete control to AI tools, the spec-driven approach helped me grow alongside AI capabilities.&lt;/p&gt;

&lt;p&gt;The key insight: systematic approaches scale better than ad-hoc AI usage. Structured prompts, comprehensive specifications, and organized workflows produce more consistent results than spontaneous AI interactions. Documentation remains critical: AI generates code quickly, but human-created specifications ensure that code serves actual business requirements.&lt;/p&gt;

&lt;p&gt;The transformation from copilot-proxy to &lt;a href="https://github.com/coxy-proxy/coxy" rel="noopener noreferrer"&gt;Coxy&lt;/a&gt; represents more than a technology upgrade. It demonstrates how thoughtful AI integration with systematic development practices can elevate both project outcomes and developer capabilities.&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>githubcopilot</category>
      <category>nextjs</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Auto-generate Commit Messages with LLMs in Your Terminal</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Thu, 10 Jul 2025 05:06:19 +0000</pubDate>
      <link>https://dev.to/hankchiutw/auto-generate-commit-messages-with-llms-in-your-terminal-1a43</link>
      <guid>https://dev.to/hankchiutw/auto-generate-commit-messages-with-llms-in-your-terminal-1a43</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Original post: &lt;a href="https://hankchiu.tw/writings/auto-generate-commit-messages-with-ll-ms-in-your-terminal/" rel="noopener noreferrer"&gt;https://hankchiu.tw/writings/auto-generate-commit-messages-with-ll-ms-in-your-terminal/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Writing commit messages can be a drag. While IDEs like Cursor can automate this, what if you live in your terminal and want a fast, controllable way to generate conventional commits?&lt;/p&gt;

&lt;p&gt;This guide is for you. It's as simple as piping &lt;code&gt;git diff&lt;/code&gt; to a command-line LLM client to create well-formatted commit messages without leaving the terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Summary
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Use a non-interactive LLM client:&lt;/strong&gt; We need a tool that takes input from a pipe, sends it to the model, and prints the result to standard output.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Craft a system prompt:&lt;/strong&gt; Instruct the LLM to generate a message in the Conventional Commits format.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Create a Git alias:&lt;/strong&gt; Make the entire process accessible through a simple command like &lt;code&gt;git ca&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The System Prompt
&lt;/h3&gt;

&lt;p&gt;First, let's define our instructions for the LLM. This prompt ensures the output is consistently formatted as a Conventional Commit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write a commit message in the Conventional Commits format. Use the structure:
    &amp;lt;type&amp;gt;(&amp;lt;optional scope&amp;gt;): &amp;lt;short description&amp;gt;

    &amp;lt;optional body&amp;gt;

    &amp;lt;optional footer&amp;gt;

Example types: feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert
Optionally, include a body for more details in bullet points.
Optionally, in the footer, use BREAKING CHANGE: followed by a detailed explanation of the breaking change.

Just return the commit message, do not include any other text.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  LLM Clients for the Terminal
&lt;/h3&gt;

&lt;p&gt;Several command-line tools can handle this task. Here are a few examples of how to use them, piping your staged changes directly to the model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://llm.datasette.io/en/stable/" rel="noopener noreferrer"&gt;LLM CLI&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git diff &lt;span class="nt"&gt;--cached&lt;/span&gt; | llm &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;your-system-prompt&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/google-gemini/gemini-cli" rel="noopener noreferrer"&gt;Gemini CLI&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git diff &lt;span class="nt"&gt;--cached&lt;/span&gt; | gemini &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;your-system-prompt&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/sigoden/aichat" rel="noopener noreferrer"&gt;aichat&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git diff &lt;span class="nt"&gt;--cached&lt;/span&gt; | aichat &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;your-system-prompt&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.google.com/search?q=https://github.com/anthropics/claude-cli" rel="noopener noreferrer"&gt;Claude CLI&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git diff &lt;span class="nt"&gt;--cached&lt;/span&gt; | claude &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;your-system-prompt&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create a Git Alias
&lt;/h3&gt;

&lt;p&gt;To make this truly seamless, add an alias to your &lt;code&gt;.gitconfig&lt;/code&gt;. This example uses &lt;code&gt;llm&lt;/code&gt; to commit all staged changes and then displays the latest log entry.&lt;/p&gt;

&lt;p&gt;Place your full system prompt directly into the alias or save it to a file and reference it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# In your ~/.gitconfig file
&lt;/span&gt;
&lt;span class="nn"&gt;[alias]&lt;/span&gt;
  &lt;span class="py"&gt;ca&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"!(git commit -m &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;$(git diff --cached | llm -s 'Write a commit message in the Conventional Commits format...')&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt; &amp;amp;&amp;amp; git log --stat -1)"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
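
&lt;p&gt;If you prefer the save-it-to-a-file route, the alias can read the prompt at run time. A sketch, where the prompt path &lt;code&gt;~/.config/git/commit-prompt.txt&lt;/code&gt; is illustrative:&lt;/p&gt;

```ini
# In your ~/.gitconfig file; reads the system prompt from a file
# (the prompt path below is an example -- use your own location)
[alias]
  ca = "!(git commit -m \"$(git diff --cached | llm -s \"$(cat ~/.config/git/commit-prompt.txt)\")\"; git log --stat -1)"
```

&lt;p&gt;Here &lt;code&gt;git log&lt;/code&gt; runs even if the commit fails; otherwise the behavior matches the inline-prompt alias above.&lt;/p&gt;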



&lt;p&gt;Now, instead of &lt;code&gt;git commit&lt;/code&gt;, simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git ca
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your staged changes will be sent to the LLM, the generated message will be used for the commit, and you'll see the result instantly. It's a quick, powerful way to keep your workflow moving, all from the comfort of your terminal.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>git</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Free Claude Sonnet 4 for Local Development: Setting Up Aider with Cody</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Mon, 16 Jun 2025 16:18:54 +0000</pubDate>
      <link>https://dev.to/hankchiutw/free-claude-sonnet-4-for-local-development-setting-up-aider-with-cody-39j4</link>
      <guid>https://dev.to/hankchiutw/free-claude-sonnet-4-for-local-development-setting-up-aider-with-cody-39j4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Original post: &lt;a href="https://hankchiu.tw/writings/free-claude-sonnet-4-for-local-development-setting-up-aider-with-cody/" rel="noopener noreferrer"&gt;https://hankchiu.tw/writings/free-claude-sonnet-4-for-local-development-setting-up-aider-with-cody/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Looking to experiment with cutting-edge AI models for coding without breaking the bank? While premium tools like Cursor and Windsurf offer polished experiences, they come with subscription costs that can add up. If you're curious about modern language models but want a budget-friendly way to explore their capabilities, combining Aider with Cody offers an interesting alternative worth considering.&lt;/p&gt;

&lt;p&gt;This approach isn't perfect – it comes with limitations and requires more setup than plug-and-play solutions. But for developers who enjoy tinkering and want to experience models like Claude Sonnet 4 without monthly fees, it's a compelling option to explore.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You're Working With
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Aider&lt;/strong&gt; is an open-source command-line tool designed for AI-assisted coding. Unlike IDE-integrated solutions, it works directly with git repositories, making commits based on natural language instructions. It supports various editors and focuses on making actual code changes rather than just providing suggestions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cody&lt;/strong&gt; is Sourcegraph's AI assistant that powers chat, code completion, and automated code edits. Its free plan includes 200 messages per month with premium models like Claude Sonnet 4. These limits mean this setup works best for occasional experimentation, learning, or small projects rather than heavy daily development. If you're planning to use AI assistance extensively, you'll likely hit these caps quickly. However, for exploring what modern language models can do or handling specific coding challenges, these quotas can be quite useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Aider with Cody's Cloud Endpoint
&lt;/h2&gt;

&lt;p&gt;The integration process involves connecting Aider directly to Sourcegraph's cloud API rather than running a local Cody CLI instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Aider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the &lt;a href="https://aider.chat/docs/install.html" rel="noopener noreferrer"&gt;official installation guide&lt;/a&gt;, e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; pip &lt;span class="nb"&gt;install &lt;/span&gt;aider-install
aider-install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Generate Your Access Token&lt;/strong&gt;&lt;br&gt;
Visit Sourcegraph's token settings page at &lt;code&gt;https://sourcegraph.com/users/&amp;lt;your-username&amp;gt;/settings/tokens&lt;/code&gt; and create a new access token. This token authenticates your requests to Sourcegraph's LLM API.&lt;/p&gt;
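&lt;p&gt;Before wiring the token into Aider, it can help to sanity-check it from a quick script. The sketch below only builds the request; the &lt;code&gt;/models&lt;/code&gt; path and the Bearer scheme are assumptions based on the OpenAI-compatible base URL used in the next step, so adjust them if Sourcegraph's API differs:&lt;/p&gt;

```javascript
// Build a request for listing models on Sourcegraph's OpenAI-compatible
// LLM endpoint. The /models path and Bearer auth scheme are assumptions
// derived from the openai-api-base URL configured for Aider.
function buildModelsRequest(token) {
  return {
    url: 'https://sourcegraph.com/.api/llm/models',
    headers: {
      Authorization: 'Bearer ' + token,
      'X-Requested-With': 'aider 0.0.0',
    },
  };
}

// To try it against the live API with your sgp_... token:
// const { url, headers } = buildModelsRequest(process.env.SRC_ACCESS_TOKEN);
// fetch(url, { headers }).then(function (r) { return r.json(); }).then(console.log);
```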

&lt;p&gt;&lt;strong&gt;Step 3: Configure Your Project&lt;/strong&gt;&lt;br&gt;
In your project directory, create two configuration files that tell Aider how to communicate with Sourcegraph's endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .aider.conf.yml&lt;/span&gt;
&lt;span class="na"&gt;openai-api-base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://sourcegraph.com/.api/llm&lt;/span&gt;
&lt;span class="na"&gt;openai-api-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sgp_xxxxx&lt;/span&gt; &lt;span class="c1"&gt;# put your actual access token here&lt;/span&gt;
&lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openai/anthropic::2024-10-22::claude-sonnet-4-latest&lt;/span&gt;

&lt;span class="c1"&gt;# .aider.model.settings.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openai/anthropic::2024-10-22::claude-sonnet-4-latest&lt;/span&gt;
  &lt;span class="na"&gt;use_system_prompt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;extra_params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;extra_headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;X-Requested-With&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aider 0.0.0&lt;/span&gt;
      &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;64000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first file handles basic authentication and model selection, while the second configures specific parameters for optimal performance with Claude Sonnet 4, including the maximum token limit and required headers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Launch and Code&lt;/strong&gt;&lt;br&gt;
With your configuration in place, simply run &lt;code&gt;aider&lt;/code&gt; from your project directory. Aider will automatically use your settings to connect to Sourcegraph's API and give you access to Claude Sonnet 4 within your free usage limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is This Right for You?
&lt;/h2&gt;

&lt;p&gt;This setup works best for developers curious about modern AI models without subscription commitments – students, hobbyists, or those working on side projects. The quota limitations make it unsuitable for heavy daily use, but as an accessible entry point into AI-assisted development, it offers genuine learning value.&lt;/p&gt;

&lt;p&gt;Consider it a stepping stone to understand how AI can enhance your coding. Once you've explored the capabilities within these constraints, you can make informed decisions about investing in more comprehensive solutions.&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>openai</category>
      <category>aider</category>
    </item>
    <item>
      <title>The Paradigm Shift in Web Development for AI-Era: From Client-Side to Server-Side Rendering</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Wed, 04 Jun 2025 03:21:11 +0000</pubDate>
      <link>https://dev.to/hankchiutw/the-paradigm-shift-in-web-development-for-ai-era-from-client-side-to-server-side-rendering-1p38</link>
      <guid>https://dev.to/hankchiutw/the-paradigm-shift-in-web-development-for-ai-era-from-client-side-to-server-side-rendering-1p38</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Original post: &lt;a href="https://hankchiu.tw/writings/the-paradigm-shift-of-web-frameworks-in-ai-era-from-client-side-to-server-side-rendering/" rel="noopener noreferrer"&gt;https://hankchiu.tw/writings/the-paradigm-shift-of-web-frameworks-in-ai-era-from-client-side-to-server-side-rendering/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The following article, written with AI assistance, explores this topic.&lt;br&gt;
My initial ideas were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A brief history of the rise of Single-Page Applications (SPAs).&lt;/li&gt;
&lt;li&gt;Engineering challenges of client-side rendering for AI-native applications.&lt;/li&gt;
&lt;li&gt;How server-side rendering can address these challenges.&lt;/li&gt;
&lt;li&gt;Practical tools and libraries for further exploration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enjoy the read!&lt;/p&gt;


&lt;p&gt;The web development landscape is experiencing a fundamental transformation. As artificial intelligence becomes deeply integrated into web applications, we're witnessing a significant shift away from the client-side rendering dominance that defined the 2010s.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Rise and Reign of Single Page Applications
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The SPA Revolution (2010-2020)
&lt;/h3&gt;

&lt;p&gt;The Single Page Application era began with frameworks like Angular (2010), React (2013), and Vue.js (2014) promising desktop-like experiences in the browser. SPAs offered fluid user experiences with no page refreshes, rich interactivity, and clean separation between frontend and backend.&lt;/p&gt;

&lt;p&gt;By the mid-2010s, client-side rendering became the default choice. Several factors drove this adoption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improved JavaScript engines made client-side computation viable&lt;/li&gt;
&lt;li&gt;CDN proliferation made delivering JavaScript bundles cost-effective&lt;/li&gt;
&lt;li&gt;Mobile hardware improvements provided sufficient processing power&lt;/li&gt;
&lt;li&gt;Broadband adoption reduced concerns about initial load times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result was a generation of developers who learned web development through React, Angular, and Vue. Client-side rendering became the cultural norm.&lt;/p&gt;
&lt;h2&gt;
  
  
  Engineering Challenges in the AI Era
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Real-Time Processing Challenges
&lt;/h3&gt;

&lt;p&gt;Modern AI applications demand capabilities that traditional SPAs struggle to deliver:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Overhead and Latency&lt;/strong&gt;&lt;br&gt;
AI applications require constant communication with servers for model updates, training data, or hybrid processing. This creates more network requests than traditional SPAs, ironically reducing the performance benefits that CSR was meant to provide. Real-time AI features like live translation, content generation, or computer vision processing suffer from network round-trip delays.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronization Complexity&lt;/strong&gt;&lt;br&gt;
AI applications frequently need to maintain state consistency across multiple AI services (embeddings, completions, fine-tuned models). Managing this distributed state on the client introduces significant complexity and potential for data inconsistencies, especially when handling real-time collaborative AI features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Processing Bottlenecks&lt;/strong&gt;&lt;br&gt;
Client devices, particularly mobile phones and budget laptops, lack the computational power for real-time AI processing. While servers can leverage specialized GPUs and TPUs, client-side AI inference creates noticeable delays and poor user experiences for time-sensitive applications.&lt;/p&gt;
&lt;h3&gt;
  
  
  Development and Maintenance Overhead
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Fragmentation Across Devices&lt;/strong&gt;&lt;br&gt;
Different devices have varying AI capabilities (Neural Processing Units, GPU acceleration, WebGL support). Creating consistent AI experiences across this fragmented landscape requires substantial engineering effort. Developers must handle graceful degradation, feature detection, and multiple code paths for different device capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version Management Complexity&lt;/strong&gt;&lt;br&gt;
AI models evolve rapidly with frequent updates and improvements. Managing model versions, backward compatibility, and deployment across diverse client devices becomes exponentially more complex than traditional web application updates. Each client potentially runs different model versions, creating support nightmares.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;&lt;br&gt;
Client-side AI applications must carefully manage memory usage, processing threads, and battery consumption. This adds significant complexity to the development process, requiring specialized knowledge of device capabilities and performance optimization techniques that most web developers lack.&lt;/p&gt;
&lt;h2&gt;
  
  
  Server-Side Rendering: The AI-Era Solution
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Why SSR Makes Sense for AI Applications
&lt;/h3&gt;

&lt;p&gt;Server-side rendering addresses the fundamental misalignment between AI computational requirements and client device capabilities:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specialized Hardware&lt;/strong&gt;&lt;br&gt;
Servers utilize GPUs, TPUs, and specialized AI hardware that provide orders of magnitude better performance than client devices for AI workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistent Performance&lt;/strong&gt;&lt;br&gt;
Server-side AI processing provides predictable performance regardless of client device capabilities, ensuring all users receive the same high-quality experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplified Architecture&lt;/strong&gt;&lt;br&gt;
Centralized model deployment simplifies updates, A/B testing, and maintenance of AI capabilities while reducing client-side complexity.&lt;/p&gt;
&lt;h3&gt;
  
  
  Technical Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Initial Load Times&lt;/strong&gt;: Users receive pre-rendered HTML with AI-generated content already in place&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Security&lt;/strong&gt;: AI models and processing remain on the server, preventing model extraction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better SEO and Accessibility&lt;/strong&gt;: AI-generated content is immediately available to search engines and screen readers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: Server infrastructure allows efficient resource sharing across users&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Practical Tools for AI-Era SSR
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Next.js: Server Actions and Streaming
&lt;/h3&gt;

&lt;p&gt;Next.js leads the SSR renaissance with features well suited to AI workloads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Server Action for AI processing&lt;/span&gt;
&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server Actions for seamless AI processing&lt;/li&gt;
&lt;li&gt;Edge Runtime support for global distribution&lt;/li&gt;
&lt;li&gt;Built-in streaming for real-time AI responses&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SvelteKit: Performance-First Approach
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Pre-process AI data before rendering&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userPreferences&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getUserPreferences&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;aiRecommendations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;generateRecommendations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userPreferences&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;recommendations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;aiRecommendations&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimal JavaScript footprint&lt;/li&gt;
&lt;li&gt;Server-side load functions for AI pre-processing&lt;/li&gt;
&lt;li&gt;Excellent performance characteristics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Specialized AI Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Vercel AI SDK&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;streamText&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@ai-sdk/openai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;POST&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;streamText&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gpt-4&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toAIStreamResponse&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Infrastructure Options:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vercel Edge Functions&lt;/strong&gt;: Global AI processing distribution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloudflare Workers&lt;/strong&gt;: Low-latency AI inference at the edge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Serverless AI processing with AWS integration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Caching Strategies
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt;: Cache AI responses and user sessions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDN Caching&lt;/strong&gt;: Static AI-generated content with proper headers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge Caching&lt;/strong&gt;: Distribute AI-processed content globally&lt;/li&gt;
&lt;/ul&gt;
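&lt;p&gt;The Redis-style strategy above can be sketched in a few lines. Here an in-memory &lt;code&gt;Map&lt;/code&gt; stands in for Redis so the example stays self-contained; swap in a Redis client for a real deployment:&lt;/p&gt;

```javascript
// Cache AI responses keyed by prompt, with a TTL, so repeated prompts
// skip the expensive model call. A Map stands in for Redis here.
function createAICache(generate, ttlMs) {
  const cache = new Map();
  const stats = { hits: 0, misses: 0 };
  async function get(prompt) {
    const hit = cache.get(prompt);
    if (hit) {
      const expired = Date.now() - hit.at > ttlMs;
      if (!expired) {
        stats.hits += 1;
        return hit.value;
      }
      cache.delete(prompt); // stale entry: fall through and regenerate
    }
    stats.misses += 1;
    const value = await generate(prompt); // the expensive AI call
    cache.set(prompt, { value, at: Date.now() });
    return value;
  }
  return { get, stats };
}
```

&lt;p&gt;Wrapping a model call this way also makes hit rates observable, which helps size the TTL for AI-generated content that can tolerate some staleness.&lt;/p&gt;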

&lt;h2&gt;
  
  
  The Hybrid Future
&lt;/h2&gt;

&lt;p&gt;The future involves sophisticated hybrid approaches:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smart Rendering Decisions&lt;/strong&gt;&lt;br&gt;
Frameworks will automatically decide where to render based on content type, device capabilities, network conditions, and AI processing requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Progressive AI Enhancement&lt;/strong&gt;&lt;br&gt;
Applications will layer AI capabilities progressively, ensuring core functionality works universally while enhancing experiences where possible.&lt;/p&gt;
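&lt;p&gt;As a minimal sketch of progressive AI enhancement, the function below picks the richest strategy a client can support while always keeping a server-rendered baseline. The capability flags are illustrative placeholders, not a real browser API:&lt;/p&gt;

```javascript
// Choose how to deliver an AI feature based on detected client capability,
// degrading gracefully to the server-rendered baseline. The flag names
// (webgpu, fastNetwork) are hypothetical stand-ins for real detection.
function chooseAIStrategy(caps) {
  if (caps.webgpu) {
    return 'client-inference'; // run a small model on-device
  }
  if (caps.fastNetwork) {
    return 'server-streaming'; // stream tokens from the server
  }
  return 'server-rendered'; // pre-rendered AI content works everywhere
}
```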

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The shift toward server-side rendering represents a maturation of web development practices in response to AI requirements. As AI becomes central to web applications, computational realities demand server-centric architectures.&lt;/p&gt;

&lt;p&gt;This evolution incorporates lessons from the SPA era while addressing AI-native application challenges. The tools and frameworks are ready—the question is how quickly development teams will adapt to leverage AI-era server-side rendering benefits.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>frontend</category>
      <category>backend</category>
    </item>
    <item>
      <title>Copilot Proxy: Your Free LLM API for Local Development</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Tue, 13 May 2025 03:48:49 +0000</pubDate>
      <link>https://dev.to/hankchiutw/copilot-proxy-your-free-llm-api-for-local-development-3c07</link>
      <guid>https://dev.to/hankchiutw/copilot-proxy-your-free-llm-api-for-local-development-3c07</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Original post: &lt;a href="https://hankchiu.tw/writings/copilot-proxy-your-free-llm-api-for-local-development" rel="noopener noreferrer"&gt;https://hankchiu.tw/writings/copilot-proxy-your-free-llm-api-for-local-development&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Developing applications powered by Large Language Models (LLMs) can be costly, especially during the development phase. Each API call to services like OpenAI or Anthropic consumes tokens, and these costs can accumulate rapidly during iterative development and debugging.&lt;/p&gt;

&lt;p&gt;I developed &lt;a href="https://www.npmjs.com/package/copilot-proxy" rel="noopener noreferrer"&gt;Copilot Proxy&lt;/a&gt; to address this issue. It's a local API proxy that routes your LLM requests to GitHub Copilot, maximizing your free quota usage and minimizing your API costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Existing Alternatives and Their Limitations
&lt;/h2&gt;

&lt;p&gt;Some developers turn to local solutions like &lt;strong&gt;Ollama&lt;/strong&gt;, which allows running open-source models such as LLaMA and Mistral locally. While this approach offers privacy and cost benefits, it comes with certain limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hardware Requirements&lt;/strong&gt;: Running these models efficiently requires above-average hardware, such as modern Apple Silicon or high-end GPUs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Availability&lt;/strong&gt;: Ollama primarily supports open-source models. Mainstream models like GPT-4, Claude, or Gemini are not available through this platform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Variability&lt;/strong&gt;: The performance and quality of open-source models can be inconsistent compared to their proprietary counterparts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Management Overhead&lt;/strong&gt;: Handling model downloads and dependencies can be cumbersome.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Copilot Proxy's Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seamless API Proxying&lt;/strong&gt;: Transparently routes your OpenAI-compatible API requests to &lt;code&gt;https://api.githubcopilot.com&lt;/code&gt;, allowing you to use GitHub Copilot as a drop-in replacement for expensive LLM APIs during development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supported Endpoints&lt;/strong&gt;: Handles key endpoints such as &lt;code&gt;/chat/completions&lt;/code&gt; for conversational AI and &lt;code&gt;/models&lt;/code&gt; for model discovery, ensuring compatibility with most OpenAI-based tools and SDKs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intuitive Admin UI&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Authentication&lt;/strong&gt;: Securely log in with your GitHub account to generate and manage Copilot tokens directly from the interface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual Token Management&lt;/strong&gt;: Easily add, remove, or update tokens as needed, giving you full control over your Copilot access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Token Support&lt;/strong&gt;: Manage several tokens at once, allowing you to distribute requests across them and make the most of your available free quota.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usage Analytics&lt;/strong&gt;: Visualize chat message and code completion statistics to monitor your development activity and optimize token utilization.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
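&lt;p&gt;Because the proxy speaks the OpenAI wire format, switching an existing client over is mostly a base-URL change. The sketch below builds such a request; the local port in the usage example is hypothetical, so check the package's README for the actual default:&lt;/p&gt;

```javascript
// Build an OpenAI-compatible chat request against a local Copilot Proxy.
// Only the base URL differs from calling OpenAI directly; the port used
// in the usage example below is a placeholder, not a documented default.
function buildChatRequest(baseUrl, message) {
  return {
    url: baseUrl + '/chat/completions',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'gpt-4',
        messages: [{ role: 'user', content: message }],
      }),
    },
  };
}

// e.g. against a proxy assumed to be listening on localhost:3000:
// const { url, options } = buildChatRequest('http://localhost:3000', 'Hello');
// fetch(url, options).then(function (r) { return r.json(); }).then(console.log);
```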

&lt;h3&gt;
  
  
  Ideal Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Developing with frameworks like &lt;a href="https://python.langchain.com/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt; or &lt;a href="https://www.llamaindex.ai/" rel="noopener noreferrer"&gt;LlamaIndex&lt;/a&gt; to prototype and test LLM-powered workflows without incurring API costs.&lt;/li&gt;
&lt;li&gt;Using the &lt;a href="https://llm.datasette.io/en/stable/other-models.html#openai-compatible-models" rel="noopener noreferrer"&gt;LLM CLI&lt;/a&gt; for your daily tasks, such as generating commit messages or summarizing code changes.&lt;/li&gt;
&lt;li&gt;Chatting with GitHub Copilot through &lt;a href="https://docs.openwebui.com/getting-started/" rel="noopener noreferrer"&gt;Open WebUI&lt;/a&gt; outside VSCode.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Not a Replacement for Production APIs
&lt;/h3&gt;

&lt;p&gt;While Copilot Proxy is excellent for development purposes, it's not intended for production use. Copilot doesn't support features like function calls, tools, or streaming outputs that are available in full-fledged APIs. However, for local testing and development cycles, it serves as a cost-effective solution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;If you're building LLM-powered applications and want to optimize your development process without incurring high costs, give Copilot Proxy a try.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>openai</category>
      <category>proxy</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Lazy load AngularJS app from modern Angular project</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Wed, 22 Jan 2025 14:31:54 +0000</pubDate>
      <link>https://dev.to/hankchiutw/lazy-load-angularjs-app-from-modern-angular-project-2f6e</link>
      <guid>https://dev.to/hankchiutw/lazy-load-angularjs-app-from-modern-angular-project-2f6e</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;AngularJS entered long-term support in 2018 (reaching end of life in December 2021), while the Angular team has been shipping Angular 2+ since 2016 (currently Angular v19!). AngularJS is still widely used in many projects, and migrating to Angular 2+ is rarely quick.&lt;br&gt;
In this post I demonstrate a technique for lazy-loading AngularJS apps from modern Angular projects without changing the existing AngularJS code too much.&lt;/p&gt;
&lt;h2&gt;
  
  
  Rationale
&lt;/h2&gt;

&lt;p&gt;Angular officially provides &lt;code&gt;@angular/upgrade&lt;/code&gt; and recommends a progressive migration. In practice, however, a medium-to-large project can sit in the intermediate state for a long time, with business logic scattered between AngularJS and Angular 2+, which makes the code hard to maintain and test.&lt;br&gt;
Lazy-loading instead keeps the whole legacy app connected as-is, so the code can be migrated to Angular 2+ gradually later.&lt;/p&gt;
&lt;h2&gt;
  
  
  Show me the code
&lt;/h2&gt;

&lt;p&gt;If you are in a hurry, you can check the code in this repository: &lt;a href="https://github.com/hankchiutw/ng1-migration-example" rel="noopener noreferrer"&gt;https://github.com/hankchiutw/ng1-migration-example&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How it works in a nutshell
&lt;/h2&gt;

&lt;p&gt;The key is to bootstrap the AngularJS app manually with &lt;code&gt;angular.bootstrap&lt;/code&gt; instead of declaring &lt;code&gt;ng-app&lt;/code&gt; in the template.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Wrap main AngularJS code in controllers and directives
&lt;/h3&gt;

&lt;p&gt;Your Angular project will render the AngularJS directive. Here is a minimal example of AngularJS code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ng1Module&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;angular&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;module&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myApp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;
&lt;span class="nx"&gt;angular&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;module&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myApp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;directive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myAppRoot&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MainController&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`my AngularJS app`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: You can set the &lt;code&gt;loader&lt;/code&gt; option in &lt;code&gt;angular.json&lt;/code&gt; to keep the template in a separate HTML file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;angular.json&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"builder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@angular-devkit/build-angular:application"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"loader"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;".html"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./ng1-app.html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;angular&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;module&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myApp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;directive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myAppRoot&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MainController&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Bootstrap AngularJS app in Angular project
&lt;/h3&gt;

&lt;p&gt;A reasonable place to bootstrap the AngularJS app is in the &lt;code&gt;ngOnInit&lt;/code&gt; hook of an Angular component. The key is to ensure the AngularJS module is loaded before bootstrapping it. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ng1Module&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./ng1-app/ng1-app&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;ngOnInit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;angular&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bootstrap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;elRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nativeElement&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ng1Module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your AngularJS bundle is large, you can use &lt;code&gt;import()&lt;/code&gt; to load the AngularJS code dynamically. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;ngOnInit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./ng1-app/ng1-app&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(({&lt;/span&gt; &lt;span class="nx"&gt;ng1Module&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;angular&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bootstrap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;elRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nativeElement&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ng1Module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Lazy-loading gives you the flexibility to choose among different migration strategies. You can migrate the whole AngularJS app to Angular 2+ at once, or migrate it feature by feature. Either way, the project stays maintainable and testable throughout the migration.&lt;/p&gt;




&lt;p&gt;In practice, your AngularJS app may have more complex logic and dependencies, so you may need to adjust the code to fit your project. The basic idea, however, stays the same. I hope this post helps you migrate your AngularJS project to Angular 2+ more smoothly.&lt;br&gt;
If you have any specific use cases you want to discuss (e.g. using AngularJS and Angular routing together), feel free to leave a comment below. I am happy to discuss them with you. Thank you for reading!&lt;/p&gt;

</description>
      <category>angular</category>
      <category>webdev</category>
      <category>refactoring</category>
    </item>
    <item>
      <title>Copy as Link: A simple Chrome extension that copies the selected text as a link to the current page's URL</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Mon, 04 Nov 2024 15:31:08 +0000</pubDate>
      <link>https://dev.to/hankchiutw/copy-as-link-a-simple-chrome-extension-that-copies-the-selected-text-as-a-link-to-the-current-pages-url-1kbm</link>
      <guid>https://dev.to/hankchiutw/copy-as-link-a-simple-chrome-extension-that-copies-the-selected-text-as-a-link-to-the-current-pages-url-1kbm</guid>
      <description>&lt;p&gt;Show empathy for your readers(or yourself) by always sharing a link with a readable title, instead of a plain  &lt;code&gt;https://...&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That's why I created &lt;a href="https://copy-as-link.vercel.app/" rel="noopener noreferrer"&gt;Copy as Link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whether you're:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collecting research materials&lt;/li&gt;
&lt;li&gt;Sharing interesting quotes with colleagues&lt;/li&gt;
&lt;li&gt;Building a knowledge base&lt;/li&gt;
&lt;li&gt;Creating documentation with references&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This extension seamlessly integrates into your browsing experience, making link creation as natural as highlighting text.&lt;/p&gt;

&lt;p&gt;The implementation is straightforward, built with Plasmo and React. Feel free to check out the &lt;a href="https://github.com/hankchiutw/copy-as-link" rel="noopener noreferrer"&gt;source code&lt;/a&gt; if you're interested.&lt;/p&gt;
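
&lt;p&gt;The core idea fits in a few lines. Here is a minimal sketch (&lt;code&gt;toMarkdownLink&lt;/code&gt; is an illustrative helper of mine, not the extension's actual Plasmo/React code):&lt;br&gt;&lt;/p&gt;

```javascript
// Sketch of the core idea: turn the selected text into a Markdown
// link that points at the current page's URL.
// toMarkdownLink is an illustrative name, not the extension's real API.
function toMarkdownLink(text, url) {
  return '[' + text.trim() + '](' + url + ')';
}

// In a content script, the inputs would come from the page itself:
//   toMarkdownLink(window.getSelection().toString(), location.href)
```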

</description>
      <category>webdev</category>
      <category>extensions</category>
    </item>
    <item>
      <title>You may not need ngOnChanges</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Tue, 19 Jan 2021 00:29:52 +0000</pubDate>
      <link>https://dev.to/hankchiutw/you-may-not-need-ngonchanges-40h6</link>
      <guid>https://dev.to/hankchiutw/you-may-not-need-ngonchanges-40h6</guid>
      <description>&lt;p&gt;"ngOnChanges" is a lifecycle hook for an Angular component to know when the @Input props are changed. The main drawback of using ngOnChanges is that you have to write much more code to watch a single prop.&lt;/p&gt;

&lt;p&gt;The Angular team also provides another way to &lt;a href="https://angular.io/guide/component-interaction#intercept-input-property-changes-with-a-setter" rel="noopener noreferrer"&gt;intercept property changes with a setter&lt;/a&gt;. If you use the setter technique naively, you will find it tedious to write the getter/setter pair and the redundant private variable.&lt;/p&gt;

&lt;p&gt;In this article, I would like to share how I turned the setter technique into an npm module - &lt;a href="https://www.npmjs.com/package/subjectize" rel="noopener noreferrer"&gt;subjectize&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Usage
&lt;/h4&gt;

&lt;p&gt;Say we are building a counter component and would like to do something whenever the count changes. We could implement it in three ways, excerpted below:&lt;/p&gt;

&lt;p&gt;1) By ngOnChanges&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CounterComponent&lt;/span&gt; &lt;span class="kr"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;OnChanges&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Input&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;ngOnChanges&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;changes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SimpleChanges&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;changes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// do something&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) By naive setter&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CounterComponent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Input&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nf"&gt;count&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_count&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="nf"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;// do something&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;_count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) By Subjectize&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CounterComponent&lt;/span&gt; &lt;span class="kr"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;OnInit&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Input&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Subjectize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;count&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;count$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReplaySubject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nf"&gt;ngOnInit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// do something&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They may look comparable in such a simple scenario, but things change when you have several Input props to watch. With ngOnChanges, you end up with lots of if branches. With the naive setter, you end up with many boilerplate private variables.&lt;/p&gt;

&lt;p&gt;Subjectize is also the most direct approach mentally: declare an RxJS Subject, subscribe to it for changes, and that's it.&lt;/p&gt;

&lt;h4&gt;
  
  
  The magic
&lt;/h4&gt;

&lt;p&gt;Subjectize is a TypeScript property decorator. Under the hood, it creates an internal getter/setter for the specified Input prop, just like the naive setter implementation. Subjectize itself depends only on RxJS, so you can use it on any ES6 class without Angular. You could even use it for simple state management.&lt;/p&gt;
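
&lt;p&gt;A minimal sketch of that mechanism, assuming a hypothetical &lt;code&gt;subjectizeProp&lt;/code&gt; helper and a tiny inline stand-in for an RxJS Subject (this is not the library's actual source):&lt;br&gt;&lt;/p&gt;

```javascript
// Simplified sketch of the mechanism: replace the property with a
// getter/setter pair so every assignment also notifies subscribers,
// similar to what the decorator generates. subjectizeProp and the
// inline "subject" are illustrative, not the library's API.
function subjectizeProp(target, key, subject) {
  let value;
  Object.defineProperty(target, key, {
    get() { return value; },
    set(next) {
      value = next;
      subject.next(next); // the real decorator pushes into an RxJS Subject
    },
  });
}

// A tiny stand-in for an RxJS Subject, just for illustration:
const subject = {
  listeners: [],
  next(v) { this.listeners.forEach((l) => l(v)); },
  subscribe(l) { this.listeners.push(l); },
};

const component = {};
subjectizeProp(component, 'count', subject);
subject.subscribe((v) => console.log('count changed to', v));
component.count = 42; // logs: count changed to 42
```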

&lt;p&gt;Needless to say, there is more going on under the hood to keep things reliable. If you are interested, see the &lt;a href="https://github.com/hankchiutw/monorepo/tree/main/libs/subjectize" rel="noopener noreferrer"&gt;source code&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;A JavaScript getter/setter can be used to watch Input props, and &lt;a href="https://www.npmjs.com/package/subjectize" rel="noopener noreferrer"&gt;subjectize&lt;/a&gt; helps you do exactly that. If you are fed up with ngOnChanges, give &lt;a href="https://www.npmjs.com/package/subjectize" rel="noopener noreferrer"&gt;subjectize&lt;/a&gt; a try!&lt;/p&gt;

</description>
      <category>angular</category>
      <category>rxjs</category>
      <category>typescript</category>
      <category>decorator</category>
    </item>
    <item>
      <title>How do I create my first Chrome extension using TypeScript - PART 1</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Mon, 20 Jul 2020 22:36:51 +0000</pubDate>
      <link>https://dev.to/hankchiutw/how-do-i-create-my-first-chrome-extension-using-typescript-part-1-3pbf</link>
      <guid>https://dev.to/hankchiutw/how-do-i-create-my-first-chrome-extension-using-typescript-part-1-3pbf</guid>
      <description>&lt;p&gt;In this post, I would like to share my journey of creating &lt;a href="https://any-color.vercel.app" rel="noopener noreferrer"&gt;AnyColor&lt;/a&gt; - A Chrome extension that makes you pick up any pixel color from a web page.&lt;/p&gt;

&lt;h4&gt;
  
  
  Origin
&lt;/h4&gt;

&lt;p&gt;To be honest, there are already many color picker/eyedropper extensions on the Chrome Web Store. Why did I create another one?&lt;br&gt;
It's simple: none of them meets my one simple requirement - the ability to pick any color quickly while browsing.&lt;br&gt;
To name a few extensions I have tried:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://chrome.google.com/webstore/detail/eye-dropper/hmdcmlfkchdmnmnmheododdhjedfccka/" rel="noopener noreferrer"&gt;Eye Dropper&lt;/a&gt;
Almost what I need, but somewhat slow, and I don't need the color palette. The UX design could be better.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://chrome.google.com/webstore/detail/colorpick-eyedropper/ohcpnigalekghcmgcdcenkpelffpdolg" rel="noopener noreferrer"&gt;ColorPick Eyedropper&lt;/a&gt;
Similar to the above, not so quick to invoke the picker.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://chrome.google.com/webstore/detail/colorzilla/bhlhnicpbhignbdhedgjhgdocnmhomnp" rel="noopener noreferrer"&gt;ColorZilla&lt;/a&gt;
This is a color picker for DOM elements. No way to pick colors from images on a web page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although these extensions have thousands of users, they are just not what I want.&lt;/p&gt;

&lt;h4&gt;
  
  
  Idea
&lt;/h4&gt;

&lt;p&gt;I realized the main problems with these extensions are UX design and performance.&lt;br&gt;
As a front-end developer, the &lt;a href="https://developers.google.com/web/tools/chrome-devtools/css/reference#eyedropper" rel="noopener noreferrer"&gt;eyedropper from Chrome DevTools&lt;/a&gt; came to mind. The neat UX design is already there; all I had to do was implement it as a Chrome extension, instead of opening DevTools on every web page.&lt;/p&gt;

&lt;h4&gt;
  
  
  Implementation
&lt;/h4&gt;

&lt;p&gt;This is my first time developing a Chrome extension. The official developer guide is clear, but the API is not so modern compared to other Google products. In short, I implemented it using TypeScript, Web Components, and HTML5 Canvas.&lt;br&gt;
I have decided to leave the technical details for another post. The source code is public at &lt;a href="https://github.com/hankchiutw/any-color" rel="noopener noreferrer"&gt;hankchiutw/any-color&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Publishing tools for developers is fun and rewarding. Give it a try (&lt;a href="https://any-color.vercel.app" rel="noopener noreferrer"&gt;AnyColor&lt;/a&gt;); any feedback is appreciated!&lt;br&gt;
I also wonder: how do you use color picker/eyedropper extensions in your workflow? Share below!&lt;/p&gt;

</description>
      <category>webdev</category>
    </item>
    <item>
      <title>A simple technique to promisify Chrome extension API</title>
      <dc:creator>Hank Chiu</dc:creator>
      <pubDate>Sat, 18 Jul 2020 11:23:40 +0000</pubDate>
      <link>https://dev.to/hankchiutw/a-simple-technique-to-promisify-chrome-extension-api-1e0c</link>
      <guid>https://dev.to/hankchiutw/a-simple-technique-to-promisify-chrome-extension-api-1e0c</guid>
      <description>&lt;p&gt;&lt;em&gt;Update:&lt;/em&gt;&lt;br&gt;
Now you can use &lt;code&gt;toPromise&lt;/code&gt; from &lt;a href="https://github.com/hankchiutw/crx-esm#topromise" rel="noopener noreferrer"&gt;crx-esm&lt;/a&gt;!&lt;/p&gt;



&lt;p&gt;One of my pain points when developing a Chrome extension is dealing with the &lt;a href="https://developer.chrome.com/extensions/devguide" rel="noopener noreferrer"&gt;callback-based APIs&lt;/a&gt;. There are polyfills that promisify all of them (e.g. &lt;a href="https://github.com/mozilla/webextension-polyfill#using-the-promise-based-apis" rel="noopener noreferrer"&gt;webextension-polyfill&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;If you just want a lightweight solution, here is one.&lt;/p&gt;

&lt;p&gt;The simple trick is to take advantage of the fact that the callback function is always the last argument, so you can create a small helper to promisify a Chrome API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;toPromise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;api&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and use it like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;toPromise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chrome&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tabs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)({}).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(...);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This just works for me most of the time.&lt;/p&gt;
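
&lt;p&gt;One caveat: the helper above resolves even when the API call fails. A variant that rejects when &lt;code&gt;chrome.runtime.lastError&lt;/code&gt; is set (the name &lt;code&gt;toPromiseSafe&lt;/code&gt; is mine, not from any library) could look like this:&lt;br&gt;&lt;/p&gt;

```javascript
// Sketch of an error-aware variant: Chrome extension APIs report
// failures by setting chrome.runtime.lastError inside the callback,
// so we check it there and reject instead of resolving silently.
// toPromiseSafe is an illustrative name, not a library function.
function toPromiseSafe(api) {
  return (...args) => {
    return new Promise((resolve, reject) => {
      api(...args, (result) => {
        if (chrome.runtime.lastError) {
          reject(new Error(chrome.runtime.lastError.message));
        } else {
          resolve(result);
        }
      });
    });
  };
}

// Usage is the same as before:
//   toPromiseSafe(chrome.tabs.query)({}).then(...).catch(...);
```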

</description>
      <category>chrome</category>
      <category>extensions</category>
      <category>promise</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
