<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Studio1</title>
    <description>The latest articles on DEV Community by Studio1 (@studio1hq).</description>
    <link>https://dev.to/studio1hq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F9405%2Ff91309c4-f670-4501-9882-79e1e70e2e96.png</url>
      <title>DEV Community: Studio1</title>
      <link>https://dev.to/studio1hq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/studio1hq"/>
    <language>en</language>
    <item>
      <title>Production-Aware AI: Giving LLMs Real Debugging Context</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Thu, 09 Apr 2026 05:22:32 +0000</pubDate>
      <link>https://dev.to/studio1hq/production-aware-ai-giving-llms-real-debugging-context-187g</link>
      <guid>https://dev.to/studio1hq/production-aware-ai-giving-llms-real-debugging-context-187g</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Large language models struggle with production debugging because they do not have visibility into how code actually executes at runtime.&lt;/li&gt;
&lt;li&gt;Inputs such as logs, stack traces, and metrics provide incomplete signals, which often cause confident but incorrect conclusions about root causes.&lt;/li&gt;
&lt;li&gt;When AI reasoning is grounded in function-level runtime data collected from production systems, debugging becomes accurate, explainable, and reliable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Large language models are increasingly used by developers to understand code, analyze failures, and assist during incident response. In controlled environments, they are effective at explaining logic and suggesting fixes. In production systems, however, their usefulness often drops sharply.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://lokalise.com/blog/blog-the-developer-delay-report/" rel="noopener noreferrer"&gt;recent survey of developers&lt;/a&gt; found that a quarter of respondents spend more time debugging than writing code each week. The same survey reported that bugs and tooling failures cost teams nearly 20 working days per year in lost productivity. These numbers reflect a reality most engineering teams already experience.&lt;/p&gt;

&lt;p&gt;Production debugging takes time because failures depend on runtime factors such as traffic patterns, concurrency, queue depth, and system state that are absent in non-production environments. Most AI systems do not observe these execution conditions. They analyze code structure and reported symptoms, rather than the runtime behavior that caused the failure.&lt;/p&gt;

&lt;p&gt;In this article, we will discuss why production context is critical for AI debugging, what production-aware AI really means, and how runtime intelligence enables more accurate and trustworthy debugging outcomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Production Issues Cannot Be Understood from Code Alone
&lt;/h2&gt;

&lt;p&gt;Code defines control flow and data handling, but production behavior is determined by runtime conditions such as traffic volume, concurrency, and system state.&lt;/p&gt;

&lt;p&gt;In production, requests arrive concurrently and compete for shared resources. As traffic increases, queues begin to accumulate work, caches evolve, and external dependencies respond with variable latency or partial failures. Together, these factors influence execution order, timing, and resource contention in ways that are not visible when reading code or running isolated tests.&lt;/p&gt;

&lt;p&gt;Many production failures arise only when specific runtime conditions are met. Race conditions appear under concurrent access. Performance regressions surface under sustained or uneven load. Retry mechanisms can magnify transient upstream failures into system-wide impact. In each case, the logic itself may be correct, while the observed failure is a result of how that logic behaves under real execution pressure.&lt;/p&gt;
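&lt;p&gt;A minimal Python sketch (not from any real system) makes this concrete: the increment logic below is correct in isolation, yet loses updates as soon as threads interleave:&lt;/p&gt;

```python
import threading
import time

counter = 0  # shared state, updated without a lock

def unsafe_increment(iterations):
    """Read-modify-write with a deliberate gap, so interleaving loses updates."""
    global counter
    for _ in range(iterations):
        current = counter        # read
        time.sleep(0.0005)       # widen the window between read and write
        counter = current + 1    # write back a possibly stale value

threads = [threading.Thread(target=unsafe_increment, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

expected = 4 * 50
print(f"expected {expected}, got {counter}")  # got is lower: updates were lost
```

&lt;p&gt;Read in isolation, every line of this function looks fine; only the concurrent execution schedule produces the bug, which is exactly the information an AI reviewing the source never sees.&lt;/p&gt;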

&lt;p&gt;This leads to a common outcome during incident response. The code appears correct because the failure is not caused by a logical error. The root cause exists in how the code executes under real production conditions, not in how it reads in isolation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47gqpvmdldj288p0zzox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47gqpvmdldj288p0zzox.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How LLMs Debug Today: Strengths and Structural Limits
&lt;/h2&gt;

&lt;p&gt;Large language models assist debugging by analyzing text. They infer intent, recognize common patterns, and map symptoms to known classes of problems. This makes them effective for code review, error explanation, and reasoning about familiar failure modes.&lt;/p&gt;

&lt;p&gt;However, their understanding is entirely constrained by the inputs they receive. Without access to runtime execution data, their conclusions are based on probability rather than evidence.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;What LLMs Do Well&lt;/th&gt;
&lt;th&gt;Structural Limitation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Code understanding&lt;/td&gt;
&lt;td&gt;Explain logic, control flow, and common anti-patterns&lt;/td&gt;
&lt;td&gt;Cannot observe how code executes under real load&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Input analysis&lt;/td&gt;
&lt;td&gt;Reason over logs, stack traces, and snippets&lt;/td&gt;
&lt;td&gt;Inputs represent symptoms, not full execution context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pattern matching&lt;/td&gt;
&lt;td&gt;Identify known bug patterns and typical fixes&lt;/td&gt;
&lt;td&gt;Fails when failures are novel or environment specific&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Root cause analysis&lt;/td&gt;
&lt;td&gt;Propose plausible explanations&lt;/td&gt;
&lt;td&gt;Cannot validate causality without runtime signals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decision making&lt;/td&gt;
&lt;td&gt;Rank likely fixes based on training data&lt;/td&gt;
&lt;td&gt;Relies on probabilistic inference when facts are missing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Without visibility into execution order, timing, frequency, and state, LLMs are forced to guess. The results may sound correct, but they are not grounded in how the system actually behaved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hallucinations Are Caused by Missing Runtime Evidence
&lt;/h2&gt;

&lt;p&gt;Hallucinations in AI-assisted debugging usually appear when the system does not have enough information about what actually happened during execution. This is common in production, where AI is asked to explain failures using logs, stack traces, or small pieces of code that describe symptoms but not runtime behavior.&lt;/p&gt;

&lt;p&gt;Recent research on AI reliability shows that incorrect answers increase when important contextual details are missing. In debugging scenarios, these details include execution order, timing, system state, and how frequently specific code paths were executed. Without this information, AI systems infer causes based on likelihood rather than evidence.&lt;/p&gt;

&lt;p&gt;The same pattern appears in &lt;a href="https://arxiv.org/pdf/2505.04441" rel="noopener noreferrer"&gt;studies on AI-driven debugging and code repair&lt;/a&gt;. When models are given execution traces or feedback from real runs, fault localization and fix accuracy improve. When this runtime information is absent, models often produce explanations and fixes that appear reasonable but fail to address the real cause of the issue.&lt;/p&gt;

&lt;p&gt;Prompt refinement does not address this limitation. Clearer prompts help structure responses, but they do not introduce new facts. If execution data is missing, the model still reasons without evidence about how the system behaved.&lt;/p&gt;

&lt;p&gt;In production debugging, hallucinations are therefore expected. They occur when AI systems are asked to explain failures they cannot observe, not because the reasoning process is flawed, but because the necessary runtime evidence is absent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Missing Context in AI Debugging Workflows
&lt;/h2&gt;

&lt;p&gt;Most AI debugging workflows rely on the same signals engineers have used for years. These signals are useful, but they describe outcomes, not execution, which creates a gap between what failed and why it failed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What AI usually receives today&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Logs:&lt;/strong&gt; Logs capture messages emitted by code paths that were explicitly instrumented. They are selective, often incomplete, and rarely reflect execution order, frequency, or timing across concurrent requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stack traces:&lt;/strong&gt; Stack traces show where an error surfaced, not how the system reached that state. They lack information about prior execution paths, state changes, and interactions with other components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics:&lt;/strong&gt; Metrics summarize system behavior at an aggregate level. They indicate that something is slow or failing, but they do not identify which functions caused the issue or how behavior changed over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is missing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Function-level execution behavior:&lt;/strong&gt; Which functions ran, how often they executed, and how long they took under real load conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime performance characteristics:&lt;/strong&gt; Execution timing, concurrency effects, retries, and resource contention that emerge only during live operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection between user impact and code:&lt;/strong&gt; Clear linkage between affected endpoints or workflows and the exact functions responsible for the observed behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When AI reasons over incomplete signals, it cannot establish causality. Proposed fixes are derived from statistical patterns rather than observed execution, which often results in changes that compile or deploy successfully but do not resolve the underlying issue. Effective debugging requires visibility into execution behavior, not only error reports or surface-level symptoms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh59kapx7jr4l0k42ond.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh59kapx7jr4l0k42ond.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining Production-Aware AI
&lt;/h2&gt;

&lt;p&gt;Consider a common production incident. An API endpoint becomes slow after a deployment. Logs show no errors. Metrics show increased latency. The code itself looks unchanged or correct. An AI system reviewing this information can suggest several possible causes, such as a database query, a cache miss, or an external dependency. Each suggestion sounds reasonable, but none is confirmed.&lt;/p&gt;

&lt;p&gt;This is where production awareness matters. A production-aware AI does not rely only on aggregated metrics or isolated log lines. It reasons using information about how the system actually executed under real traffic. It can see which functions ran more often than before, where execution time increased, and which code paths were exercised during the slowdown.&lt;/p&gt;

&lt;p&gt;Production-aware AI is defined by the context it uses. It grounds reasoning in runtime behavior rather than static structure. It focuses on how functions are executed, how often they ran, and how their performance changes over time, instead of relying only on what the code looks like or what developers expect it to do.&lt;/p&gt;

&lt;p&gt;This approach changes the quality of debugging. Instead of proposing likely explanations, the AI reasons from observed execution evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Function-Level Runtime Intelligence Changes AI Debugging
&lt;/h2&gt;

&lt;p&gt;Function-level runtime intelligence gives AI direct visibility into how software behaves while it is running. This visibility changes debugging from interpreting symptoms to analyzing execution.&lt;/p&gt;

&lt;p&gt;Instead of inferring behavior from secondary signals, AI can reason using execution facts collected in real time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Function-level data as the missing signal:&lt;/strong&gt; Function-level data shows which functions executed, how frequently they ran, and how long they took under real load. This information allows AI to identify abnormal behavior at the exact point where performance or correctness changed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linking endpoints to execution paths:&lt;/strong&gt; Runtime intelligence connects external symptoms to internal execution. When an HTTP endpoint slows down, or a queue backs up, AI can trace the issue to the specific functions involved, rather than reasoning only at the service or request level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal awareness across deployments:&lt;/strong&gt; By comparing runtime behavior before and after a deployment, AI can identify which functions changed execution characteristics. This makes regressions visible without relying on alerts or manual comparison.&lt;/li&gt;
&lt;/ul&gt;
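&lt;p&gt;The comparison in the last bullet can be sketched in a few lines of Python. The function names and timings below are invented for illustration; real runtime intelligence collects these statistics automatically:&lt;/p&gt;

```python
# Sketch: flag regressions by comparing per-function stats across two
# deployments. All names and numbers here are hypothetical.

baseline = {  # average latency in ms and call count before the release
    "parse_request":   {"avg_ms": 1.2,  "calls": 90_000},
    "load_profile":    {"avg_ms": 8.5,  "calls": 88_000},
    "render_response": {"avg_ms": 2.1,  "calls": 90_000},
}
current = {   # the same functions after the release
    "parse_request":   {"avg_ms": 1.3,  "calls": 91_000},
    "load_profile":    {"avg_ms": 41.7, "calls": 88_500},
    "render_response": {"avg_ms": 2.2,  "calls": 91_000},
}

def find_regressions(before, after, threshold=2.0):
    """Return functions whose average latency grew by `threshold`x or more."""
    flagged = []
    for name, stats in after.items():
        base = before.get(name)
        if base and stats["avg_ms"] >= base["avg_ms"] * threshold:
            flagged.append((name, base["avg_ms"], stats["avg_ms"]))
    return flagged

for name, before_ms, after_ms in find_regressions(baseline, current):
    print(f"{name}: {before_ms} ms before, {after_ms} ms after")
```

&lt;p&gt;Given this kind of evidence, an AI can point at &lt;code&gt;load_profile&lt;/code&gt; directly instead of guessing among every function the changed code touches.&lt;/p&gt;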

&lt;h2&gt;
  
  
  How Hud Enables Production-Aware AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1layxsapduf33orzdqxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1layxsapduf33orzdqxh.png" alt="Image3" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hud.io/" rel="noopener noreferrer"&gt;Hud&lt;/a&gt; captures function-level execution behavior directly from production systems. Instead of relying on aggregated metrics, sampled traces, or predefined alert rules, it observes how individual functions execute under real traffic, including errors and performance changes. &lt;/p&gt;

&lt;p&gt;This execution data can be consumed directly by engineers and AI systems to reason about production behavior based on observed runtime evidence.&lt;/p&gt;

&lt;p&gt;Below are the core capabilities that allow Hud to provide production-aware runtime context for AI-assisted debugging.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runtime code sensing at the function level:&lt;/strong&gt; &lt;a href="https://docs.hud.io/docs/installation-guide" rel="noopener noreferrer"&gt;Hud acts as a runtime code sensor&lt;/a&gt;. You get continuous function-level execution data from production, without manual instrumentation or ongoing maintenance. This data reflects how code actually runs under real traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic detection of errors and slowdowns:&lt;/strong&gt; Hud automatically detects errors and performance degradations based on changes in runtime behavior, not static rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linking user impact to code:&lt;/strong&gt; When an endpoint slows down, or a queue backs up, Hud connects that business-level symptom directly to the functions responsible. You can see which parts of the code caused the impact, not just where it surfaced.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-deployment behavior comparison:&lt;/strong&gt; Hud automatically detects deployments and compares function behavior across versions. You can see what changed in production after a release and identify regressions without manual diffing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime context for AI debugging:&lt;/strong&gt; Hud provides a full forensic runtime context that you can use inside the IDE or pass to &lt;a href="https://docs.hud.io/docs/hud-mcp-server" rel="noopener noreferrer"&gt;AI agents through its MCP server&lt;/a&gt;. This allows AI to reason from execution evidence instead of guessing from partial signals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/JoOhI6QF6Zs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Without visibility into how code actually ran in production, AI systems reason over symptoms instead of causes, which leads to incorrect or incomplete fixes. Production systems demand runtime-grounded reasoning, where function-level behavior, execution timing, and real traffic conditions are first-class inputs.&lt;/p&gt;

&lt;p&gt;When AI is given this level of visibility, hallucination decreases, and confidence aligns with correctness. Production-aware AI is therefore not an optimization, but a requirement for reliable debugging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.hud.io/docs/what-you-can-do-with-hud" rel="noopener noreferrer"&gt;Hud&lt;/a&gt; gives you function-level runtime visibility directly from production, with no configuration and no maintenance. Explore &lt;a href="https://www.hud.io/" rel="noopener noreferrer"&gt;how Hud works&lt;/a&gt;, &lt;a href="https://docs.hud.io/" rel="noopener noreferrer"&gt;read the documentation&lt;/a&gt;, or &lt;a href="https://www.hud.io/book-a-demo/" rel="noopener noreferrer"&gt;book a demo&lt;/a&gt; to see how production-aware debugging changes the way you and your AI systems understand failures.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>mcp</category>
      <category>llm</category>
    </item>
    <item>
      <title>Build a Semantic Movie Discovery App with Claude Code and Weaviate Agent Skills</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Fri, 27 Mar 2026 20:45:45 +0000</pubDate>
      <link>https://dev.to/studio1hq/build-a-semantic-movie-discovery-app-with-claude-code-and-weaviate-agent-skills-30gd</link>
      <guid>https://dev.to/studio1hq/build-a-semantic-movie-discovery-app-with-claude-code-and-weaviate-agent-skills-30gd</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Agentic coding is becoming more versatile as tools such as Model Context Protocol (MCP) servers and Agent Skills become more common. At the same time, many developers building AI applications ask the same question: should they use MCP servers or Agent Skills? The answer starts with understanding what each approach does well and choosing the one that fits your use case.&lt;/p&gt;

&lt;p&gt;In this post, we’ll explain what MCP servers and Agent Skills are and how they differ, including architecture diagrams and technical details. In the later sections, we’ll also walk through how to use &lt;a href="https://github.com/weaviate/agent-skills" rel="noopener noreferrer"&gt;Weaviate Agent Skills&lt;/a&gt; with &lt;a href="https://code.claude.com/docs/en/overview" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; to build a “Semantic Movie Discovery” application with several useful features.&lt;/p&gt;

&lt;p&gt;Let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding MCP
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; (MCP) is an open standard introduced by Anthropic that enables Large Language Models (LLMs) to interact with external systems such as data sources, APIs and services. MCP provides a structured way for an &lt;a href="https://weaviate.io/agentic-ai" rel="noopener noreferrer"&gt;AI agent&lt;/a&gt; to connect to compliant tools through a single interface instead of requiring custom integrations for each service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqfus3ya7jofj8kchzml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqfus3ya7jofj8kchzml.png" alt="MCP Architecture " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Architecture
&lt;/h3&gt;

&lt;p&gt;The MCP system operates on a client–server model and consists of three main components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host:&lt;/strong&gt; the application that runs the AI model and provides the environment where the agent operates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client:&lt;/strong&gt; the protocol connector inside the host that handles communication between the model and MCP servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server:&lt;/strong&gt; an external service that exposes tools, resources, or prompts that the agent can access.&lt;/li&gt;
&lt;/ul&gt;
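&lt;p&gt;Under the hood, client and server exchange JSON-RPC 2.0 messages. Here is a minimal sketch of the &lt;code&gt;tools/list&lt;/code&gt; round trip; the tool shown is illustrative, not from a real server:&lt;/p&gt;

```python
import json

# The client asks a server which tools it exposes (MCP uses JSON-RPC 2.0).
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server's reply declares each tool's name, description, and input schema.
# "search_movies" is a hypothetical tool used only for illustration.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_movies",
                "description": "Semantic search over a movie collection",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# The host can now offer these tools to the model.
for tool in response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```

&lt;p&gt;Because every server describes its tools in this same shape, the host only needs to implement the protocol once to work with any of them.&lt;/p&gt;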

&lt;h3&gt;
  
  
  MCP and Agentic Coding
&lt;/h3&gt;

&lt;p&gt;Before MCP, each AI tool required custom integrations for every external service it wanted to connect to. MCP simplifies this process by introducing a shared protocol that multiple agents and tools can use.&lt;/p&gt;

&lt;p&gt;Developers can now expose capabilities through an MCP server once and allow any compatible agent to access them without building separate integrations for each system.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding Agent Skills&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt;, also introduced by Anthropic, give developers a simple way to extend AI coding agents without running MCP servers. An Agent Skill is a structured configuration file, usually a Markdown file with YAML metadata, that defines capabilities, parameter schemas, and natural-language instructions describing how the agent should use those capabilities.&lt;/p&gt;

&lt;p&gt;AI tools such as Claude Code read these files at session start and load the skills directly into the agent's working context without requiring an additional runtime.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn13awyixqnmfnllmjlld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn13awyixqnmfnllmjlld.png" alt="Agent Skills with an AI tool (Claude Code)" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How Agent Skills Work
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When Claude Code detects a skill file in the project directory (typically under &lt;code&gt;.claude/skills/&lt;/code&gt;), it loads the manifest into the agent's context at the beginning of the session.&lt;/li&gt;
&lt;li&gt;The skill definition describes available capabilities, how to invoke them correctly and when to prefer one approach over another. Because the instructions are written in natural language alongside parameter schemas, the agent can reason about how to use the skill.&lt;/li&gt;
&lt;li&gt;Skills are portable across repositories. If a developer commits a skill file to a repository, any collaborator who clones the project and opens it in Claude Code automatically gains access to the same capabilities without additional setup.&lt;/li&gt;
&lt;/ul&gt;
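&lt;p&gt;As a rough sketch, a skill file is a Markdown document with YAML frontmatter. The skill below is hypothetical and only illustrates the general shape of a &lt;code&gt;SKILL.md&lt;/code&gt; under &lt;code&gt;.claude/skills/&lt;/code&gt;:&lt;/p&gt;

```markdown
---
name: movie-search
description: Query the movie collection in Weaviate using semantic search.
---

# Movie search

Use this skill when the user asks to find movies by theme or plot.

1. Connect to the cluster using the credentials in `.env`.
2. Prefer semantic (near-text) search; fall back to hybrid search
   when the user supplies exact titles or keywords.
3. Return at most 10 results with title, year, and overview.
```

&lt;p&gt;The frontmatter tells the agent when the skill applies, and the body carries the natural-language instructions it follows while performing the task.&lt;/p&gt;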

&lt;p&gt;MCP and Agent Skills solve different problems in agent systems. MCP provides a standardized way for AI agents to connect to external tools, APIs, databases and services through a client–server architecture with structured schemas. Agent Skills extend the agent’s capabilities through configuration files that define workflows, instructions and parameter schemas without requiring a running server.&lt;/p&gt;

&lt;p&gt;In simple terms, &lt;strong&gt;MCP enables agents to access external systems, while Agent Skills define how agents perform tasks or workflows within their environment.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Weaviate Agent Skills
&lt;/h2&gt;

&lt;p&gt;Weaviate has released an official set of &lt;a href="https://github.com/weaviate/agent-skills" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt; designed for use with Claude Code and other compatible agent-based development environments like Cursor, Antigravity, Windsurf and more. These skills provide structured access to Weaviate vector databases, allowing agents to perform common operations such as search, querying, schema inspection, data exploration and collection management.&lt;/p&gt;

&lt;p&gt;The repository includes ready-to-use skill definitions for tasks like semantic, hybrid, and keyword search, along with natural-language querying through the Query Agent. It also supports workflows such as creating collections, importing data, and fetching filtered results, and ships cookbooks for complete applications. This enables agents to build with Weaviate and perform multi-step retrieval and agentic tasks more effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgiqyrgy3vpbq0xxz5ej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgiqyrgy3vpbq0xxz5ej.png" alt="Weaviate Ecosystem Tools and Features" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent Skills and Vector Databases
&lt;/h2&gt;

&lt;p&gt;AI coding agents face difficulties when working with vector databases. Vector database APIs provide extensive capabilities, including basic “key–value” retrieval, single-vector near-text searches, multimodal near-image searches, hybrid BM25-plus-vector search, generative modules and multi-tenant system support. Without structured guidance, even a capable coding agent may produce suboptimal queries: correct syntax but the wrong search strategy, missing parameters or failure to use powerful features like the Weaviate Query Agent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://weaviate.io/blog/weaviate-agent-skills" rel="noopener noreferrer"&gt;Weaviate Agent Skills&lt;/a&gt; address this by providing correct usage patterns, parameter recommendations and decision logic, enabling coding agents to generate production-ready code from their initial attempts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Weaviate Agent Skills repository is organized into two main parts&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facdcuqk3n68wemqdz6hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facdcuqk3n68wemqdz6hj.png" alt="Overview of Weaviate Agent Skills" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Skill&lt;/strong&gt; (skills/weaviate): Focused scripts for tasks such as schema inspection, data ingestion and vector search. Agents use these while writing application logic or backend code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cookbooks Skill&lt;/strong&gt; (skills/weaviate-cookbooks): End-to-end project examples that combine tools such as FastAPI, Next.js and Weaviate to demonstrate full application workflows.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Weaviate Agent Skills work with several development environments, including Claude Code, Cursor, GitHub Copilot, VS Code and Gemini CLI. When connected to a Weaviate Cloud instance, agents can directly interact with database modules and perform search, data management and retrieval tasks.&lt;/p&gt;

&lt;p&gt;To evaluate how effective Weaviate Agent Skills really are, let’s build a small project and see how they accelerate RAG and agentic application development with Claude Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Semantic Movie Discovery Application
&lt;/h2&gt;

&lt;p&gt;We will build a &lt;strong&gt;Movie Discovery App&lt;/strong&gt; that takes a natural-language description and returns the most semantically similar movies from a Weaviate collection. Along the way, we will explore Weaviate capabilities such as multimodal storage, named-vector search, generative AI (RAG), and the Query Agent in action with Claude Code, showing how these agentic tools help you build applications faster.&lt;/p&gt;
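&lt;p&gt;Semantic similarity here means vector similarity. As a toy illustration of what Weaviate does at scale, the sketch below ranks made-up three-dimensional "embeddings" by cosine similarity; a real embedding model produces vectors with hundreds of dimensions:&lt;/p&gt;

```python
import math

# Toy embeddings: the titles and vectors are invented for illustration.
movies = {
    "Space rescue drama": [0.9, 0.1, 0.3],
    "Romantic comedy":    [0.1, 0.9, 0.2],
    "Alien invasion":     [0.6, 0.5, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embedding of the query "astronauts stranded in space".
query_vector = [0.85, 0.15, 0.35]

ranked = sorted(movies, key=lambda t: cosine(movies[t], query_vector), reverse=True)
print(ranked[0])  # the most semantically similar title
```

&lt;p&gt;In the app itself, Weaviate stores the vectors and performs this ranking server-side, so the agent only has to issue the right query.&lt;/p&gt;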

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;Python 3.10&lt;/a&gt; or higher&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.weaviate.io/weaviate/quickstart" rel="noopener noreferrer"&gt;Weaviate Cloud&lt;/a&gt; – Create a free cluster and obtain an API key.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.themoviedb.org/" rel="noopener noreferrer"&gt;TMDB API key&lt;/a&gt; – Used to fetch movie metadata&lt;/li&gt;
&lt;li&gt;OpenAI API key – Required for &lt;a href="https://weaviate.io/rag" rel="noopener noreferrer"&gt;RAG&lt;/a&gt; features.&lt;/li&gt;
&lt;li&gt;Access to &lt;a href="https://code.claude.com/docs/en/quickstart" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/download" rel="noopener noreferrer"&gt;Node.js 18+&lt;/a&gt; and npm – Required to run the Next.js frontend&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Project Setup
&lt;/h3&gt;

&lt;p&gt;Create a &lt;strong&gt;movie-discovery-app&lt;/strong&gt; folder&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="n"&gt;movie&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;discovery&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and activate a  &lt;strong&gt;Python virtual environment&lt;/strong&gt; in the folder&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;movie-discovery-app py &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;source &lt;/span&gt;venv&lt;span class="se"&gt;\S&lt;/span&gt;cripts&lt;span class="se"&gt;\a&lt;/span&gt;ctivate.bat 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Python dependencies&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;weaviate-client&lt;span class="o"&gt;==&lt;/span&gt;4.20.1 fastapi uvicorn[standard] openai weaviate-agents&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;1.3.0 requests python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Node.js dependencies for the frontend&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;frontend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a &lt;code&gt;.env&lt;/code&gt; file at the project root. Add the following parameters to configure &lt;strong&gt;Weaviate Agent Skills with Claude Code&lt;/strong&gt;, along with your &lt;strong&gt;OpenAI API key&lt;/strong&gt; and &lt;strong&gt;TMDB API key&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;WEAVIATE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;without&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;
&lt;span class="n"&gt;WEAVIATE_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;
&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;
&lt;span class="n"&gt;TMDB&lt;/span&gt; &lt;span class="n"&gt;API&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tmdb&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
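Since every later step depends on these four variables, it helps to fail fast when one is missing. A minimal sketch of such a check (the helper name and the check itself are my own addition, not part of the tutorial):

```python
# Hypothetical sanity check (not from the tutorial): report which required
# configuration keys are still missing before starting the backend.
REQUIRED_KEYS = ["WEAVIATE_URL", "WEAVIATE_API_KEY", "OPENAI_API_KEY", "TMDB_API_KEY"]

def missing_env_keys(env):
    """Return the required configuration keys that are absent or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

# A partially filled configuration is reported immediately:
partial = {"WEAVIATE_URL": "example.weaviate.cloud", "WEAVIATE_API_KEY": "abc"}
print(missing_env_keys(partial))
```

Calling `missing_env_keys(dict(os.environ))` after `load_dotenv()` tells you exactly which keys still need values.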



&lt;p&gt;After signing up for Weaviate Cloud, click the &lt;strong&gt;Create Cluster&lt;/strong&gt; button to create a free cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo4cx6bxr7o7xkbqyu1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo4cx6bxr7o7xkbqyu1j.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;“How to Connect”&lt;/strong&gt; to view the required Weaviate connection parameters.&lt;/p&gt;

&lt;p&gt;Now that everything is set up, we can connect Weaviate Cloud with &lt;strong&gt;Claude Code&lt;/strong&gt; by running &lt;code&gt;claude&lt;/code&gt; in your project terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9y0xh1tmthf9gp5hilm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9y0xh1tmthf9gp5hilm.png" alt="Claude Code screnshot" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the following prompt in your Claude terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Write and run &lt;span class="sb"&gt;`check_modules.py`&lt;/span&gt; that connects using &lt;span class="sb"&gt;`weaviate.connect_to_weaviate_cloud`&lt;/span&gt;with &lt;span class="sb"&gt;`skip_init_checks=True`&lt;/span&gt;, loads credentials from &lt;span class="sb"&gt;`.env`&lt;/span&gt; with &lt;span class="sb"&gt;`python-dotenv`&lt;/span&gt;,
and prints the full JSON list of enabled Weaviate modules.
Run it with &lt;span class="sb"&gt;`venv/Scripts/python check_modules.py`&lt;/span&gt;."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
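For reference, the generated script might look roughly like the sketch below. This is an assumption about what Claude Code will produce, not its actual output; `normalize_cluster_host` is a hypothetical helper, included because `connect_to_weaviate_cloud` expects the bare cluster host rather than a full `https://` URL.

```python
import json
import os

def normalize_cluster_host(url):
    """Strip an http(s) scheme so only the bare cluster host remains."""
    for prefix in ("https://", "http://"):
        if url.startswith(prefix):
            return url[len(prefix):]
    return url

def main():
    # Imported lazily so the pure helper above is usable without the SDK.
    import weaviate  # pip install weaviate-client
    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()
    client = weaviate.connect_to_weaviate_cloud(
        cluster_url=normalize_cluster_host(os.environ["WEAVIATE_URL"]),
        auth_credentials=weaviate.auth.AuthApiKey(os.environ["WEAVIATE_API_KEY"]),
        skip_init_checks=True,
    )
    print(json.dumps(client.get_meta().get("modules", {}), indent=2))
    client.close()

# Only attempt the connection when credentials are actually configured.
if __name__ == "__main__" and os.getenv("WEAVIATE_URL"):
    main()
```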



&lt;h3&gt;
  
  
  Step 2: Create A Weaviate Collection and Import Sample Movie Data
&lt;/h3&gt;

&lt;p&gt;In this step, we create a Weaviate collection and import the movie dataset into Weaviate. The dataset contains movie metadata sourced from the TMDB API. Each entry includes: &lt;em&gt;title, overview, release_date, poster_url, popularity, and other important movie fields&lt;/em&gt;. You can import a JSON or CSV dataset directly into Weaviate.&lt;/p&gt;

&lt;p&gt;Run this prompt to retrieve the dataset from the TMDB API and save it to a file named &lt;em&gt;movies.json&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Create a TMDB dataset JSON file, movies.json, that contains 100 movie metadata and poster URLs directly from the TMDB API. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, &lt;a href="https://github.com/weaviate/agent-skills/blob/main/skills/weaviate/references/import_data.md" rel="noopener noreferrer"&gt;Weaviate Import Skills&lt;/a&gt; creates a Weaviate collection and imports the data from &lt;em&gt;movies.json&lt;/em&gt; into the Weaviate database. Claude Code activates the skill to perform this action when prompted with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Import &lt;span class="sb"&gt;`movie.json`&lt;/span&gt; into a new Weaviate collection called Movie
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
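The skill handles the import itself, but it is useful to see what one record looks like on the way in. An illustrative mapping, with field names taken from the TMDB API; the `w500` poster host is assumed to be the usual TMDB image base, and the function itself is my own sketch rather than the skill's code:

```python
# Illustrative transform: one raw TMDB result becomes one Movie object.
POSTER_BASE = "https://image.tmdb.org/t/p/w500"  # assumed TMDB image host

def tmdb_to_movie(record):
    """Map a TMDB API result dict to the fields the Movie collection stores."""
    poster_path = record.get("poster_path")
    return {
        "title": record.get("title", ""),
        "overview": record.get("overview", ""),
        "release_date": record.get("release_date", ""),
        "popularity": record.get("popularity", 0.0),
        "poster_url": (POSTER_BASE + poster_path) if poster_path else "",
    }

sample = {"title": "Inception", "overview": "A thief enters dreams.",
          "release_date": "2010-07-16", "popularity": 90.5,
          "poster_path": "/abc.jpg"}
print(tmdb_to_movie(sample)["poster_url"])
```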



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbeb2l8quvgqtbfmbzt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbeb2l8quvgqtbfmbzt7.png" alt="Claude Code" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data is then imported:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuihrumms8ofngypte6vi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuihrumms8ofngypte6vi.png" alt="Terminal Output" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Building the FastAPI Backend and Next.js Frontend with Weaviate Cookbooks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/weaviate/agent-skills/blob/main/skills/weaviate-cookbooks/references/frontend_interface.md" rel="noopener noreferrer"&gt;Weaviate cookbooks&lt;/a&gt; enable the app to use a two-layer architecture: a FastAPI backend that exposes REST endpoints and a Next.js frontend that renders the UI. The backend connects directly to Weaviate Cloud and the Weaviate Query Agent. Weaviate cookbooks also include some frontend guidelines to communicate with the &lt;a href="https://github.com/weaviate/agent-skills/blob/main/skills/weaviate-cookbooks/references/frontend_interface.md" rel="noopener noreferrer"&gt;Weaviate backend&lt;/a&gt; over HTTP.&lt;/p&gt;

&lt;p&gt;The app is organized into two views accessed via a collapsible sidebar:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Search view&lt;/strong&gt;: performs semantic search and RAG using Weaviate named vectors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat view&lt;/strong&gt;: handles multi-turn conversations through the Weaviate Query Agent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our app includes the following features:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Layer&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Component&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;backend.py (FastAPI) - REST API on port 8000, interactive docs at /docs&lt;/td&gt;
&lt;td&gt;Routes: GET /health, GET /search, POST /ai/explain, POST /ai/plan, POST /chat&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frontend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Next.js + TypeScript (port 3000)&lt;/td&gt;
&lt;td&gt;Single-page app with sidebar navigation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;SearchView.tsx&lt;/td&gt;
&lt;td&gt;Semantic search (near_text), AI explanations (single_prompt), Movie Night Planner (grouped_task)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MovieCard.tsx&lt;/td&gt;
&lt;td&gt;Renders base64 poster inline, watchlist add/remove button&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;ChatView.tsx&lt;/td&gt;
&lt;td&gt;Multi-turn Query AI Agent chat&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;AppSidebar.tsx&lt;/td&gt;
&lt;td&gt;Navigation (Search/Chat), Weaviate logo + feature summary, watchlist manager with ‘.txt’ export&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Use the following prompts with Claude Code to generate the backend and frontend:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;/weaviate cookbooks 

Create &lt;span class="sb"&gt;`backend.py`&lt;/span&gt;: a FastAPI app with CORS enabled for localhost:3000.
Connect to Weaviate Cloud using credentials from .env with skip_init_checks=True.
The /search endpoint should return genre and vote_average alongside title, description, release_year, and poster.
Implement these routes:  
&lt;span class="p"&gt;
-&lt;/span&gt; GET  /health                  → {"status": "ok"}  
&lt;span class="p"&gt;-&lt;/span&gt; GET  /search?q=...&amp;amp;limit=3    → near_text on text_vector, return title/description/release_year/poster  
&lt;span class="p"&gt;-&lt;/span&gt; POST /ai/explain              → generate.near_text with single_prompt  
&lt;span class="p"&gt;-&lt;/span&gt; POST /ai/plan                 → generate.near_text with grouped_task  
&lt;span class="p"&gt;-&lt;/span&gt; POST /chat                    → QueryAgent.ask() with full message history

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
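Under the hood, the `/search` route has to flatten each Weaviate hit into exactly the fields the prompt asks for. A hedged sketch of that shaping step (the function and its field handling are my own; the backend Claude Code generates may differ):

```python
# Sketch: flatten one Weaviate near_text hit into the /search response shape.
def shape_hit(props):
    release_date = props.get("release_date") or ""
    return {
        "title": props.get("title", ""),
        "description": props.get("overview", ""),
        "release_year": release_date[:4],       # "2016-11-11" becomes "2016"
        "poster": props.get("poster_url", ""),
        "genre": props.get("genre", ""),
        "vote_average": props.get("vote_average"),
    }

hit = {"title": "Arrival", "overview": "Linguist meets aliens.",
       "release_date": "2016-11-11", "poster_url": "", "genre": "Sci-Fi",
       "vote_average": 7.9}
print(shape_hit(hit)["release_year"])
```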



&lt;p&gt;&lt;strong&gt;Frontend Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Using Weaviate cookbooks frontend reference, create a Next.js TypeScript app in the frontend/ folder.
MovieCard.tsx should display a star rating (vote_average) and genre tag beneath the movie title. 

Components needed:  
&lt;span class="p"&gt;
-&lt;/span&gt; page.tsx        — SidebarProvider layout, view state (search | chat)  
&lt;span class="p"&gt;-&lt;/span&gt; SearchView.tsx  — search input, MovieCard grid, AI explain and plan buttons  
&lt;span class="p"&gt;-&lt;/span&gt; MovieCard.tsx   — poster image, title, year, description, watchlist button  
&lt;span class="p"&gt;-&lt;/span&gt; ChatView.tsx    — message bubbles, source citations, clear chat  
&lt;span class="p"&gt;-&lt;/span&gt; AppSidebar.tsx  — navigation, Weaviate logo + feature list, watchlist + exportBackend base URL from NEXT_PUBLIC_BACKEND_HOST env var (default localhost:8000)

Run backend and frontend servers with: uvicorn backend:app --reload --port 8000 and npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, Claude Code will automatically build the app by adding relevant files and start both servers. You can start using the application immediately.&lt;/p&gt;

&lt;p&gt;The FastAPI backend runs at &lt;code&gt;http://localhost:8000/docs&lt;/code&gt;, while the frontend app is available at &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can also manually start both processes in separate terminals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 1 — Backend &lt;/span&gt;
uvicorn backend:app &lt;span class="nt"&gt;--reload&lt;/span&gt; &lt;span class="nt"&gt;--port&lt;/span&gt; 8000
&lt;span class="c"&gt;# Terminal 2 — Frontend&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;frontend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Congratulations! You’ve completed the project without needing to do much manual configuration or coding.&lt;/strong&gt; 🔥&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo
&lt;/h3&gt;

&lt;p&gt;So far, we have used Weaviate Agent Skills with Claude Code to build a Semantic Movie Discovery Application powered by an OpenAI API key, a TMDB API key, and Weaviate.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/4udXaqI0PaQ"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Movie Discovery app we built includes the following features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search:&lt;/strong&gt; Describe a mood or theme and retrieve matching movies using vector-based search (&lt;code&gt;near_text&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI explanations:&lt;/strong&gt; Generate per-movie summaries using RAG with &lt;code&gt;single_prompt&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Movie Night Planner:&lt;/strong&gt; Create a viewing order, snack pairings and a theme summary using &lt;code&gt;grouped_task&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversational chat:&lt;/strong&gt; Ask questions about the movie collection through a chat interface powered by the Weaviate Query Agent, with source citations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watchlist:&lt;/strong&gt; Save movies during your session and export the list as a &lt;code&gt;.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What’s Next?
&lt;/h3&gt;

&lt;p&gt;You could add image-based search to find visually similar movies and better match viewer preferences. You could also add hybrid search, which combines keyword matching with vector search to handle keyword-heavy queries.&lt;/p&gt;

&lt;p&gt;You can take your app even further by getting up to speed with Weaviate’s latest &lt;a href="https://weaviate.io/blog" rel="noopener noreferrer"&gt;releases&lt;/a&gt; and becoming familiar with features such as server-side batching, async replication improvements, Object TTL and many more.&lt;/p&gt;

&lt;p&gt;To explore further, join the discussion on the &lt;a href="https://forum.weaviate.io/" rel="noopener noreferrer"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
&lt;strong&gt;Weaviate Agent Skills in Action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The following Weaviate modules and skills were used in the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text2vec-weaviate:&lt;/strong&gt; Responsible for text embeddings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi2multivec-weaviate:&lt;/strong&gt; Responsible for embedding images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generative-openai:&lt;/strong&gt; Integrates GPT directly into the query workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Skill:&lt;/strong&gt; Creates a collection and imports data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Cookbooks Skill:&lt;/strong&gt; For defining the app’s logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Query Agent:&lt;/strong&gt; A higher-level abstraction that accepts natural language queries, decides the best query method, executes queries, synthesizes results and returns answers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Weaviate Agent Skills help in shipping faster and more accurate RAG applications. Backend development tasks such as schema inspection, data ingestion and search operations are automated and optimized. Ultimately, this helps developers save valuable development time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both MCP servers and Agent Skills provide useful patterns for building AI-powered applications. MCP servers are well-suited for exposing external tools and services through a standardized interface, while Agent Skills focus on guiding coding agents with structured workflows and best practices.&lt;/p&gt;

&lt;p&gt;In this tutorial, we demonstrated how Weaviate Agent Skills can simplify development by helping Claude Code generate correct database queries, ingestion pipelines and search logic. By combining vector search, multimodal storage and generative capabilities, we built a semantic movie discovery application with minimal manual setup.&lt;/p&gt;

&lt;p&gt;As agentic development environments continue to evolve, tools like MCP servers and Agent Skills will likely be used together. The key is understanding where each approach fits and selecting the one that best supports your application architecture.&lt;/p&gt;

&lt;p&gt;Happy building.&lt;/p&gt;




&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://modelcontextprotocol.io/docs/getting-started/intro" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/weaviate/agent-skills" rel="noopener noreferrer"&gt;Weaviate Agent Skills&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/overview" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Studio1HQ/movie-discovery-app" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt; for the Movie Discovery App&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>rag</category>
      <category>webdev</category>
    </item>
    <item>
      <title>We Cut Our MCP Token Spend in Half. Here's the Architecture</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Wed, 25 Mar 2026 19:04:52 +0000</pubDate>
      <link>https://dev.to/studio1hq/we-cut-our-mcp-token-spend-in-half-heres-the-architecture-1jic</link>
      <guid>https://dev.to/studio1hq/we-cut-our-mcp-token-spend-in-half-heres-the-architecture-1jic</guid>
      <description>&lt;p&gt;When we started scaling our MCP workflows, token usage was something we barely tracked. The system worked well, responses were accurate, and adding more tools felt like the right next step. Over time, the cost began rising in ways that did not align with how much the system was actually used.&lt;/p&gt;

&lt;p&gt;At first, we assumed this was due to higher usage or more complex queries. The data showed something else. Even simple requests were using more tokens than expected. This led us to ask a basic question. What exactly are we sending to the LLM on every call?&lt;/p&gt;

&lt;p&gt;A closer look made things clearer. The issue came from how the system was built. We handled context, tool definitions, and execution flow by adding extra tokens at every step.&lt;/p&gt;

&lt;p&gt;This article explains how we found the root cause and redesigned the architecture to fix it. The changes cut our MCP token usage by nearly half and gave us better control over how the system behaves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Token Usage in MCP Systems
&lt;/h2&gt;

&lt;p&gt;Once we started examining token usage, a clear pattern showed up. The LLM was receiving far more context than most requests actually needed. A large part of this came from tool definitions being sent repeatedly on every call.&lt;/p&gt;

&lt;p&gt;Each request included the full list of tools, even when only one or two were needed. On top of that, earlier outputs and intermediate results were passed back into the model. The context kept growing, even for simple queries.&lt;/p&gt;

&lt;p&gt;The execution flow added to the problem. The LLM would choose a tool, call it, process the result, and then repeat the same cycle if another step was needed. Each step added more tokens, and the same data often appeared many times across calls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraya207lc4ie4r2yqsd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraya207lc4ie4r2yqsd2.png" alt="Image1" width="800" height="1422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup worked at a small scale. As the number of tools increased, the cost grew quickly. More tools meant more context. More steps meant repeated processing. The system was doing extra work without adding real value. At this point, the cause was clear. Token usage came from how the system handled context and execution. The design itself was driving the overhead.&lt;/p&gt;
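A toy cost model makes the compounding concrete. All numbers below are illustrative placeholders, not our production measurements; the point is the shape of the growth, where every extra tool and every extra step inflates every subsequent call:

```python
# Toy model: tokens consumed when the full tool list and the growing history
# are resent on every step. Token counts are illustrative, not measured.
def tokens_per_request(num_tools, steps, schema_tokens=300,
                       step_output_tokens=400, base_prompt=200):
    total = 0
    history = base_prompt
    for _ in range(steps):
        total += num_tools * schema_tokens + history  # schemas resent each call
        history += step_output_tokens                 # prior outputs fed back in
    return total

print(tokens_per_request(num_tools=5, steps=1))   # a simple one-step request
print(tokens_per_request(num_tools=40, steps=4))  # more tools, more steps
```

In this model, going from 5 tools and one step to 40 tools and four steps multiplies token usage by roughly 30x, even though the user's question may be no harder.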

&lt;h2&gt;
  
  
  Introducing Bifrost
&lt;/h2&gt;

&lt;p&gt;We started looking for a way to change how the system handled tool execution. The goal was simple. Reduce the amount of context sent to the LLM and avoid repeated processing across steps.&lt;/p&gt;

&lt;p&gt;During this process, we came across &lt;a href="https://www.getmaxim.ai/bifrost" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt;, an &lt;a href="https://github.com/maximhq/bifrost" rel="noopener noreferrer"&gt;open source&lt;/a&gt; MCP gateway. It works between the application, the model, and the tools. It brings structure for how tools are discovered and executed, so the LLM receives only what is needed on each call.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhnphaglsh5ymggy61oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhnphaglsh5ymggy61oe.png" alt="Image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This changed how we thought about the system. Tool access became more controlled. Context stayed limited to what was required for each request. The overall flow of execution became easier to follow and reason about.&lt;/p&gt;

&lt;p&gt;These changes directly addressed the issues we were seeing. Tool definitions were sent only when required. Repeated decision loops were reduced. The system handled execution in a more controlled and predictable way.&lt;/p&gt;

&lt;p&gt;From here, the focus moved away from adjusting prompts and toward changing how the system runs end-to-end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural Changes with Bifrost Code Mode
&lt;/h2&gt;

&lt;p&gt;The main change came from how execution was handled inside Bifrost. &lt;a href="https://docs.getbifrost.ai/mcp/code-mode" rel="noopener noreferrer"&gt;Code Mode&lt;/a&gt; is a Bifrost feature that changes how the LLM interacts with MCP tools. Earlier, the LLM handled both planning and step-by-step tool interaction. Each step required another call, and each call carried a growing context.&lt;/p&gt;

&lt;p&gt;Code Mode separates these responsibilities. The LLM focuses on planning. It generates executable code that defines the full workflow for a task. &lt;/p&gt;

&lt;p&gt;Code Mode works best when multiple MCP servers are involved, workflows have several steps, or tools need to share data. For simpler setups with one or two tools, Classic MCP works well.&lt;/p&gt;

&lt;p&gt;A mixed setup also works. Use Code Mode for heavier workflows like search or databases, and keep simple tools as direct calls.&lt;/p&gt;
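The routing decision can be stated as a small rule of thumb. The thresholds below are our own heuristics, not Bifrost defaults:

```python
# Heuristic router (our own thresholds, not a Bifrost setting): multi-server,
# multi-step, or data-sharing workflows go through Code Mode; simple
# one-or-two-step calls stay on classic MCP.
def choose_mode(num_servers, num_steps, tools_share_data):
    if num_servers > 1 or num_steps > 2 or tools_share_data:
        return "code_mode"    # plan once, execute in the sandbox
    return "classic_mcp"      # direct tool calls are cheaper here

print(choose_mode(num_servers=1, num_steps=1, tools_share_data=False))
print(choose_mode(num_servers=3, num_steps=5, tools_share_data=True))
```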

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz78lp878cwfdmchwomm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz78lp878cwfdmchwomm.png" alt="Image2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Selecting the right tools&lt;/li&gt;
&lt;li&gt;Passing data between tools&lt;/li&gt;
&lt;li&gt;Defining how the final output is produced&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system exposes a minimal interface to the LLM. It can list available tools, read tool details, and, when required, understand how each tool works. Tool definitions are accessed on demand, which keeps the initial context small.&lt;/p&gt;

&lt;p&gt;Once the plan is generated, execution moves to a runtime environment. The code runs in a sandbox and interacts directly with tools. All intermediate steps, tool responses, and data transformations stay within this layer.&lt;/p&gt;

&lt;p&gt;This removes the need for repeated LLM calls during execution. The workflow runs in one pass, guided by the generated code. The LLM is involved mainly at the planning stage and for producing the final response if required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawpurvuv48ogzbgr1rdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawpurvuv48ogzbgr1rdu.png" alt="Image" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow becomes more structured. A request comes in, relevant tools are identified, code is generated, and execution happens in a controlled environment. The system handles state and intermediate data outside the LLM.&lt;/p&gt;

&lt;p&gt;This approach improves clarity in how tasks are executed. The generated code can be inspected, debugged, and understood directly. Each request follows a defined path, which makes behavior easier to track and reason about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Bifrost CLI in Our Workflow
&lt;/h2&gt;

&lt;p&gt;Getting started required two commands. First, start the gateway:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then launch the CLI from a separate terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;MCP servers are registered once through the API. The key flag is &lt;code&gt;is_code_mode_client&lt;/code&gt;, which tells Bifrost to handle that server through Code Mode instead of sending its tool definitions on every request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8080/api/mcp/client &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "name": "youtube",
    "connection_type": "http",
    "connection_string": "http://localhost:3001/mcp",
    "tools_to_execute": ["*"],
    "is_code_mode_client": true
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once registered, the LLM discovers tools on demand using &lt;code&gt;listToolFiles&lt;/code&gt; and &lt;code&gt;readToolFile&lt;/code&gt;, then submits a full execution plan through &lt;code&gt;executeToolCode&lt;/code&gt;. A workflow that previously took six LLM turns now completes in three to four.&lt;/p&gt;

&lt;p&gt;Bifrost organizes tool definitions using two binding levels. Server-level (default) groups all tools from a server into one &lt;code&gt;.pyi&lt;/code&gt; file. Tool-level gives each tool its own file — better for servers with 30+ tools. Set it once in &lt;code&gt;config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tool_manager_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"code_mode_binding_level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"server"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Debugging became simpler because the generated code is the execution plan. When something went wrong, the issue was visible directly in the code rather than buried in prompt chains. This setup also made execution easier to inspect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;youtube&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AI infrastructure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;maxResults&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;titles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;snippet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;items&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;titles&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;titles&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;count&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;titles&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The execution runs in a Starlark interpreter, a restricted subset of Python. A few constraints to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No import statements, file I/O, or network access&lt;/li&gt;
&lt;li&gt;Classes are not supported; use dictionaries instead&lt;/li&gt;
&lt;li&gt;Tool calls run synchronously; async handling is not required&lt;/li&gt;
&lt;li&gt;Each tool call has a default timeout of 30 seconds&lt;/li&gt;
&lt;/ul&gt;
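
&lt;p&gt;Within those limits, generated code relies on plain functions and dictionaries, as in the search example above. A minimal sketch of the pattern (the field names are invented for illustration):&lt;/p&gt;

```python
# Starlark-style sketch: no imports, no classes, no I/O; plain functions
# and dict-based records. Field names here are made up for illustration.

def summarize(items):
    titles = [item["title"] for item in items]
    # A dictionary stands in for what would be a class instance elsewhere.
    return {"titles": titles, "count": len(titles)}

result = summarize([
    {"title": "Intro to MCP", "views": 120},
    {"title": "Gateways 101", "views": 80},
])
```

Because the same code is valid Python, it can be tested outside the sandbox before being handed to the interpreter.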

&lt;p&gt;Code Mode also works with &lt;a href="https://docs.getbifrost.ai/mcp/agent-mode" rel="noopener noreferrer"&gt;Agent Mode&lt;/a&gt; for automated workflows. The &lt;code&gt;listToolFiles&lt;/code&gt; and &lt;code&gt;readToolFile&lt;/code&gt; tools are always auto-executable since they are read-only. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;executeToolCode&lt;/code&gt; tool only auto-executes if every tool call within the generated code is on the approved list. If any call falls outside that list, Bifrost returns it to the user for approval before running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact on Token Usage and System Efficiency
&lt;/h2&gt;

&lt;p&gt;The reduction in token usage came from four specific changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tool schemas were sent only when required&lt;/li&gt;
&lt;li&gt;Intermediate outputs stayed within the execution layer&lt;/li&gt;
&lt;li&gt;Repeated context across steps was removed&lt;/li&gt;
&lt;li&gt;Fewer LLM calls were needed, since execution moved to a sandbox and ran in a single flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These changes had a clear effect. Token usage dropped by nearly half, and latency fell along with it. Execution became more predictable, since each request followed a defined path with fewer moving parts.&lt;/p&gt;

&lt;p&gt;The broader takeaway is that token cost comes from system design. Small changes in prompts or outputs help at the edges; the main overhead comes from the system's structure.&lt;/p&gt;

&lt;p&gt;LLMs work best when they focus on planning. Managing execution through repeated loops adds cost and introduces variability. A separate execution layer keeps the flow stable and easier to understand. Context also needs careful control. It should be built for each request with only the required information. Letting it grow across steps results in unnecessary overhead and increased token usage.&lt;/p&gt;
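
&lt;p&gt;One way to picture that control is a per-request context builder that admits only the tool schemas a request actually needs, instead of accumulating everything across steps. This is a hypothetical sketch; the tool names and schema shapes are illustrative, not Bifrost's internals:&lt;/p&gt;

```python
# Hypothetical sketch: build context per request from only the required
# tools, rather than carrying every schema and prior step along.

ALL_TOOL_SCHEMAS = {
    "youtube.search": {"params": ["query", "maxResults"]},
    "github.issues": {"params": ["repo", "state"]},
    "slack.post": {"params": ["channel", "text"]},
}

def build_context(request_text, required_tools):
    # Only the schemas for the tools this request needs enter the prompt.
    schemas = {name: ALL_TOOL_SCHEMAS[name] for name in required_tools}
    return {"request": request_text, "tools": schemas}

ctx = build_context("List recent AI infrastructure videos", ["youtube.search"])
```

Every unrelated schema kept out of the prompt is tokens that never get billed, on every single request.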

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Token inefficiency in MCP workflows comes from system design. Bifrost and Code Mode introduced a clear separation between planning and execution. The LLM handles planning, and the runtime handles execution. This brought immediate and measurable improvements in both cost and system behavior.&lt;/p&gt;

&lt;p&gt;If you are working with MCP workflows at scale, &lt;a href="https://www.getmaxim.ai/bifrost" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt; is worth exploring. The &lt;a href="https://docs.getbifrost.ai/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; provides a good starting point to set up the gateway, connect servers, and run workflows using Code Mode.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Managing Multi Provider AI Workflows in the Terminal with Bifrost CLI</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Sat, 21 Mar 2026 10:52:18 +0000</pubDate>
      <link>https://dev.to/studio1hq/managing-multi-provider-ai-workflows-in-the-terminal-with-bifrost-cli-ece</link>
      <guid>https://dev.to/studio1hq/managing-multi-provider-ai-workflows-in-the-terminal-with-bifrost-cli-ece</guid>
      <description>&lt;p&gt;Command-line tools are still a common way to work with AI. They give better control and fit naturally into everyday workflows, which is why many people continue to use them.&lt;/p&gt;

&lt;p&gt;A common issue with CLI-based tools is that they are often tied to a single provider. Switching between options usually means updating configs and handling multiple API keys. In some cases, it may even involve changing tools. This slows things down and adds friction to everyday work.&lt;/p&gt;

&lt;p&gt;Bifrost CLI aims to simplify this setup. It provides a single way to connect CLI tools to multiple providers without changing how the tools are used. This article looks at how it works and how to get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Bifrost CLI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.getmaxim.ai/bifrost" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt; is an &lt;a href="https://github.com/maximhq/bifrost" rel="noopener noreferrer"&gt;open-source AI gateway&lt;/a&gt; that works between applications and model providers. It offers provider-compatible endpoints such as OpenAI, Anthropic, and Gemini formats. It manages request routing, API keys, and response formatting in one place, so separate setups for each provider are not required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.getbifrost.ai/quickstart/cli/getting-started" rel="noopener noreferrer"&gt;Bifrost CLI&lt;/a&gt; was recently released to extend this setup to command-line workflows. It allows existing CLI tools to connect through the Bifrost gateway in place of calling providers directly. The CLI tool continues to work in the same way, with only the endpoint updated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq1juk7uh66enws74o00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq1juk7uh66enws74o00.png" alt="Bitfrost CLI" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CLI tool is configured with Bifrost as the base URL. After this, all requests go through the gateway. Bifrost routes each request to the selected provider, converts it into the required API format, and returns a compatible response. The CLI workflow stays the same, with support for multiple providers through a single endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of Bifrost CLI
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI brings several practical features that improve how CLI-based workflows are set up and managed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Setup for CLI Tools:&lt;/strong&gt; Configures base URLs, API keys, and model settings for each agent. This reduces manual steps and keeps the environment ready to use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Discovery from Gateway:&lt;/strong&gt; Fetches available models directly from the Bifrost gateway using the &lt;code&gt;/v1/models&lt;/code&gt; endpoint. This ensures the CLI always reflects the current set of available options.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Integration for Tool Access:&lt;/strong&gt; Attaches Bifrost’s MCP server to tools like Claude Code. This allows access to external tools and extended capabilities from within the CLI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Activity Indicators:&lt;/strong&gt; Displays activity badges for each tab. It becomes easy to see if a session is running, idle, or has triggered an alert.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Credential Storage:&lt;/strong&gt; Stores selections and keys securely. Virtual keys are saved in the OS keyring and are not written in plain text on disk.&lt;/li&gt;
&lt;/ul&gt;
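
&lt;p&gt;Since the gateway exposes OpenAI-compatible endpoints, a &lt;code&gt;/v1/models&lt;/code&gt; response can be expected to follow the OpenAI-style list format. A sketch of extracting model ids from such a payload (the payload and ids below are invented, not actual gateway output):&lt;/p&gt;

```python
# Hypothetical /v1/models payload in the OpenAI-style list format;
# the model ids are illustrative, not real gateway output.
sample_response = {
    "object": "list",
    "data": [
        {"id": "openai/gpt-4o", "object": "model"},
        {"id": "anthropic/claude-sonnet", "object": "model"},
    ],
}

# Extract the model ids the CLI would show in its searchable list.
model_ids = [m["id"] for m in sample_response["data"]]
```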

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI is quick to set up and runs directly from the terminal. The flow includes starting the gateway, launching the CLI, and selecting the agent and model through a guided setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start the Bifrost Gateway
&lt;/h3&gt;

&lt;p&gt;Make sure the gateway is running locally (default: &lt;code&gt;http://localhost:8080&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Install and Launch Bifrost CLI
&lt;/h3&gt;

&lt;p&gt;In a new terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk8d7qf1ycaa6vmnnnu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk8d7qf1ycaa6vmnnnu1.png" alt="Terminal" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the CLI is installed, you can run it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Enter Gateway Details
&lt;/h3&gt;

&lt;p&gt;Provide the Bifrost endpoint URL.&lt;/p&gt;

&lt;p&gt;For local setup, this is usually: &lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If authentication is enabled, you can also enter a virtual key at this stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Choose a CLI Agent
&lt;/h3&gt;

&lt;p&gt;Select the CLI agent you want to use, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codex CLI&lt;/li&gt;
&lt;li&gt;Claude Code&lt;/li&gt;
&lt;li&gt;Gemini CLI&lt;/li&gt;
&lt;li&gt;Opencode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The CLI shows which agents are available and can install missing ones during setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg89ikeylge3qwp6wpyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg89ikeylge3qwp6wpyc.png" alt="CLI UI" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Select a Model
&lt;/h3&gt;

&lt;p&gt;The CLI fetches available models from the gateway and shows them in a searchable list.&lt;/p&gt;

&lt;p&gt;You can choose one directly or enter a model name manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsicwoqe9jep2tpd8jt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsicwoqe9jep2tpd8jt9.png" alt="Choose model name" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Launch the Session
&lt;/h3&gt;

&lt;p&gt;Review the configuration and start the session. The selected agent runs with the chosen model and setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Work with Sessions
&lt;/h3&gt;

&lt;p&gt;After launch, the CLI stays open in a tabbed interface.&lt;/p&gt;

&lt;p&gt;You can open new sessions, switch between them, or close them without restarting the CLI. Each tab shows the current activity state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Bifrost CLI Session Flow
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI is built for repeated, session-based use in the terminal. You can switch between runs, update settings, and continue your work without having to go through the full setup again each time. &lt;/p&gt;

&lt;p&gt;Here are the key steps in the session flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3226ybyduzng3dmsakr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3226ybyduzng3dmsakr0.png" alt="Session flow" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Launch:&lt;/strong&gt; Select the agent and model, then start the session.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work:&lt;/strong&gt; Use the agent as usual. All requests go through Bifrost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Switch Sessions:&lt;/strong&gt; Press &lt;code&gt;Ctrl + B&lt;/code&gt; to open the tab bar, switch between sessions, or start a new one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Return:&lt;/strong&gt; When a session ends, the CLI returns to the setup screen with the previous configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relaunch:&lt;/strong&gt; Change the agent or model, or rerun the same setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence:&lt;/strong&gt; The last configuration is saved and shown the next time the CLI starts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Working with Multiple Models
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI makes it easy to work with different models from the same setup. You do not need to change configurations or restart the tool each time you want to try a different option.&lt;/p&gt;

&lt;p&gt;During setup, the CLI fetches available models from the Bifrost gateway and shows them in a list. You can select one directly or enter a model name if you already know what you want to use.&lt;/p&gt;

&lt;p&gt;If you want to try another model, you can start a new session and choose a different one. Each session runs separately, so you can compare outputs or test different setups side by side.&lt;/p&gt;

&lt;p&gt;All requests go through Bifrost, so differences between providers are handled in the background. The CLI experience stays the same across models.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Bifrost CLI
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI is useful when working with multiple providers or running repeated sessions from the terminal. Since it is built on top of Bifrost, it also brings the benefits of a central gateway into CLI workflows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testing Different Models:&lt;/strong&gt; Try different models across providers from the same setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running Iterative Sessions:&lt;/strong&gt; Start, stop, and relaunch sessions with minor configuration changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Working from the Terminal:&lt;/strong&gt; Keep the entire workflow inside the CLI, with Bifrost handling routing in the background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comparing Outputs:&lt;/strong&gt; Run multiple sessions side by side and observe how different models respond.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managing Multiple Providers:&lt;/strong&gt; Use Bifrost as a single entry point to work across providers in one place.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Control with Bifrost:&lt;/strong&gt; Route all requests through Bifrost for consistent handling of API keys, requests, and responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup helps keep workflows consistent and organized across different providers and sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI brings multi-provider access into the terminal through a single setup. It keeps existing workflows intact and reduces the need to manage separate configurations.&lt;/p&gt;

&lt;p&gt;You can run sessions, switch agents, and try different models from the same interface, with Bifrost handling routing and integration in the background.&lt;/p&gt;

&lt;p&gt;To get started or explore more details, check the &lt;a href="https://docs.getbifrost.ai/quickstart/cli/getting-started" rel="noopener noreferrer"&gt;Bifrost CLI documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why Your OpenClaw Agent Gets Slower and More Expensive Over Time</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Fri, 20 Mar 2026 21:00:34 +0000</pubDate>
      <link>https://dev.to/studio1hq/why-your-openclaw-agent-gets-slower-and-more-expensive-over-time-5c5e</link>
      <guid>https://dev.to/studio1hq/why-your-openclaw-agent-gets-slower-and-more-expensive-over-time-5c5e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;OpenClaw feels fast in the first week. You send a message, the agent responds, and the workflow makes sense. Then gradually, without any obvious change, responses take a little longer, and the API bill at the end of the month is higher than it was two weeks ago, with no single thing you can point to as the cause.&lt;/p&gt;

&lt;p&gt;That is not a coincidence, and it is not bad luck. It is what happens when three separate problems compound on each other quietly, over time, without any of them being obvious on its own.&lt;/p&gt;

&lt;p&gt;Context bloat, static content reprocessed on every call, and every request hitting the same model regardless of what it actually needs: these are not dramatic failures. They are the kind of inefficiencies that feel invisible until they are not, and by the time the invoice makes them obvious, they have been running for weeks.&lt;/p&gt;

&lt;p&gt;In this post, we will break down what is driving each of them and why routing, not prompt tuning or model switching, is the fix that addresses all three at the layer where they actually live.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why the Default Setup Works Against You Over Time&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenClaw's default configuration is built to get you started. It is not designed to remain efficient as your usage grows, and the gap between the two becomes apparent faster than most people expect. Three things are responsible for most of it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Context grows faster than you think&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before you type a single message, your agent has already loaded a significant amount into the context window. &lt;code&gt;SOUL.md&lt;/code&gt;, &lt;code&gt;AGENTS.md&lt;/code&gt;, bootstrap files, the results of a memory search against everything you have accumulated, all of it lands in the prompt before your request even starts.&lt;/p&gt;

&lt;p&gt;That base footprint is manageable in week one. By week three, the memory graph has grown, the search results are broader, and the conversation history from your previous sessions is traveling with every new request. The agent is not selectively pulling relevant data; it loads everything it has access to every time.&lt;/p&gt;

&lt;p&gt;The result is a base token cost per request that is meaningfully higher than it was when you started, without any deliberate change on your part.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Static tokens are processed fresh every time&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A large portion of what is loaded into every request consists of content that has not changed since last week: system instructions, bootstrap files, and agent configuration. Provider-side caching exists specifically to avoid paying full price for static content on repeat calls, but the default OpenClaw setup does not use it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zje4v0xidwlrqxrneqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zje4v0xidwlrqxrneqi.png" alt="Every call. Same cost. No cache" width="800" height="446"&gt;&lt;/a&gt; The same unchanged content, reprocessed from scratch on every heartbeat call.&lt;/p&gt;

&lt;p&gt;Every call processes that unchanged content from scratch. For a setup running a 30-minute heartbeat, that means a full API call with no caching, hitting the configured model, every half hour, regardless of whether anything meaningful is happening in the session. Most users never think of the heartbeat as a cost source, but over a full month, it adds up to a figure worth paying attention to.&lt;/p&gt;
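
&lt;p&gt;Some rough arithmetic shows why the heartbeat is worth watching. The static token count and per-token price below are assumptions for illustration, not measured values:&lt;/p&gt;

```python
# Back-of-the-envelope heartbeat cost. The token count and the price per
# million input tokens are assumptions for illustration, not measurements.

calls_per_day = 24 * 60 // 30          # one call every 30 minutes -> 48
static_tokens_per_call = 20_000        # assumed uncached static context
price_per_million_input = 3.00         # assumed frontier-model input price, USD

monthly_tokens = calls_per_day * 30 * static_tokens_per_call
monthly_cost = monthly_tokens / 1_000_000 * price_per_million_input
# roughly $86 a month for heartbeats alone, under these assumptions
```

Change any of the assumed numbers and the total moves, but the structure of the cost (a fixed charge every half hour, forever) stays the same.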

&lt;h3&gt;
  
  
  &lt;strong&gt;Every request hits the same model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenClaw routes all requests to a single globally configured model. There is no built-in distinction among task types: a status check, a memory lookup, a formatting task, and a multi-step reasoning problem all map to the same endpoint at the same price.&lt;/p&gt;

&lt;p&gt;In practice, the majority of what an agent handles day-to-day is simple work. Summaries, lookups, structured output, short responses. None of it requires a frontier model, but all of it gets one anyway. That is not a usage problem; it is a configuration gap, and it is the highest-leverage thing to fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Structural Fix: Routing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The problem with the approaches people try first (switching to a cheaper model, trimming prompts, reducing heartbeat frequency) is that each addresses one variable at a time. The bill declines slightly, then rises again. What is needed is a layer that sits between OpenClaw and the provider, evaluates each request before it is sent, and determines which model to route it to. That is what routing is, and that is why it is a structural fix rather than a configuration tweak.&lt;/p&gt;

&lt;p&gt;That layer is &lt;a href="https://manifest.build/" rel="noopener noreferrer"&gt;Manifest&lt;/a&gt;, an open-source OpenClaw plugin built specifically to solve this. It sits between your agent and the provider, and the original OpenClaw configuration remains unchanged.&lt;/p&gt;

&lt;p&gt;Manifest intercepts every request before it reaches the LLM. The routing decision takes under 2 ms with zero external calls, after which the request is forwarded to the appropriate model. During that interval, five distinct mechanisms run before the request moves anywhere, starting with how the scoring algorithm decides which tier a request belongs to.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How the scoring algorithm works&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before any request leaves your setup, Manifest runs a scoring pass across 23 dimensions. These dimensions fall into two groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;13 keyword-based checks that scan the prompt for patterns like "prove", "write function", or "what is", and&lt;/li&gt;
&lt;li&gt;10 structural checks that evaluate token count, nesting depth, code-to-prose ratio, tool count, and conversation depth, among others.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each dimension carries a weight. The weighted sum maps to one of four tiers through threshold boundaries. Alongside the tier assignment, Manifest produces a confidence score between 0 and 1 that reflects how clearly the request fits that tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe211zoxmig3h19y1wzrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe211zoxmig3h19y1wzrg.png" alt="Manifest scores" width="800" height="450"&gt;&lt;/a&gt; How Manifest scores a request across 23 dimensions and assigns it a tier in under 2 ms.&lt;/p&gt;

&lt;p&gt;One edge case worth knowing: short follow-up messages like "yes" or "do it" do not get scored in isolation. Manifest tracks the last 5 tier assignments within a 30-minute window and uses that session momentum to keep follow-ups at the right tier, rather than dropping them to simple because they contain almost no content.&lt;/p&gt;

&lt;p&gt;Certain signals also force a minimum tier regardless of score. Detected tools push the floor to standard. Context above 50,000 tokens forces complex. Formal logic keywords move the request directly to reasoning.&lt;/p&gt;
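
&lt;p&gt;Those floors compose with the scored tier in a straightforward way; a sketch (the floor rules and the 50,000-token threshold follow the article, but the code shape is illustrative):&lt;/p&gt;

```python
# Minimum-tier floors as described above: the request ends up at whichever
# is higher, the scored tier or the forced floor. Code shape is illustrative.

TIER_ORDER = ["simple", "standard", "complex", "reasoning"]

def apply_floors(tier, has_tools, context_tokens, has_logic_keywords):
    floor = "simple"
    if has_tools:
        floor = "standard"
    if context_tokens > 50_000:
        floor = "complex"
    if has_logic_keywords:
        floor = "reasoning"
    # Keep whichever of the scored tier and the floor ranks higher.
    return max(tier, floor, key=TIER_ORDER.index)

routed = apply_floors("simple", has_tools=True, context_tokens=60_000,
                      has_logic_keywords=False)
```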

&lt;h3&gt;
  
  
  &lt;strong&gt;The four tiers and what they route&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The tier system is where the cost reduction actually happens. Manifest defines four tiers, each mapped to a different class of model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple:&lt;/strong&gt; greetings, definitions, short factual questions. Routed to the cheapest model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard:&lt;/strong&gt; general coding help, moderate questions. Good quality at low cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex:&lt;/strong&gt; multi-step tasks, large context, code generation. Best quality models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning:&lt;/strong&gt; formal logic, proofs, math, multi-constraint problems. Reasoning-capable models only.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a typical active session, most requests fall into the simple or standard category. Routing those away from frontier models, while sending only what genuinely needs it to complex or reasoning, is where the cost reductions of up to 70% reported by users come from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtsw09jblpxao5q4zcco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtsw09jblpxao5q4zcco.png" alt="Manifest maps each request type" width="800" height="450"&gt;&lt;/a&gt; How Manifest maps each request type to the cheapest model that can handle it.&lt;/p&gt;

&lt;p&gt;Every routed response returns three headers you can inspect: &lt;code&gt;X-Manifest-Tier&lt;/code&gt;, &lt;code&gt;X-Manifest-Model&lt;/code&gt;, and &lt;code&gt;X-Manifest-Confidence&lt;/code&gt;. If a request was routed differently than you expected, those headers tell you exactly what the algorithm saw.&lt;/p&gt;
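
&lt;p&gt;Checking a routing decision is just a matter of reading those headers off the response. The header names are the ones listed above; the values in this sketch are a made-up sample:&lt;/p&gt;

```python
# Sample routing headers on a routed response. Header names are the ones
# Manifest documents; the values here are an invented example.
headers = {
    "X-Manifest-Tier": "standard",
    "X-Manifest-Model": "openai/gpt-4o-mini",
    "X-Manifest-Confidence": "0.82",
}

# The confidence score arrives as a string between 0 and 1.
confidence = float(headers["X-Manifest-Confidence"])
tier = headers["X-Manifest-Tier"]
```

Logging these three values alongside each request is a cheap way to audit whether the tier distribution matches your expectations.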

&lt;h3&gt;
  
  
  &lt;strong&gt;OAuth and provider auth&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Manifest lets users authenticate with their own Anthropic or OpenAI credentials directly through OAuth. If OAuth is unavailable or a session is inactive, it falls back to an API key. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre8deeyllgjry2faztj6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre8deeyllgjry2faztj6.gif" alt="Manifest Auth" width="600" height="337"&gt;&lt;/a&gt; Manifest lets users authenticate with their own Anthropic or OpenAI credentials&lt;/p&gt;

&lt;p&gt;This keeps your model access under your own account, which matters for rate limits, spend visibility, and not routing your traffic through a third-party proxy. More providers are being added.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Fallbacks and what they protect&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each tier supports up to 5 fallback models. If the primary model for a tier is unavailable or rate-limited, Manifest automatically moves to the fallback chain. The request still resolves, just against the next available model in that tier's list. This is particularly relevant for the reasoning tier, where model availability can be less predictable during high-traffic periods, and losing a request entirely is more costly than a slight capability downgrade.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Spend limits without manual monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Manifest lets you set rules per agent against two metrics: tokens and cost. Each rule has a period (hourly, daily, weekly, or monthly), a threshold, and an action. Notify sends an email alert when the threshold is crossed. Block returns HTTP 429 and stops requests until the period resets.&lt;/p&gt;

&lt;p&gt;Rules that block are evaluated on every ingest, while rules that notify run on an hourly cron and fire once per rule per period to avoid repeated alerts for the same breach. For a setup with a 30-minute heartbeat running continuously, a daily cost block is the most direct way to prevent a runaway spend event from compounding overnight without any manual check.&lt;/p&gt;
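
&lt;p&gt;In code, a blocking rule reduces to a threshold check per period. This sketch is illustrative and does not reflect Manifest's actual rule schema:&lt;/p&gt;

```python
# Hypothetical sketch of spend-rule evaluation: a daily cost threshold
# with an HTTP 429 on breach. The rule shape is illustrative only.

def evaluate_rule(rule, spend_this_period):
    if spend_this_period >= rule["threshold"]:
        if rule["action"] == "block":
            # Blocked until the period resets.
            return {"status": 429, "blocked": True}
        # Notify rules let the request through but flag the breach.
        return {"status": 200, "blocked": False, "notify": True}
    return {"status": 200, "blocked": False}

daily_block = {"metric": "cost", "period": "daily",
               "threshold": 5.00, "action": "block"}
decision = evaluate_rule(daily_block, spend_this_period=6.10)
```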

&lt;h2&gt;
  
  
  The Rest Is Worth Knowing
&lt;/h2&gt;

&lt;p&gt;Routing is the core of what Manifest does, but it ships with a few other things that are worth understanding before you use it in production.&lt;/p&gt;

&lt;p&gt;Manifest provides a dashboard that gives a full view of each call: input tokens, output tokens, cache-read tokens, cost, latency, model, and routing tier. Cost is calculated against a live pricing table covering 600+ models, so nothing is estimated. The message log stores all requests and is filterable by agent, model, and time range.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfc9ccmp7jszkhuo2ik6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfc9ccmp7jszkhuo2ik6.png" alt="Manifest dashboard" width="800" height="450"&gt;&lt;/a&gt; Manifest dashboard&lt;/p&gt;

&lt;p&gt;In local mode, nothing leaves your machine. In cloud mode, only OpenTelemetry metadata is sent: model name, token counts, and latency. Message content never moves. The full codebase is open source and self-hostable at &lt;a href="https://github.com/mnfst/manifest" rel="noopener noreferrer"&gt;github.com/mnfst/manifest&lt;/a&gt;, and the routing logic is fully documented.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A quick note before we move on.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Everything in this post reflects how Manifest works at the time of writing, and the space is moving fast enough that some details may already look different by the time you read it. The OAuth providers, the supported models, and the scoring thresholds were all changing; the team was shipping updates even while this article was being written. For anything that has moved since, the &lt;a href="https://manifest.build/docs/introduction" rel="noopener noreferrer"&gt;docs&lt;/a&gt; are the right place to check.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With that said, back to the article. Here is how all of it fits together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting It Together
&lt;/h2&gt;

&lt;p&gt;The three problems do not take turns. They compound on the same request, every time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8605ni77vqje298aahx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8605ni77vqje298aahx.png" alt="Three problems" width="800" height="450"&gt;&lt;/a&gt; Three problems converging into every single request, all at once.&lt;/p&gt;

&lt;p&gt;A heartbeat call on a 30-minute cycle loads accumulated context, reprocesses unchanged system files, and hits a frontier model for a task that needed none of that. In week one it is a small number. By week three it is a pattern, and one you may not notice until the invoice lands.&lt;/p&gt;
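&lt;p&gt;To make the compounding concrete, here is a back-of-the-envelope calculation. The token counts and the per-million-token price are illustrative assumptions, not Manifest's published figures:&lt;/p&gt;

```python
# Back-of-the-envelope cost of a 30-minute heartbeat hitting a frontier model.
# Token count and price below are illustrative assumptions only.

calls_per_day = 24 * 60 // 30           # one call every 30 minutes -> 48 per day
input_tokens_per_call = 20_000          # accumulated context, reprocessed each time
price_per_million_input = 3.00          # assumed frontier input price, USD

daily_cost = calls_per_day * input_tokens_per_call / 1_000_000 * price_per_million_input
print(f"day 1: ${daily_cost:.2f}, week 3 cumulative: ${daily_cost * 21:.2f}")
```

&lt;p&gt;Even with modest assumptions, the same idle heartbeat quietly turns into tens of dollars over three weeks.&lt;/p&gt;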

&lt;p&gt;Routing is the layer that addresses all three at once. It does not solve context growth or caching directly, but it changes the cost of every request before that request leaves your setup. Once that layer is in place, the three problems no longer have room to compound.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Where to Start&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The order matters here. Do not start by switching models or trimming prompts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Manifest and let it run for a few days without changing anything else. The dashboard will show you where the cost is actually coming from.&lt;/li&gt;
&lt;li&gt;Check the model distribution. If simple and standard requests are hitting your highest-tier model, routing is the first thing to configure.&lt;/li&gt;
&lt;li&gt;Set a daily cost block rule to prevent a runaway session from compounding overnight.&lt;/li&gt;
&lt;li&gt;Once routing is active, the cache-read token metric shows how much static content was served from cache rather than processed fresh. That number is worth watching.&lt;/li&gt;
&lt;li&gt;Add per-tier fallbacks to prevent availability gaps from interrupting the session.&lt;/li&gt;
&lt;/ol&gt;
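&lt;p&gt;To build intuition for what tier routing does, here is a deliberately simplified sketch. The signals, thresholds, and tier names are illustrative only, not Manifest's actual scoring, which is documented in its docs:&lt;/p&gt;

```python
# Deliberately simplified tier-routing sketch. The signals, thresholds, and
# tier names are illustrative, not Manifest's actual routing logic.

def route(prompt, has_code=False, needs_tools=False):
    score = min(len(prompt) // 200, 3)   # longer prompts push toward higher tiers
    score += 2 if has_code else 0
    score += 2 if needs_tools else 0
    if score >= 4:
        return "frontier"
    return "mid" if score >= 2 else "small"

print(route("ping"))                                  # a heartbeat ping stays cheap
print(route("Refactor this module", has_code=True))   # code work gets a mid tier
```

&lt;p&gt;The point is not the exact scoring, it is that the decision happens before the request leaves your machine, so cheap tasks never reach an expensive model.&lt;/p&gt;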

&lt;p&gt;The &lt;a href="https://manifest.build/docs/introduction" rel="noopener noreferrer"&gt;&lt;strong&gt;Manifest docs&lt;/strong&gt;&lt;/a&gt; cover installation, routing configuration, and limit setup in full. If you want the broader context on what makes OpenClaw production-ready, &lt;a href="https://dev.to/arindam_1729/5-openclaw-plugins-that-actually-make-it-production-ready-14kn"&gt;this post&lt;/a&gt; is a good place to start.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>Running LLM Applications Across Providers with Bifrost</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Tue, 17 Mar 2026 16:15:23 +0000</pubDate>
      <link>https://dev.to/studio1hq/running-llm-applications-across-providers-with-bifrost-313h</link>
      <guid>https://dev.to/studio1hq/running-llm-applications-across-providers-with-bifrost-313h</guid>
      <description>&lt;p&gt;Many modern applications include AI features that rely on large language models accessed through APIs. When an application sends a prompt to a model and receives a response, that request usually goes through an external service.&lt;/p&gt;

&lt;p&gt;Getting access to different LLMs is easier today. Providers such as &lt;a href="https://platform.openai.com/api-keys" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; and &lt;a href="https://platform.claude.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt; provide model APIs, and platforms like &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; and &lt;a href="https://cloud.google.com/vertex-ai" rel="noopener noreferrer"&gt;Google Vertex AI&lt;/a&gt; give access to several models from one place. Because of this, many applications connect to more than one provider to compare models, manage cost, or keep a backup option if one service fails.&lt;/p&gt;

&lt;p&gt;But each provider works a little differently. Authentication methods, rate limits, and request formats are not the same. Managing these differences inside an application can slowly add complexity to the system. In this article, let us explore Bifrost, an open-source LLM gateway that provides a single layer to route requests and manage interactions with multiple model providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost of Provider Integrations
&lt;/h2&gt;

&lt;p&gt;Connecting to several LLM providers may look simple at the start. Adding another provider can feel like just integrating one more API.&lt;/p&gt;

&lt;p&gt;That situation changes once the application runs in production. Requests may need to go to different models based on cost, response quality, or latency. If a provider slows down or becomes unavailable, the system must redirect requests to another provider and keep the service running.&lt;/p&gt;

&lt;p&gt;Handling these situations introduces additional logic into the codebase. The application needs to manage how requests are routed between models. It must also include retry logic for failed calls, fallback providers during outages, and tracking for how requests are distributed across models.&lt;/p&gt;

&lt;p&gt;Each of these responsibilities adds extra work to the system. Over time, operational logic becomes part of the application and increases maintenance effort. This overhead becomes the hidden cost of working directly with multiple model providers.&lt;/p&gt;
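&lt;p&gt;A minimal sketch of what that retry-and-fallback logic looks like when it lives in application code. The provider callables here are stand-ins, not real SDK clients:&lt;/p&gt;

```python
# Sketch of the fallback and retry logic an application carries when it talks
# to providers directly. Provider callables are stand-ins, not real SDK clients.

def call_with_fallback(prompt, providers, retries=2):
    last_error = None
    for provider in providers:            # try providers in priority order
        for _ in range(retries):
            try:
                return provider(prompt)
            except RuntimeError as err:   # stand-in for timeouts and 5xx errors
                last_error = err
    raise RuntimeError("all providers failed") from last_error

def flaky_provider(prompt):
    raise RuntimeError("503 Service Unavailable")

def healthy_provider(prompt):
    return f"response to: {prompt}"

print(call_with_fallback("hi", [flaky_provider, healthy_provider]))
```

&lt;p&gt;Every application that integrates providers directly ends up with some version of this loop. A gateway moves it out of the codebase entirely.&lt;/p&gt;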

&lt;h2&gt;
  
  
  Introducing Bifrost: A Gateway for LLM Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.getbifrost.ai/overview" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt; is an &lt;a href="https://github.com/maximhq/bifrost" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; LLM and MCP gateway designed to manage interactions between applications and model providers. It sits between the application and the LLM services and acts as a central layer that controls how requests move between systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyseg3iy2fg1v6h6yhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyseg3iy2fg1v6h6yhe.png" alt="Image1" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Applications often connect directly to each provider they use. Bifrost adds a gateway layer between the application and the providers, so requests pass through a single entry point before reaching the model services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffygdaoyre598cw4i7cdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffygdaoyre598cw4i7cdw.png" alt="Image2" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This structure separates provider management from the application. The application sends requests to one endpoint, and the gateway manages communication with different model providers. Provider configuration and request handling stay inside the gateway layer, reducing provider-specific logic in the application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Infrastructure Capabilities
&lt;/h2&gt;

&lt;p&gt;Bifrost provides several infrastructure capabilities for managing LLM interactions across providers. These capabilities move provider-specific handling out of the application and into the gateway layer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-provider routing:&lt;/strong&gt; Bifrost supports multiple AI providers through a single API interface. Applications send requests to one endpoint, and the gateway routes each request to the configured provider or model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load balancing:&lt;/strong&gt; When multiple providers or API keys are configured, Bifrost distributes requests across them based on defined rules. Traffic spreads across providers and reduces the chance of hitting rate limits on a single service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic fallback:&lt;/strong&gt; When a provider returns an error or becomes unavailable, Bifrost sends the request to another configured provider.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic caching:&lt;/strong&gt; Bifrost stores responses and returns them for similar prompts. Prompt comparison uses semantic similarity. This reduces repeated API calls and improves response time.&lt;/li&gt;
&lt;/ul&gt;
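&lt;p&gt;A toy sketch of the idea behind similarity-based caching. It uses string similarity from the standard library as a stand-in; a real semantic cache like Bifrost's compares embeddings, and the threshold here is an arbitrary choice:&lt;/p&gt;

```python
# Toy similarity cache. String similarity stands in for semantic similarity;
# a real semantic cache compares embeddings instead.
from difflib import SequenceMatcher

class SimilarityCache:
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (prompt, response) pairs

    def get(self, prompt):
        for cached_prompt, response in self.entries:
            ratio = SequenceMatcher(None, prompt.lower(), cached_prompt.lower()).ratio()
            if ratio >= self.threshold:
                return response  # close enough: reuse the stored answer
        return None

    def put(self, prompt, response):
        self.entries.append((prompt, response))

cache = SimilarityCache()
cache.put("What is the capital of France?", "Paris")
print(cache.get("What is the capital of France"))  # near-identical prompt reuses the answer
print(cache.get("Explain quantum computing"))      # dissimilar prompt misses the cache
```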

&lt;h2&gt;
  
  
  Platform Support and Integrations
&lt;/h2&gt;

&lt;p&gt;Bifrost fits environments where applications use multiple models and providers. The gateway exposes an OpenAI-compatible API, so applications that already use OpenAI SDKs can connect with minimal changes and send requests through a single endpoint.&lt;/p&gt;
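&lt;p&gt;A sketch of what "minimal changes" means in practice: the request keeps the standard chat-completions shape, and only the base URL changes. The gateway address, port, and model name below are assumptions for illustration; check your own gateway configuration:&lt;/p&gt;

```python
# The request an OpenAI-style client builds stays the same; only the endpoint
# it targets changes. The URLs and model name below are illustrative.
import json
from urllib import request

def chat_request(base_url, model, user_message):
    """Build a standard chat-completions request; a compatible gateway accepts the same shape."""
    payload = {"model": model, "messages": [{"role": "user", "content": user_message}]}
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Direct call vs. gateway call differ only in the base URL (both illustrative):
direct = chat_request("https://api.openai.com", "gpt-4o-mini", "Hello")
via_gateway = chat_request("http://localhost:8080", "gpt-4o-mini", "Hello")
print(direct.full_url)
print(via_gateway.full_url)
```

&lt;p&gt;Because the request shape is unchanged, the switch is effectively a one-line configuration change in the client.&lt;/p&gt;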

&lt;p&gt;Bifrost works with several &lt;a href="https://docs.getbifrost.ai/providers/supported-providers/overview" rel="noopener noreferrer"&gt;LLM providers&lt;/a&gt;, such as OpenAI, Anthropic, Amazon Bedrock, Google Vertex AI, Cohere, and Mistral. Applications can reach these providers through the same gateway interface.&lt;/p&gt;

&lt;p&gt;The gateway also supports the &lt;a href="https://docs.getbifrost.ai/mcp/overview" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt;. Systems that use MCP can connect tools and external services through the same layer used for model requests. Bifrost also includes a &lt;a href="https://docs.getbifrost.ai/plugins/getting-started" rel="noopener noreferrer"&gt;plugin system&lt;/a&gt; for adding custom behavior such as request validation, logging, or request transformation.&lt;/p&gt;

&lt;p&gt;Bifrost can run using tools such as NPX or Docker and can operate in local setups or production environments. The project is open source under the MIT license and can run across different infrastructure environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Gateway Performance and Benchmark&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A gateway processes every request sent to a model provider. The performance of this layer becomes important in systems that handle a large number of AI requests.&lt;/p&gt;

&lt;p&gt;Bifrost is written in Go, a language often used for backend services that process many requests simultaneously. The system focuses on keeping its per-request overhead very small.&lt;/p&gt;

&lt;p&gt;Benchmark tests show that Bifrost adds about 11 microseconds of latency at 5,000 requests per second. That is 0.011 milliseconds, so the delay the gateway introduces is negligible compared with typical model response times.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.getbifrost.ai/benchmarking/getting-started" rel="noopener noreferrer"&gt;published benchmarks&lt;/a&gt; were executed on AWS EC2 t3.medium and t3.large instances. These are cloud virtual machines with moderate CPU and memory resources that are commonly used to run backend services and APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqud1pe1ewno7lns871w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqud1pe1ewno7lns871w.png" alt="Image3" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bifrost also provides a &lt;a href="https://github.com/maximhq/bifrost-benchmarking" rel="noopener noreferrer"&gt;public benchmarking repository&lt;/a&gt; with the scripts and setup used in the tests. Anyone can run the same tests or perform custom benchmarking based on their own infrastructure, traffic patterns, or model providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Bifrost
&lt;/h2&gt;

&lt;p&gt;Bifrost is designed for quick setup and can run locally or in a server environment. The gateway can start in a few steps and begin routing LLM requests through a single endpoint.&lt;/p&gt;

&lt;p&gt;One way to start Bifrost is by using &lt;strong&gt;NPX&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bifrost can also run using &lt;strong&gt;Docker&lt;/strong&gt;, which allows the gateway to start inside a container environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the gateway starts, applications can send LLM requests to the Bifrost endpoint. The gateway then routes the requests to the configured model providers.&lt;/p&gt;

&lt;p&gt;Configuration options allow the gateway to define providers, API keys, routing rules, caching behavior, and fallback settings. These configurations control how requests move between different LLM providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;Managing several LLM providers inside an application can introduce extra operational logic and maintenance effort. A gateway layer offers a cleaner structure for handling these interactions.&lt;/p&gt;

&lt;p&gt;Bifrost provides this layer by placing a gateway between applications and model providers. Requests go through one endpoint, and the gateway manages routing and provider communication.&lt;/p&gt;

&lt;p&gt;This approach keeps provider integrations outside the core application code and places request management in a separate infrastructure layer.&lt;/p&gt;

&lt;p&gt;To explore configuration options, deployment steps, and additional features, &lt;a href="https://docs.getbifrost.ai/overview" rel="noopener noreferrer"&gt;refer to the official Bifrost documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>proxy</category>
      <category>litellm</category>
    </item>
    <item>
      <title>Create Your Custom WSL from Any Linux Distribution (Part - 2)</title>
      <dc:creator>Debajyati Dey</dc:creator>
      <pubDate>Tue, 10 Dec 2024 14:00:00 +0000</pubDate>
      <link>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-2-1h2j</link>
      <guid>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-2-1h2j</guid>
      <description>&lt;p&gt;In the previous part of this two-part blog series, we discussed how to install and set up Void Linux in WSL. In this article we'll cover how to do the same for Arch Linux! Hell Yeah!!&lt;/p&gt;

&lt;p&gt;In the previous post, we went through how to obtain a tar of the desired distro using a Docker container. Here we will see how to obtain the tar when we don't have access to a working Docker container.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Summary of the Content Prior to Reading the Article&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In this article, I'll guide you through installing Arch Linux on WSL without using Docker, instead opting for a VirtualBox VM to create the necessary tar file. We'll cover generating the tar archive, transferring it to your host machine, and importing it into WSL. Additionally, we'll discuss fixing the common automounting error post-installation. This method ensures you can enjoy Arch Linux on WSL, leveraging the flexibility of VM-based installation.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Installing Arch Linux in WSL
&lt;/h2&gt;

&lt;p&gt;Unfortunately, the official Docker image of Arch Linux was not usable for me. I faced a lot of issues inside a running container: commands never executed, and pressing Enter just produced a new line. Very weird.&lt;/p&gt;

&lt;p&gt;So Docker is NOT going to get our job done. Instead, we can use VirtualBox to create a VM instance of Arch Linux. I am not going to give a complete Arch Linux installation tutorial here; that would be too much for this article, and there are plenty of tutorials on YouTube covering Arch Linux installation on VirtualBox. And if you are a &lt;strong&gt;REAL&lt;/strong&gt; NERD Linux &lt;strong&gt;fanboy&lt;/strong&gt; (like me!!!) you may want to install Arch "&lt;strong&gt;The Arch Way&lt;/strong&gt;" (without the &lt;code&gt;archinstall&lt;/code&gt; script!).&lt;/p&gt;

&lt;h3&gt;
  
  
  Assuming You Have Already Done A Base Installation inside VBox
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Getting the Tar File
&lt;/h4&gt;

&lt;p&gt;Run this command (assuming you are the root user and currently in the &lt;code&gt;/root&lt;/code&gt; directory) inside the virtual machine to generate the whole-system archive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-cpvzf&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;archlinux.tar.gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;/proc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;/sys&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;/root/archlinux.tar.gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--one-file-system&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me break down the command into understandable parts.&lt;/p&gt;

&lt;p&gt;Here, the &lt;code&gt;--one-file-system&lt;/code&gt; flag means:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fao9dns3vuqhetpzay3pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fao9dns3vuqhetpzay3pt.png" alt="meaning of the --one-file-system flag" width="773" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have excluded the &lt;code&gt;/proc&lt;/code&gt; and &lt;code&gt;/sys&lt;/code&gt; directories to keep the tar file comparatively small. This is safe because they will be regenerated anyway when the archive is imported into WSL.&lt;/p&gt;

&lt;p&gt;And finally, you may already understand why we excluded the tar file itself.&lt;br&gt;&lt;br&gt;
If we did include it, two &lt;strong&gt;TERRIFYING&lt;/strong&gt; things could happen:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-Referencing File&lt;/strong&gt;: The tar command would attempt to include the &lt;code&gt;archlinux.tar.gz&lt;/code&gt; file in the archive. This can lead to recursive inclusion, where the tar process continually adds the same file over and over, causing an infinite loop of inclusion until disk space runs out or the process is forcibly terminated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Import Issues&lt;/strong&gt;: Even if the tar command manages to execute without crashing, including the &lt;code&gt;archlinux.tar.gz&lt;/code&gt; file can cause problems when you import the archive into WSL. The import process (which involves extraction) might attempt to re-extract the tar file recursively, complicating extraction and potentially leading to errors such as system overload or other unexpected results.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Creating the full system backup will take some time.&lt;/p&gt;

&lt;p&gt;So now you have the tar file in your current directory. Check that with the &lt;code&gt;ls&lt;/code&gt; command.&lt;/p&gt;
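&lt;p&gt;If you want to sanity-check the exclusion behaviour on a small scale first, you can dry-run the same flags on a throwaway directory (all paths here are illustrative):&lt;/p&gt;

```shell
# Small-scale dry run of tar's --exclude behaviour before trusting the full backup.
# Paths are illustrative; the real command archives / as shown above.
mkdir -p demo/keep demo/skip
echo data > demo/keep/file.txt
echo junk > demo/skip/file.txt
tar -cpzf demo.tar.gz --exclude=demo/skip demo
tar -tzf demo.tar.gz    # lists demo/keep/file.txt but nothing under demo/skip/
```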
&lt;h4&gt;
  
  
  Transferring the tar file
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To get the tar file out of the VM, you have several options. The most convenient is to install a full DE (Desktop Environment) such as XFCE or GNOME, and then install the VirtualBox Guest Additions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After you have the Guest Additions installed, shut down the VM and start it again. With Guest Additions installed in the system, you can use VirtualBox's shared folders feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuerk95r455xladcd8ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuerk95r455xladcd8ad.png" alt="Using Shared Folders in VBox" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quickly set up a shared folder, then transfer the tar from the guest to the host machine through it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hurray! Now you have the tar file in your host machine ready to be imported.&lt;/p&gt;
&lt;h4&gt;
  
  
  Importing it into WSL
&lt;/h4&gt;

&lt;p&gt;In the command below, provide the directory where the virtual hard disk image (vhdx) file should be created (in place of &lt;code&gt;E:\VMs\WSLs\Arch\&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Arch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;E:\VMs\WSLs\Arch\&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;\archlinux.tar.gz&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I keep all the custom WSL installations (vhdx files) inside this directory &lt;code&gt;E:\VMs\WSLs\&lt;/code&gt;. This is the way I keep them organised.&lt;/p&gt;

&lt;p&gt;For reference, you can also watch this YouTube tutorial by AgileDevArt:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/CFWZqe5bkAE"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Post Installation
&lt;/h2&gt;

&lt;p&gt;From the previous part, you already know how to create user accounts and set passwords on a Linux command line, so there is no need to fill this section with the same instructions. Knowledge transfers easily between similar operating systems.&lt;/p&gt;

&lt;p&gt;Still, there is one problem you will face that must be fixed!&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixing the Automounting Error
&lt;/h3&gt;

&lt;p&gt;When you enter your Arch Linux &lt;strong&gt;WSL&lt;/strong&gt; with the command &lt;code&gt;wsl -d Arch&lt;/code&gt; after a fresh install, it will first print &lt;strong&gt;'Processing fstab with mount -a failed.'&lt;/strong&gt; in the console and then drop you into the bash shell of the Arch distribution.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is fstab?
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;/etc/fstab&lt;/code&gt; is the configuration file which contains the information about all available partitions and indicates how and where they are mounted.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the reason behind this problem?
&lt;/h4&gt;

&lt;p&gt;From your base installation method (in the VM, of course), you should know that the original base install in VirtualBox had separate filesystem partitions. Those partitions either don't exist in WSL or now have different filesystem UUIDs.&lt;/p&gt;

&lt;h4&gt;
  
  
  What to do now?
&lt;/h4&gt;

&lt;p&gt;Comment out or delete all the uncommented lines in &lt;code&gt;/etc/fstab&lt;/code&gt;, because their corresponding filesystem partitions no longer exist in WSL.&lt;br&gt;&lt;br&gt;
After a full system reboot (I mean reboot your Windows machine), the errors should disappear.&lt;/p&gt;
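&lt;p&gt;One non-interactive way to do this is with &lt;code&gt;sed&lt;/code&gt;. The sketch below runs on a sample file; apply it to &lt;code&gt;/etc/fstab&lt;/code&gt; itself only after taking a backup:&lt;/p&gt;

```shell
# Comment out every active (non-comment, non-blank) line.
# Demonstrated on a sample file; back up /etc/fstab before doing this for real.
printf 'UUID=1234-ABCD / ext4 defaults 0 1\n# existing comment\n' > fstab.demo
sed -i 's/^[^#[:space:]]/#&/' fstab.demo   # prefix each active entry with '#'
cat fstab.demo    # every line now begins with '#'
```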

&lt;p&gt;Read More at &lt;a href="https://unix.stackexchange.com/a/780166/605989" rel="noopener noreferrer"&gt;unix.stackexchange.com&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;So at last, you have the complete picture.&lt;/p&gt;

&lt;p&gt;From now on you can have any Linux distro you want inside WSL.&lt;/p&gt;

&lt;p&gt;You basically just need a tar archive, which you can obtain using a container or a VM, depending on the situation.&lt;/p&gt;

&lt;p&gt;If you found it useful, please consider sharing this article with your other developer friends.&lt;/p&gt;

&lt;p&gt;Feel free to connect with me :)&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Thanks for reading! 🙏🏻 &lt;br&gt; Written with 💚 by &lt;a href="https://dev.to/ddebajyati"&gt;Debajyati Dey&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;&lt;a href="https://github.com/Debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tu7kfqhw7z1yzmng4ah.png" alt="My GitHub" width="40" height="39"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://www.linkedin.com/in/debajyati-dey/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femp5sh8d4fq0g89lqsia.png" alt="My LinkedIn" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://app.daily.dev/debajyatidey" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20akag0pdeq95u76k9e8.png" alt="My Daily.dev" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://peerlist.io/debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscfsnjdwyhm803f7mlv.png" alt="My Peerlist" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://x.com/ddebajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0265bz6hmdfybuw0a605.png" alt="My Twitter" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Linux users are hackers! Happy Hacking! 🐱‍💻&lt;/p&gt;

</description>
      <category>linux</category>
      <category>archlinux</category>
      <category>tutorial</category>
      <category>bash</category>
    </item>
    <item>
      <title>Create Your Custom WSL from any Linux Distribution (Part-1)</title>
      <dc:creator>Debajyati Dey</dc:creator>
      <pubDate>Sun, 08 Dec 2024 14:11:29 +0000</pubDate>
      <link>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-1-51k1</link>
      <guid>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-1-51k1</guid>
      <description>&lt;h2&gt;
  
  
  Summary of the Content Prior to Reading the Article
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Ever wanted Arch or Void Linux as your WSL distro for Windows? Do you know that you can actually (YES ACTUALLY!!!) install any Linux distribution as your WSL distro? This guide covers how to import any Linux distro to WSL2 using a tar file. We'll use a Docker container to get the tar file, import it to WSL, and set up Void Linux as an example. Follow the steps to download the Docker image, export it to a tar file, and import it to WSL. We'll walk through post-installation configurations like creating user accounts, setting up default shell and user, updating the system, and making WSL accessible as a Windows desktop app. By the end, you'll have a fully functional, custom Linux distro on your Windows machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Initial Discussion
&lt;/h2&gt;

&lt;p&gt;If you run &lt;code&gt;wsl -l -o&lt;/code&gt; in your Windows terminal (cmd or PowerShell), you'll see an output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fits0lzxvpf0uh4asaabr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fits0lzxvpf0uh4asaabr.png" alt="List of valid distributions that can be installed using" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This list is really disappointing. I mean, there are many more distros out there with different package management systems and different useful features, like Fedora, Arch, Void, Artix, Alpine, etc.&lt;/p&gt;

&lt;p&gt;Now you may also think that the options are very limited in the case of WSL. That is not actually true.&lt;/p&gt;

&lt;p&gt;If you search the &lt;strong&gt;MS Store&lt;/strong&gt;, you'll see some third-party Linux distributions that are specifically developed for WSL (&lt;strong&gt;WSL-only Linux distributions&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;While &lt;strong&gt;ArchWSL&lt;/strong&gt; and &lt;strong&gt;Fedora WSL&lt;/strong&gt; on the &lt;strong&gt;MS Store&lt;/strong&gt; may look great before installing, these distros have often shown compatibility issues and sometimes very weird bugs, even conflicts with &lt;a href="https://scoop.sh" rel="noopener noreferrer"&gt;scoop&lt;/a&gt; or &lt;a href="https://chocolatey.org/" rel="noopener noreferrer"&gt;chocolatey&lt;/a&gt; apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install other distros then?
&lt;/h2&gt;

&lt;p&gt;WSL2 provides a way to import any Linux distro as a WSL instance from a tar file (essentially a backup) of the Linux OS on a machine.&lt;/p&gt;

&lt;p&gt;For example, suppose you have a laptop with some Linux distribution fully installed as the operating system. You can use the tar command to make a compressed tar file replicating your whole OS, starting from the &lt;code&gt;/&lt;/code&gt; (root) directory, as one file system.&lt;/p&gt;

&lt;p&gt;Then you can transfer the tar file to your Windows machine using a USB drive. Next, you use the &lt;code&gt;--import&lt;/code&gt; flag of the wsl command, and a new WSL instance with its own filesystem (virtual hard drive) &amp;amp; the name you provide gets registered within the subsystem.&lt;/p&gt;

&lt;p&gt;Let's walk through a complete tutorial to get you covered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Void Linux in WSL
&lt;/h2&gt;

&lt;p&gt;By far the easiest way to get a tar file of an OS is to use a Docker container. Follow the steps described below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Obtaining The TAR file (Archive)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;First of all, pull the official Docker image of Void Linux from the GitHub Container Registry. Make sure you already have a WSL instance (Ubuntu, openSUSE, or any other) installed and set up for Docker.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ghcr.io/void-linux/void-glibc-full
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;After pulling, it should show up in the images list -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoffap3etkjbqd0d767y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoffap3etkjbqd0d767y.png" alt="installed docker images" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now run the container using -&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; ghcr.io/void-linux/void-glibc-full sh
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;It should drop you into the Void Linux container. You can run any command to check whether the shell is really working, as shown below -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o29aojgyhfzem9s9hs9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o29aojgyhfzem9s9hs9.png" alt="Running the void linux docker container in interactive mode" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, keeping this terminal instance alive (without exiting the container), open another WSL terminal instance and run &lt;code&gt;docker ps&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82ncey2sqx686ugckent.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82ncey2sqx686ugckent.png" alt="List of running containers" width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the container included in the running containers list, like in the image.&lt;/p&gt;

&lt;p&gt;Then grab the ID of the running container - for example with &lt;code&gt;docker ps -q&lt;/code&gt;, as shown below -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyd3rpblx8ebowloqicu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyd3rpblx8ebowloqicu.png" alt="Getting The Running Container ID" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've got our running container ID. Yoohooo!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now run this final command to obtain the tar file. Change the output path to whatever you prefer; in my case it's the VMs folder on my E: drive. (Always read a command before running it.)&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nv"&gt;$dockerContainerID&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /mnt/e/VMs/voidlinux.tar
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You can stop the container and exit the WSL terminal afterwards.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using the TAR file
&lt;/h3&gt;

&lt;p&gt;If you open the path where the tar file was created in Windows Explorer, you will see the tar file there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5lcxgzfo2n49rl52abk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5lcxgzfo2n49rl52abk.png" alt="seeing the voidlinux tar file" width="781" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's create a new folder named 'WSLs' and move the tar file into it. Inside it, create another folder named 'Void'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpexzzz9s5wg2ujfpa8y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpexzzz9s5wg2ujfpa8y1.png" alt="Moved the tar file in a specific directory" width="759" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open the folder 'WSLs' in your terminal (cmd or pwsh), and run this command -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Void&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;E:\VMs\WSLs\Void\&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;\voidlinux.tar&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The first argument after &lt;code&gt;wsl --import&lt;/code&gt; is the name you choose for the distribution being imported.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second argument is the absolute path of the directory where the virtual hard disk image file (.vhdx) will be created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third argument is the path of the tar file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After this command is successfully executed, you'll see the filesystem of Void in the Linux subsystem (open file explorer to see).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff343oe46tb74d2gs5v9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff343oe46tb74d2gs5v9u.png" alt="Void Linux successfully registered" width="759" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yeah! Cool!&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;💡Caution!&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Don't delete the .vhdx file that was just created. If you think you can now delete the virtual hard disk image file because Void is already imported and exists in your Linux subsystem, you are totally wrong. The filesystem you can see (as in the image above) is only accessible as long as the .vhdx file exists, and at the same path.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You've successfully installed Void Linux in your Linux Subsystem. Huge Congratulations! ✨✨✨&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to Do After Installing a Custom Distribution
&lt;/h2&gt;

&lt;p&gt;Now if you are thinking, you're all done, then you're again making a mistake.&lt;/p&gt;

&lt;p&gt;How?&lt;/p&gt;

&lt;p&gt;Because the way we installed the OS in the subsystem lacks the automated setup the officially available WSL distros get. Generally, when you install Ubuntu, Kali, or openSUSE via the command line or the MS Store, the installer automatically creates a user account for you (and makes it the default one), asks for a password, and applies a bunch of configurations behind the scenes.&lt;/p&gt;

&lt;p&gt;Because we installed the OS into our subsystem with a bare import, we got none of that setup out of the box. There's only one user account: the root user.&lt;/p&gt;

&lt;p&gt;We will create a user account, give it a password, and add it to the &lt;code&gt;sudoers&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Open the Void Linux shell with &lt;code&gt;wsl -d Void&lt;/code&gt; in your PowerShell/cmd terminal. (Don't worry - we'll set up a more convenient way to launch the shell at the end.)&lt;/p&gt;

&lt;p&gt;Once it opens, run the following commands.&lt;/p&gt;

&lt;p&gt;But before creating user accounts, let's take care of some prerequisites.&lt;/p&gt;

&lt;p&gt;First, run -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you'll be shocked!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldrj8av6n6zx3peoj8yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldrj8av6n6zx3peoj8yf.png" alt="clear command NOT found" width="234" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our OS was imported from a base install of the Void Linux container, so we don't have all the important tools yet. The clear command doesn't work because &lt;strong&gt;ncurses&lt;/strong&gt; isn't installed - ncurses is a library that provides terminal handling and user interface functions for C programs. Void Linux uses the &lt;strong&gt;xbps&lt;/strong&gt; package manager to install, update, and remove software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzhn0plhmb3xw0r7dw8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzhn0plhmb3xw0r7dw8i.png" alt="Info about the XBPS Package Manager" width="800" height="515"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; ncurses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga68sueub7id6ghm2ef1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga68sueub7id6ghm2ef1.png" alt="installing ncurses" width="800" height="982"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the clear command should work successfully.&lt;/p&gt;

&lt;p&gt;The second step is updating the system. Void Linux is a rolling-release distribution, so it gets frequent updates; you should update the system as often as you can (daily, if possible). Update it with -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-Syu&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-S&lt;/code&gt; flag means sync, &lt;code&gt;-y&lt;/code&gt; means yes, and &lt;code&gt;-u&lt;/code&gt; means update; combined, they are written as &lt;code&gt;-Syu&lt;/code&gt;. The system update may take some time if your internet connection is poor.&lt;/p&gt;

&lt;p&gt;After the full system upgrade, install &lt;code&gt;less&lt;/code&gt; and &lt;code&gt;bash&lt;/code&gt;. Bash (Bourne Again Shell) is not provided by default, so it isn't installed yet; the shell you have been using to run commands is &lt;code&gt;sh&lt;/code&gt; (Bourne Shell), the predecessor of Bash.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; less bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;💯 😎 Pro Tip&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pipe commands whose output is taller than your terminal into less. That way you can scroll through the output with the arrow keys and search for words with a forward slash (/). It's particularly useful when viewing a program's help text. No need to touch the mouse!&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now change the default shell from sh to bash -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;chsh &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which bash&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you exit the shell and re-enter it, you'll see that the command line's prompt string has changed, indicating that the default shell is now bash instead of sh.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjdmvnyq43san89m0x3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjdmvnyq43san89m0x3z.png" alt="default $PS1 of bash" width="141" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And it should look something like above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a user account
&lt;/h3&gt;

&lt;p&gt;Now it's time to create the user account that will become the default user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; &lt;span class="nb"&gt;sudo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install sudo on your Void Linux system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr9jmp23dx60xl339poy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr9jmp23dx60xl339poy.png" alt="Installing sudo" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now run the command below to create a user account with a home directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &amp;lt;your-username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your-username&amp;gt;&lt;/code&gt; with the username you want.&lt;/p&gt;

&lt;p&gt;Run the below command to list all available groups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /etc/group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxh1yfp335aj5ytf3sys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxh1yfp335aj5ytf3sys.png" alt="Listing all the currently available groups in the system" width="257" height="764"&gt;&lt;/a&gt;&lt;/p&gt;
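&lt;p&gt;If that output looks cryptic: each line of &lt;code&gt;/etc/group&lt;/code&gt; has the form &lt;code&gt;group_name:password:GID:member-list&lt;/code&gt;. Here is a small demo of pulling just the group names out of that format (it uses a sample file so it runs anywhere; on the real system you would point &lt;code&gt;cut&lt;/code&gt; at &lt;code&gt;/etc/group&lt;/code&gt; directly):&lt;/p&gt;

```shell
# Demo: extract group names from /etc/group-style lines.
# Real usage would be: cut -d: -f1 /etc/group
cat > group.demo <<'EOF'
wheel:x:10:alice
audio:x:20:
EOF

# -d: sets ":" as the field delimiter, -f1 keeps only the first field
cut -d: -f1 group.demo
```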

&lt;p&gt;Now add your user to the groups you want using the &lt;code&gt;usermod&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;This is the syntax -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &amp;lt;group_name&amp;gt; &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, you can put multiple groups in place of &lt;code&gt;&amp;lt;group_name&amp;gt;&lt;/code&gt; by separating them with commas. For example, I would do this -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; wheel,audio,video,kvm,tty,storage,plugdev,lp,dialout,users &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To double-check, you can run -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;groups&lt;/span&gt; &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to confirm that the user was actually added to the groups we specified in the command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z8oyi54ju0hgwyzprnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z8oyi54ju0hgwyzprnx.png" alt="Displaying all the groups, the newly created user has access to" width="675" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In case you don't know what these groups are for -&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Group&lt;/th&gt;
&lt;th&gt;Meaning (Use Case)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;wheel&lt;/td&gt;
&lt;td&gt;Grants users the ability to execute commands as the superuser (root) using &lt;code&gt;sudo&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tty&lt;/td&gt;
&lt;td&gt;Grants access to terminal (TTY) devices, if needed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;storage&lt;/td&gt;
&lt;td&gt;For users who need access to storage devices, such as external drives.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;audio&lt;/td&gt;
&lt;td&gt;Grants access to audio devices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;video&lt;/td&gt;
&lt;td&gt;Grants access to video devices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dialout&lt;/td&gt;
&lt;td&gt;Provides access to serial ports.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;lp&lt;/td&gt;
&lt;td&gt;Grants access to printer devices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kvm&lt;/td&gt;
&lt;td&gt;For users who need to manage virtual machines using KVM (Kernel-based Virtual Machine).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;plugdev&lt;/td&gt;
&lt;td&gt;Allows access to removable devices like USB drives.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;users&lt;/td&gt;
&lt;td&gt;This is a general group for regular users.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Neat! We're almost done creating a regular user account. But the last crucial step is not done yet!&lt;/p&gt;

&lt;p&gt;We need to add the user to the sudoers file so that it can gain superuser access (admin privileges) using the sudo command. Before that, we should set a password for both the regular user and root. You can use the same password for both if this WSL instance won't be touched by anyone you don't trust, and you tend to forget things.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting A Password
&lt;/h3&gt;

&lt;p&gt;To add or change a user's password, we need the &lt;code&gt;passwd&lt;/code&gt; utility installed on the system. If it's not already available, install it -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; passwd &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Type &lt;code&gt;passwd&lt;/code&gt; and you'll be prompted to set the password for the root user.&lt;/p&gt;

&lt;p&gt;Next, -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;passwd &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;username&amp;gt;&lt;/code&gt; with the user we just created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding user to Sudoers File
&lt;/h3&gt;

&lt;p&gt;Now we need to edit the &lt;code&gt;sudoers&lt;/code&gt; file to properly grant superuser access to our user. For that, we need a text editor. If you're comfortable with nvim, install it; otherwise, install nano.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; neovim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; nano
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now use visudo to edit the sudoers file - visudo checks the file's syntax before saving, which protects you from locking yourself out of sudo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;EDITOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nvim &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; visudo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace nvim with nano if you want to use nano.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x42atgrs05wmz2lal3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x42atgrs05wmz2lal3x.png" alt="user privilege specification in the sudoers file" width="712" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find the uncommented line shown in the image in your file, and add the line below underneath it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;username&amp;gt; &lt;span class="nv"&gt;ALL&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;ALL&lt;span class="o"&gt;)&lt;/span&gt; ALL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would look like this -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop2pziayaplr0fjubczr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop2pziayaplr0fjubczr.png" alt="Specifying the user privilege of the new user in the sudoers file" width="385" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now save the file and exit.&lt;/p&gt;

&lt;p&gt;Finally, set the new user as the default user by appending a &lt;code&gt;[user]&lt;/code&gt; section to &lt;code&gt;/etc/wsl.conf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;myUsername&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;username&amp;gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"[user]&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;default=&lt;/span&gt;&lt;span class="nv"&gt;$myUsername&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/wsl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
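&lt;p&gt;If you're curious what those two lines actually append, here is the same idea run against a local demo file instead of &lt;code&gt;/etc/wsl.conf&lt;/code&gt; (the username &lt;code&gt;alice&lt;/code&gt; is just a placeholder; &lt;code&gt;printf&lt;/code&gt; is used here because its escape handling is more portable across shells than &lt;code&gt;echo -e&lt;/code&gt;):&lt;/p&gt;

```shell
# Demo: what the appended [user] section looks like.
# On the real system the target file would be /etc/wsl.conf.
myUsername=alice
printf '[user]\ndefault=%s\n' "$myUsername" >> wsl.conf.demo

cat wsl.conf.demo
```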



&lt;p&gt;Now we are all set. Exit the Linux shell and terminate the distro by running -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--terminate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Void&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then open it again with -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Void&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, yeah! We did it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Optional Extra Configurations
&lt;/h2&gt;

&lt;p&gt;As of now, without any configuration, the prompt string (PS1) looks like this -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7pel1t6f3g3mvn3zzc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7pel1t6f3g3mvn3zzc7.png" alt="Default prompt string of the commandline in the session of the current user" width="152" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;which is really, really ugly to me. I would need to run pwd every time I wanted to check which directory I'm in, which absolutely sucks!&lt;/p&gt;

&lt;p&gt;So, let's change the prompt string.&lt;/p&gt;

&lt;p&gt;Open your .bashrc file (assuming you are using nvim) -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nvim ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to the end of the file and add this line below it to change the prompt string.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;PS1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[32m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;[&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[31m&lt;/span&gt;&lt;span class="se"&gt;\]\u\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[33m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[32m&lt;/span&gt;&lt;span class="se"&gt;\]\h\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[36m&lt;/span&gt;&lt;span class="se"&gt;\]\w\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[32m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;]&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[30;46m&lt;/span&gt;&lt;span class="se"&gt;\]\\&lt;/span&gt;&lt;span class="nv"&gt;$\&lt;/span&gt;&lt;span class="s2"&gt;[&lt;/span&gt;&lt;span class="se"&gt;\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt; "&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I know the string may look like unreadable, obfuscated gibberish, but that's only because of the ANSI escape codes heavily used here.&lt;/p&gt;

&lt;p&gt;Save the file, exit and do&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to make the changes take effect. HURRAY! Now you have an elegant and useful prompt string on your command line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj7ged46xrdroge6mk8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj7ged46xrdroge6mk8p.png" alt="Voidlinux WSL set up with a fancy colorful prompt string" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, if you read man pages a lot, you'll want the MANPATH environment variable set at startup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lwqjr7yzti3iqeqa4eh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lwqjr7yzti3iqeqa4eh.png" alt="Setting the MANPATH env variable in the .bash_profile file" width="564" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add this line (as shown in the image above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;MANPATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/man
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you have &lt;strong&gt;man&lt;/strong&gt; and &lt;strong&gt;man-db&lt;/strong&gt; installed, you can successfully access man pages from the commandline with the &lt;strong&gt;man&lt;/strong&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping this WSL as a Desktop App
&lt;/h2&gt;

&lt;p&gt;As we all clearly understand, launching the WSL instance by opening a cmd shell and running &lt;code&gt;wsl -d Void&lt;/code&gt; every time is not a very convenient approach.&lt;/p&gt;

&lt;p&gt;Most likely, after a reboot of your PC, you'll see that a new terminal profile holding our Void Linux shell has been added automatically. If not, create a new profile for Void in your Windows Terminal.&lt;/p&gt;

&lt;p&gt;Now go through the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Right-click on your desktop background and select the option to create a new shortcut. You'll see a popup like the one below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx6l6e2lfdqw6tulxthm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx6l6e2lfdqw6tulxthm.png" alt="Typing in the given input area, the command/location of the file/process we are creating a desktop shortcut for" width="614" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the empty input area provided, enter the command that will open the newly created Void Linux terminal profile in the default user's $HOME directory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, type &lt;code&gt;C:\Users\user\AppData\Local\Microsoft\WindowsApps\wt.exe nt -p Void --tabColor #27e336&lt;/code&gt; in there and click &lt;strong&gt;Next&lt;/strong&gt;. (Make sure the path of &lt;code&gt;wt.exe&lt;/code&gt; is correct; if Windows Terminal is installed at a different path on your machine, use that path instead.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now type whatever name you want for the shortcut. I'm naming it &lt;strong&gt;void&lt;/strong&gt;. Click &lt;strong&gt;Finish&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congrats! Another milestone achieved! Now you can access this WSL directly from your Windows search bar by typing &lt;strong&gt;void&lt;/strong&gt;. How amazing!&lt;/p&gt;

&lt;p&gt;I suggest changing the app's icon to something that catches the eye more easily (you may need to download an image and convert it to an .ico file if you want the Void Linux logo as the icon).&lt;/p&gt;

&lt;p&gt;This is what my desktop app looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft89g3d00keaneka33c3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft89g3d00keaneka33c3s.png" alt="Void Linux Desktop Shortcut" width="760" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's all! Really! You finally have a custom Linux distribution in your subsystem, one that is not readily available in the MS Store or online WSL registries, configured and ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Soo, how are you feeling? Tell me in the comments!&lt;/p&gt;

&lt;p&gt;It must feel chaotic good to have access to most of the cutting-edge development software and tools you need right within the terminal. One package manager to rule them all. If you are tired of getting old or outdated packages on Debian/Ubuntu, then this is going to be a refreshing experience.&lt;/p&gt;

&lt;p&gt;In case you couldn't follow the steps to produce the tar file, or ran into any kind of trouble and ended up without the archive, don't worry.&lt;/p&gt;

&lt;p&gt;I am attaching the MEGA link to the Void Linux tar file I created, so that you can at least try it out! ;)&lt;/p&gt;

&lt;p&gt;This is the decryption key for the MEGA file: &lt;code&gt;3uMXrmDWP6WUb6kKjzb5B0Zc-Qh1w5oLE2LbZ4lOzhA&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Thank you for giving this article a read!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mega.nz/file/XoYDyA7B#3uMXrmDWP6WUb6kKjzb5B0Zc-Qh1w5oLE2LbZ4lOzhA" rel="noopener noreferrer"&gt;Mega Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found it useful, please consider sharing this article with your other developer friends.&lt;/p&gt;

&lt;p&gt;Feel free to connect with me :)&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Thanks for reading! 🙏🏻 &lt;br&gt; Written with 💚 by &lt;a href="https://dev.to/ddebajyati"&gt;Debajyati Dey&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;&lt;a href="https://github.com/Debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tu7kfqhw7z1yzmng4ah.png" alt="My GitHub" width="40" height="39"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://www.linkedin.com/in/debajyati-dey/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femp5sh8d4fq0g89lqsia.png" alt="My LinkedIn" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://app.daily.dev/debajyatidey" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20akag0pdeq95u76k9e8.png" alt="My Daily.dev" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://peerlist.io/debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscfsnjdwyhm803f7mlv.png" alt="My Peerlist" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://x.com/ddebajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0265bz6hmdfybuw0a605.png" alt="My Twitter" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Linux users are hackers! Happy Hacking! 🐱‍💻&lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>bash</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>How to Create a Number-Guessing Game in Python</title>
      <dc:creator>Sophia Iroegbu</dc:creator>
      <pubDate>Tue, 26 Nov 2024 12:59:21 +0000</pubDate>
      <link>https://dev.to/studio1hq/how-to-create-a-number-guessing-game-in-python-3kbd</link>
      <guid>https://dev.to/studio1hq/how-to-create-a-number-guessing-game-in-python-3kbd</guid>
      <description>&lt;p&gt;Hello there! 👋&lt;/p&gt;

&lt;p&gt;In this guide, you will learn how to build a number-guessing game using basic Python concepts, such as loops, if-else statements, handling inputs, and more. This is inspired by the &lt;a href="https://roadmap.sh/projects/number-guessing-game" rel="noopener noreferrer"&gt;Number guessing game project&lt;/a&gt; in the Roadmap projects section.&lt;/p&gt;

&lt;p&gt;Let’s get started! 😎&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the game’s function
&lt;/h2&gt;

&lt;p&gt;We need to create a function that generates the random number for the player to guess, using Python’s random module.&lt;/p&gt;

&lt;p&gt;Start by importing the module, then create the function and call &lt;code&gt;random.randint()&lt;/code&gt; to generate a random number between 1 and 100. This is the number the player has to guess, and it will be stored in a variable, &lt;code&gt;number_to_guess&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, set a &lt;code&gt;guessed_correctly&lt;/code&gt; variable to &lt;code&gt;False&lt;/code&gt; so the game stops once the player guesses the right number, and set an &lt;code&gt;attempts_limit&lt;/code&gt; to make the game more challenging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;number_guessing_game&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attempts_limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;number_to_guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;guessed_correctly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
  &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, use a print statement to welcome your player with some messages and instructions on how to play the game. You can customize this to your preference.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NB: This should be within the function.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Welcome to Number guessing game&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I have selected a number from 1-100, can you guess it?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the guessing loop
&lt;/h2&gt;

&lt;p&gt;Next, we need to create the core of the game. This will manage the loop that continues until the player guesses the number correctly or reaches their guess limit.&lt;/p&gt;

&lt;p&gt;Start by using a while loop to increase the attempt count with each try. If the player doesn't guess correctly, they should be given another turn to guess as long as they haven't exceeded their limit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;attempts_limit&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;guessed_correctly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
     &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please add your guess: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
     &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, use an if-else statement to compare the player's guess with the random number and provide feedback. If the guess is lower than the correct number, print "too low." If it's too high, print "too high." If it matches the correct number, set &lt;code&gt;guessed_correctly&lt;/code&gt; to True, break the loop, and print a congratulations message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too low!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;guessed_correctly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Congratulations, you guessed the number in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;attempts&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; attempts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's add an extra layer of error handling. Users can be unpredictable, and many might try to break your program. For example, if a player decides to use a letter or a decimal number to guess, the program will stop unexpectedly. That's why we need this extra layer.&lt;/p&gt;

&lt;p&gt;Using a try-except block, we can catch such an error. The game should only accept whole numbers; if the player enters anything else, it should print an error message and prompt them again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;attempts_limit&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;guessed_correctly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please add your guess: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;

      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too low!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;guessed_correctly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Congratulations, you guessed the number in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;attempts&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; attempts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Oops! This is not a valid number, please a whole number&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that's done, we move on to the final step. If the player runs out of guesses and hasn't guessed the correct number, display a message saying "Game over" and inform them that they are out of guesses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;guessed_correctly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are out of guesses, the correct guess was &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Game over, Thanks for playing!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing your game
&lt;/h2&gt;

&lt;p&gt;Now that’s all done! Let’s test our game and see if it works. Remember to call your function at the bottom of the file to run the program.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;number_guessing_game&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytvfo0x8gpl5yf7kuua3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytvfo0x8gpl5yf7kuua3.png" alt=" " width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;
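&lt;p&gt;For reference, here is the complete function assembled from the snippets above. Call &lt;code&gt;number_guessing_game()&lt;/code&gt; at the bottom of your file to play.&lt;/p&gt;

```python
import random

def number_guessing_game(attempts_limit=7):
    # Pick the secret number and initialize the game state.
    number_to_guess = random.randint(1, 100)
    guessed_correctly = False
    attempts = 0

    print("Welcome to Number guessing game")
    print("I have selected a number from 1-100, can you guess it?")

    # Keep asking until the player wins or runs out of attempts.
    while attempts < attempts_limit and not guessed_correctly:
        try:
            guess = int(input("Please add your guess: "))
            attempts += 1

            if guess < number_to_guess:
                print("Too low!")
            elif guess > number_to_guess:
                print("Too high")
            else:
                guessed_correctly = True
                print(f"Congratulations, you guessed the number in {attempts} attempts")
        except ValueError:
            # Non-integer input: warn the player and ask again.
            print("Oops! This is not a valid number, please enter a whole number")

    if not guessed_correctly:
        print(f"You are out of guesses, the correct guess was {number_to_guess}")

    print("Game over, Thanks for playing!")
```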

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it! A simple yet fun game in Python that you can create as a beginner. I hope this helps you become more comfortable with key programming concepts like loops, conditionals, and random numbers. &lt;/p&gt;

&lt;p&gt;The source code can be found &lt;a href="https://github.com/Sophyia7/Python-Tutorials" rel="noopener noreferrer"&gt;here&lt;/a&gt;. If you prefer the video version of this guide, check it out:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/MrTWan2td28"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>python</category>
      <category>beginners</category>
    </item>
    <item>
      <title>F-Strings in Python: What They Are and How to Use Them</title>
      <dc:creator>Sophia Iroegbu</dc:creator>
      <pubDate>Tue, 27 Aug 2024 17:29:57 +0000</pubDate>
      <link>https://dev.to/studio1hq/f-strings-in-python-what-they-are-and-how-to-use-them-4bk9</link>
      <guid>https://dev.to/studio1hq/f-strings-in-python-what-they-are-and-how-to-use-them-4bk9</guid>
      <description>&lt;p&gt;F-strings (Formatted strings) in Python are an easy way to add expressions in Strings. It was first introduced in &lt;a href="https://docs.python.org/3/whatsnew/3.6.html" rel="noopener noreferrer"&gt;Python 3.6&lt;/a&gt;, making string formatting much more readable and easy.&lt;/p&gt;

&lt;p&gt;In this guide, we will understand f-strings and why they are so much better than the standard formatting method, &lt;code&gt;str.format()&lt;/code&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  What are F-strings?
&lt;/h2&gt;

&lt;p&gt;An f-string is a formatting technique that makes it much easier to mix expressions and strings; it is defined by the prefix &lt;code&gt;f&lt;/code&gt; and uses curly brackets, &lt;code&gt;{}&lt;/code&gt;, to interpolate values into the string.&lt;/p&gt;

&lt;p&gt;The usefulness of this type of string formatting lies in the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Readability:&lt;/strong&gt; F-strings let you embed expressions directly within the string, which makes the code easy to read and understand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; F-strings are faster than string formatting methods like &lt;code&gt;str.format()&lt;/code&gt; or &lt;code&gt;%&lt;/code&gt; because they avoid the overhead of method calls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy to Use:&lt;/strong&gt; With f-strings, you do not need to remember different formatting methods or symbols; you only need to prefix your string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility:&lt;/strong&gt; The curly brackets can contain any valid Python expression, not just variables.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Type safety:&lt;/strong&gt; F-strings automatically convert your expressions to strings, reducing the chance of type conversion errors.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
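&lt;p&gt;The performance point is easy to sanity-check yourself with a rough micro-benchmark using the &lt;code&gt;timeit&lt;/code&gt; module. The exact timings will vary by machine, so treat this as a sketch rather than a rigorous benchmark:&lt;/p&gt;

```python
import timeit

name, age = "Sophia", 30

# Time 100,000 runs of each formatting style.
t_fstring = timeit.timeit(lambda: f"My name is {name} and I am {age}", number=100_000)
t_format = timeit.timeit(lambda: "My name is {} and I am {}".format(name, age), number=100_000)

print(f"f-string:   {t_fstring:.4f}s")
print(f"str.format: {t_format:.4f}s")
```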

&lt;p&gt;These are a few reasons you should consider f-strings when dealing with formatting. Let's look at some code examples to show how this awesome formatting method works.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is str.format() all about?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;str.format()&lt;/code&gt; is the OG formatting method. It is still useful when you need to reuse the same format string with different values or data types, or if you are working on a project that uses Python versions earlier than 3.6.&lt;/p&gt;

&lt;p&gt;Aside from older Python projects, &lt;code&gt;str.format()&lt;/code&gt; is also useful for more advanced formatting options like padding, number formatting, and alignment, or when formatting dictionaries or objects, which makes dealing with structured data easy.&lt;/p&gt;
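&lt;p&gt;For instance, the padding, alignment, and dictionary options mentioned above look like this (the variable names and values here are just for illustration):&lt;/p&gt;

```python
data = {"name": "Sophia", "score": 91.5}

# Left-align the name in a 10-character column and right-align the
# score in an 8-character column with one decimal place.
row = "{:<10}|{:>8.1f}".format(data["name"], data["score"])
print(row)  # Sophia    |    91.5

# Unpack a dictionary straight into named format fields.
summary = "Name: {name}, Score: {score}".format(**data)
print(summary)  # Name: Sophia, Score: 91.5
```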

&lt;p&gt;While this method is useful, it is also slower because of the extra function calls involved when formatting values. Still, &lt;code&gt;str.format()&lt;/code&gt; can be just as useful as f-strings, depending on your use case.&lt;/p&gt;

&lt;p&gt;Let's look at a code example of how str.format() works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sophia&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;

&lt;span class="c1"&gt;# Using str.format() to format the string
&lt;/span&gt;&lt;span class="n"&gt;formatted_string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My name is {} and I am {} years old.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;formatted_string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your response will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;My name is Sophia and I am 30 years old.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Basic Syntax of F-string
&lt;/h3&gt;

&lt;p&gt;As stated earlier, f-strings are prefixed with &lt;code&gt;f&lt;/code&gt; before the string, and inside the string you add the variable or expression within curly brackets, &lt;code&gt;{}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's a basic syntax that shows how variables can be defined within the f-string.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sophia&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;country&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Nigeria&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;greetings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, I am &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;capitalize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; and I am &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;country&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;greetings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your response will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I am Sophia and I am from Nigeria
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding Expressions within F-strings
&lt;/h3&gt;

&lt;p&gt;Now, let’s look at adding expressions inside f-strings. You can also add the expressions within curly brackets.&lt;/p&gt;

&lt;p&gt;Here’s a code example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The sum of &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; and &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; is &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your response will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The sum of 10 and 24 is 34
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Formatting Numbers within F-strings
&lt;/h3&gt;

&lt;p&gt;Let’s take another look at how we can use f-strings. You can format numbers within the curly braces.&lt;/p&gt;

&lt;p&gt;Here's a code example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;500.287018&lt;/span&gt; 

&lt;span class="n"&gt;formatted_value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; in two decimal places is &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;formatted_value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your response will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The 500.287018 in two decimal places is 500.29
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using F-strings in multiple lines
&lt;/h3&gt;

&lt;p&gt;Lastly, let’s see how f-strings can be used across multiple lines using triple quotes.&lt;/p&gt;

&lt;p&gt;Here's a code example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Mercedes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;year&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2009&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Name : &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Year the car was made: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;year&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
Name : Mercedes
Year the car was made: 2009

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Voilà! Congratulations, you now understand what f-strings are and why they are a great formatting method for your projects.&lt;/p&gt;

&lt;p&gt;In this guide, I focused on formatting strings and numbers, but f-strings can be used to format all data types in Python. &lt;/p&gt;
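To make that concrete, here is a small sketch (values invented for illustration) showing f-strings applied to a few other types:

```python
from datetime import date

# F-strings accept any object; a format spec after the colon is optional.
today = date(2024, 8, 5)
items = ["pen", "book"]

print(f"Date: {today:%Y-%m-%d}")   # date with a strftime-style spec
print(f"Ratio: {0.1234:.1%}")      # float rendered as a percentage
print(f"List: {items}")            # falls back to the object's str()
print(f"Padded: {42:>6}")          # integer right-aligned in 6 columns
```

The same colon-plus-spec pattern works for any type that implements `__format__`.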

&lt;p&gt;If you prefer the video version of this guide, check it out:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/k6ZEKNHQIuo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>python</category>
      <category>beginners</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>I Wrote A Batch Script to Enhance My Workflow on Command Prompt</title>
      <dc:creator>Debajyati Dey</dc:creator>
      <pubDate>Mon, 12 Aug 2024 19:34:47 +0000</pubDate>
      <link>https://dev.to/studio1hq/i-wrote-a-batch-script-to-enhance-my-workflow-on-command-prompt-2476</link>
      <guid>https://dev.to/studio1hq/i-wrote-a-batch-script-to-enhance-my-workflow-on-command-prompt-2476</guid>
      <description>&lt;p&gt;So yess, wonderful readers, I just wrote a batch script, not bash script!&lt;/p&gt;

&lt;p&gt;Batch is the language in which scripts for the Windows Command Prompt (cmd.exe) are written. &lt;/p&gt;

&lt;p&gt;What is special about this script?&lt;/p&gt;

&lt;p&gt;That is what I'm going to discuss.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Needed The Script (AKA The Problem)
&lt;/h2&gt;

&lt;p&gt;While WSL is a powerful tool for Windows users who want to leverage Linux commands and environments, it's often cumbersome to open a WSL terminal every time you want to run a single command while you are actually working in PowerShell or Command Prompt.&lt;/p&gt;

&lt;p&gt;I know what you will say now: "Use the wsl CLI, bro, that's it!"&lt;br&gt;
But the CLI alone won't always have you covered! &lt;/p&gt;

&lt;p&gt;Suppose you are used to some commands in your Linux environment that are not Linux binaries at all; they are functions and aliases that are sourced when your shell is initialized. &lt;/p&gt;

&lt;p&gt;For example, this (very convenient) bash function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;extract &lt;span class="o"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[1;31mNo file specified&lt;/span&gt;&lt;span class="se"&gt;\n\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt; 1&amp;gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[1;31mFile '%s' not found&lt;/span&gt;&lt;span class="se"&gt;\n\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 1&amp;gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;file&lt;/span&gt;&lt;span class="p"&gt;#*.&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;extractor&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ext&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
        &lt;/span&gt;tar.bz2 &lt;span class="p"&gt;|&lt;/span&gt; tbz2 &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"tar"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xvf"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        tar.gz &lt;span class="p"&gt;|&lt;/span&gt; tgz&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"tar"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"xzvf"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        tar.xz&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"tar"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Jxvf"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        bz2&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bunzip2"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        rar&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"unar"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-d"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        gz&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gunzip"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        zip&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"unzip"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        xz&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"unxz"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        7z&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"7z"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"x"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        Z&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nv"&gt;extractor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"uncompress"&lt;/span&gt;
        &lt;span class="p"&gt;;;&lt;/span&gt;
        &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[1;31mUnsupported file type: %s&lt;/span&gt;&lt;span class="se"&gt;\n\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 1&amp;gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;return &lt;/span&gt;1
        &lt;span class="p"&gt;;;&lt;/span&gt;
    &lt;span class="k"&gt;esac&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$extractor&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;$options&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[1;31mError extracting '%s'&lt;/span&gt;&lt;span class="se"&gt;\n\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 1&amp;gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I can extract (almost) any kind of archive file with this &lt;code&gt;extract&lt;/code&gt; function, which combines all the necessary tools. One function to extract them all. This function and one other live in the &lt;code&gt;.bash_functions&lt;/code&gt; file in my &lt;code&gt;$HOME&lt;/code&gt; directory in &lt;strong&gt;WSL&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Or take these aliases:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;gla&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'git log --oneline --graph --all'&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;la&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'exa -a --icons --group-directories-first'&lt;/span&gt; &lt;span class="c"&gt;# exa needs to be preinstalled&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;ll&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'exa -alhF --icons'&lt;/span&gt; &lt;span class="c"&gt;# exa needs to be preinstalled&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;cdf&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'cd $(fd -t d | fzf)'&lt;/span&gt; &lt;span class="c"&gt;# fd-find and fzf needs to be preinstalled &lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;fvim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'nvim $(fzf --preview="bat --color=always {}" --bind shift-up:preview-page-up,shift-down:preview-page-down)'&lt;/span&gt; &lt;span class="c"&gt;# nvim, bat and fzf needs to be preinstalled&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;ff&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'fzf --preview=less --bind shift-up:preview-page-up,shift-down:preview-page-down)'&lt;/span&gt; &lt;span class="c"&gt;# fzf and less needs to be preinstalled&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;These are (I'd say) my most notable aliases, which live in the &lt;code&gt;.bash_aliases&lt;/code&gt; file in my &lt;code&gt;$HOME&lt;/code&gt; directory in &lt;strong&gt;WSL&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The wsl CLI is great for running Linux binaries installed in the WSL system from outside the Linux environment, for example in Command Prompt. &lt;/p&gt;

&lt;p&gt;But none of its options lets you run a custom command (a shell function or alias) outside it. &lt;/p&gt;
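As a quick illustration of why (a minimal sketch with a hypothetical stand-in function, runnable in any bash): functions and aliases live only in the shell that sourced them, and a freshly spawned shell, which is what the wsl launcher gives you, knows nothing about them.

```shell
# Define a stand-in function in the current shell (hypothetical example)
extract() { echo "extracting $1"; }

extract demo.tar.gz        # works: the function exists in this shell

# A brand-new child shell does not inherit it (no export -f, no init file):
bash -c 'extract demo.tar.gz' 2>/dev/null || echo "not found in child shell"
```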
&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1st Approach
&lt;/h3&gt;

&lt;p&gt;I thought, "Let's try to replicate the commands in Windows..."&lt;br&gt;
But I soon realized this is highly inconvenient. Although PowerShell has better syntax and configuration options, and its functions are written almost the same way as in bash, replicating the custom commands (those functions and aliases) would be really difficult &amp;amp; would require installing the needed software on Windows.&lt;/p&gt;

&lt;p&gt;And for cmd, just don't even think about it.&lt;br&gt;
Also, PowerShell and cmd don't support aliases the way bash/zsh do. Lastly, I couldn't find any init script like &lt;code&gt;.bashrc&lt;/code&gt; for Command Prompt (probably no such thing exists; if it does, please let me know).&lt;/p&gt;

&lt;p&gt;Now you may say, "Hey, if you want aliases, why not just use doskey in cmd?"&lt;br&gt;
&lt;strong&gt;Again&lt;/strong&gt;, &lt;strong&gt;DOSKEY&lt;/strong&gt; is another highly inconvenient invention of the Windows devs. It is a command that creates macros for the interactive CLI. It is inconvenient because: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;DOSKEY&lt;/strong&gt; macros cannot be used on either side of a pipe: Both &lt;code&gt;someMacro|findstr '^'&lt;/code&gt; and &lt;code&gt;dir|someMacro&lt;/code&gt; fail.&lt;/li&gt;
&lt;li&gt;They cannot be used within FOR /F commands: &lt;code&gt;for /f %A in ('someMacro') do ...&lt;/code&gt; fails.&lt;/li&gt;
&lt;li&gt;Macros written with &lt;strong&gt;DOSKEY&lt;/strong&gt; must be wrapped with % (e.g. if the name of the macro is &lt;code&gt;grep&lt;/code&gt;, I would need to run it as %grep%).
For more info, refer to this Super User answer.
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://superuser.com/questions/560519/how-to-set-an-alias-in-windows-command-line/560558#560558" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsuperuser.com%2FContent%2FSites%2Fsuperuser%2FImg%2Fapple-touch-icon%402.png%3Fv%3De869e4459439" height="316" class="m-0" width="316"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://superuser.com/questions/560519/how-to-set-an-alias-in-windows-command-line/560558#560558" rel="noopener noreferrer" class="c-link"&gt;
            How to set an alias in Windows Command Line? - Super User
          &lt;/a&gt;
        &lt;/h2&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsuperuser.com%2FContent%2FSites%2Fsuperuser%2FImg%2Ffavicon.ico%3Fv%3D4852d6fb3f5d" width="32" height="32"&gt;
          superuser.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;So yeah, I stopped there after researching it. &lt;/p&gt;

&lt;h3&gt;
  
  
  2nd Approach
&lt;/h3&gt;

&lt;p&gt;Here comes the good part. I realized that replicating commands would be a waste of time. So instead, let me find a way to access every single command of my WSL setup from Command Prompt, working exactly the same way. If there is no init script, I must make a command that can execute the bash functions and aliases along with all the binaries. &lt;/p&gt;

&lt;p&gt;So, I decided to write a batch script. &lt;/p&gt;

&lt;h4&gt;
  
  
  Why not a PowerShell script?
&lt;/h4&gt;

&lt;p&gt;Because PowerShell is slower to start than cmd. Even though it would be easier to write the script in PowerShell, since as far as I know I need to write only 1 script &amp;amp; probably only 1 function, I think writing a script for cmd is worth the try. And along the way I will learn something new!&lt;br&gt;
Yoohoo!&lt;/p&gt;

&lt;h4&gt;
  
  
  Steps
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Created a directory 'bin' in my user home directory and added its path to the environment variables.&lt;/li&gt;
&lt;li&gt;Created a file &lt;code&gt;runbash.bat&lt;/code&gt; inside bin. This will be the script.&lt;/li&gt;
&lt;li&gt;The syntax of batch is otherworldly to me. Hold on, let's go through the code of the script step by step; I'll explain it as best I can. &lt;/li&gt;
&lt;li&gt;1st line - &lt;code&gt;@echo off&lt;/code&gt;. It prevents the commands themselves from being echoed to the console while the script runs.&lt;/li&gt;
&lt;li&gt;2nd line - &lt;code&gt;setlocal enabledelayedexpansion&lt;/code&gt;. This is for handling variables dynamically within loops. It enables delayed expansion, so variables are re-evaluated at execution time rather than at parse time (using !VAR! instead of %VAR%).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;set CMDLINE=&lt;/code&gt; initializes an empty variable named &lt;code&gt;CMDLINE&lt;/code&gt;, which will be used to accumulate the command line arguments.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;:loop&lt;/code&gt; marks the beginning of a loop.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;if "%~1"=="" goto end&lt;/code&gt; checks if the first argument (&lt;code&gt;%~1&lt;/code&gt;) is empty. If it is, the script jumps to the &lt;code&gt;:end&lt;/code&gt; label and stops processing arguments.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;set CMDLINE=!CMDLINE! %~1&lt;/code&gt; appends the current argument (&lt;code&gt;%~1&lt;/code&gt;) to the &lt;code&gt;CMDLINE&lt;/code&gt; variable, building the command line.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;shift&lt;/code&gt; shifts the command line arguments to the left, so &lt;code&gt;%2&lt;/code&gt; becomes &lt;code&gt;%1&lt;/code&gt;, &lt;code&gt;%3&lt;/code&gt; becomes &lt;code&gt;%2&lt;/code&gt;, and so on. This effectively removes the first argument and allows the loop to process the next one.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;goto loop&lt;/code&gt; repeats the loop to process the next argument.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;:end&lt;/code&gt; marks the end of the loop, where all arguments have been processed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bash -ic "!CMDLINE!"&lt;/code&gt; executes the accumulated command (&lt;code&gt;CMDLINE&lt;/code&gt;) in the default Bash shell, i.e. the bash of the default WSL distro. The &lt;code&gt;-i&lt;/code&gt; (interactive) option ensures it runs in an interactive shell, so the aliases and functions from the init files are loaded, and the &lt;code&gt;-c&lt;/code&gt; option tells Bash to execute the command string.&lt;/li&gt;
&lt;/ol&gt;
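A note on that last step: the `-i` flag is doing the heavy lifting, because bash only sources `~/.bashrc` (and with it your functions and aliases) in interactive shells. A small sketch of the difference, using bash's `$-` flag variable:

```shell
# $- lists the current shell's option flags; "i" means interactive.
bash -c 'echo "flags: $-"'               # no "i": init files are skipped
bash -ic 'echo "flags: $-"' 2>/dev/null  # has "i": ~/.bashrc is sourced
```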

&lt;p&gt;Putting it all together, the script looks like this:&lt;/p&gt;

&lt;h4&gt;
  
  
  The Script
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight batchfile"&gt;&lt;code&gt;@echo &lt;span class="na"&gt;off&lt;/span&gt;
&lt;span class="nb"&gt;setlocal&lt;/span&gt; &lt;span class="na"&gt;enabledelayedexpansion&lt;/span&gt;
&lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="kd"&gt;CMDLINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="nl"&gt;:loop&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="err"&gt;%&lt;/span&gt;&lt;span class="s2"&gt;~1"&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="k"&gt;goto&lt;/span&gt; &lt;span class="kd"&gt;end&lt;/span&gt;
&lt;span class="kd"&gt;set&lt;/span&gt; &lt;span class="kd"&gt;CMDLINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;!CMDLINE!&lt;/span&gt; &lt;span class="err"&gt;%&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="nb"&gt;shift&lt;/span&gt;
&lt;span class="k"&gt;goto&lt;/span&gt; &lt;span class="kd"&gt;loop&lt;/span&gt;
&lt;span class="nl"&gt;:end&lt;/span&gt;
&lt;span class="kd"&gt;bash&lt;/span&gt; &lt;span class="na"&gt;-ic &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;!CMDLINE!&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, that's it! Now I have a new command in my Windows environment that enables me to run any custom command from my WSL environment right within cmd! How cool is that!&lt;br&gt;
Look at that, it works like a charm!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzaqktnnow39ikbuer61j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzaqktnnow39ikbuer61j.png" alt="successfully run the bash alias outside wsl" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;So the second approach was actually a good decision. I saved a lot of time and effort. &lt;/p&gt;

&lt;p&gt;Now this command turbocharges my productivity. &lt;/p&gt;

&lt;p&gt;Also, my default WSL distro is Void Linux, not Ubuntu, so the bash shell initialization doesn't take much time and all the commands run really fast with &lt;code&gt;runbash&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;As a bonus, I don't need to open a separate &lt;code&gt;git-bash&lt;/code&gt; instance or &lt;code&gt;wsl&lt;/code&gt; just to run some bash files. I can write bash scripts on Windows and execute them from Command Prompt without opening WSL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/Fq6gqi9Ubog?si=QxfjwN2T2cVB0Dcp" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d3oux39w3hqc358erpb.png" alt="Running the basic bash script of eldenring from NetworkChuck's bash tutorial" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found this post helpful and it added some value to your time and energy, please show some love by liking the article and sharing it with your friends.&lt;/p&gt;

&lt;p&gt;Feel free to connect with me :)&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Thanks for reading! 🙏🏻 &lt;br&gt; Written with 💚 by &lt;a href="https://dev.to/ddebajyati"&gt;Debajyati Dey&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;&lt;a href="https://github.com/Debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tu7kfqhw7z1yzmng4ah.png" alt="My GitHub" width="40" height="39"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://www.linkedin.com/in/debajyati-dey/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femp5sh8d4fq0g89lqsia.png" alt="My LinkedIn" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://app.daily.dev/debajyatidey" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20akag0pdeq95u76k9e8.png" alt="My Daily.dev" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://peerlist.io/debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscfsnjdwyhm803f7mlv.png" alt="My Peerlist" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://x.com/ddebajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0265bz6hmdfybuw0a605.png" alt="My Twitter" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Happy Coding 🧑🏽‍💻👩🏽‍💻! Have a nice day ahead! 🚀&lt;/p&gt;

</description>
      <category>microsoft</category>
      <category>productivity</category>
      <category>cli</category>
      <category>bash</category>
    </item>
    <item>
      <title>8 Developer Tools You Should Try in 2024</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Mon, 05 Aug 2024 19:57:33 +0000</pubDate>
      <link>https://dev.to/studio1hq/8-developer-tools-you-should-try-in-2024-b8c</link>
      <guid>https://dev.to/studio1hq/8-developer-tools-you-should-try-in-2024-b8c</guid>
      <description>&lt;p&gt;We, Developers, always try to use tools that can streamline our workflow and boost our productivity.&lt;/p&gt;

&lt;p&gt;I've searched and picked out 8 amazing tools that I think every developer should know about. These tools will help you to be productive and make your work easier as a developer.&lt;/p&gt;

&lt;p&gt;Now, I know what you're thinking - "Another list of tools? Really?" But trust me, this one's different!&lt;/p&gt;

&lt;p&gt;Whether you've been coding for years or just starting out, I'm sure you'll find something in this list that will change how you work. Some of these tools might surprise you – they sure surprised me when I first found them!&lt;/p&gt;

&lt;p&gt;Ready? Let's do this!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;a href="https://www.webcrumbs.org/" rel="noopener noreferrer"&gt;Webcrumbs&lt;/a&gt; - Frontend AI Copilot
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrvha5rb2wuhih8ymfz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrvha5rb2wuhih8ymfz4.png" alt="Webcrumbs Landing Page" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.webcrumbs.org/" rel="noopener noreferrer"&gt;Webcrumbs&lt;/a&gt; is an open-source plugin builder and plugin ecosystem that's in the making, empowering developers to build web applications more efficiently and consistently.&lt;/p&gt;

&lt;p&gt;It provides a framework to create reusable, standardized, and accessible building blocks for web development, allowing developers to focus on creating unique features without interfering with the rest of their code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.webcrumbs.org/" rel="noopener noreferrer"&gt;Webcrumbs&lt;/a&gt; seamlessly integrates with various web development frameworks to enhance your coding processes and improve overall application quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1722867436572%2F546bb0a9-9be9-4f63-82ba-1a846a1d930e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1722867436572%2F546bb0a9-9be9-4f63-82ba-1a846a1d930e.png" alt="Webcrumbs Frontend AI" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The latest addition to their suite, &lt;a href="https://www.producthunt.com/posts/frontendai-by-webcrumbs" rel="noopener noreferrer"&gt;Frontend AI&lt;/a&gt;, takes web component development to the next level.&lt;/p&gt;

&lt;p&gt;It's an AI-powered tool that generates custom web components based on text descriptions or image inputs.&lt;/p&gt;

&lt;p&gt;This feature streamlines the process of creating new components, making it faster and more intuitive for developers to build complex interfaces.&lt;/p&gt;

&lt;p&gt;This is very handy for developers of all skill levels, from beginners to experts.&lt;/p&gt;

&lt;p&gt;✅ Generate web components by simply describing what you want or uploading an image.&lt;/p&gt;

&lt;p&gt;✅ Preview generated components in real-time before integrating them into your project.&lt;/p&gt;

&lt;p&gt;✅ Customize components by iterating on your prompts or adjusting AI-generated code.&lt;/p&gt;

&lt;p&gt;✅ No login is required to try out the &lt;a href="https://www.webcrumbs.org/frontend-ai" rel="noopener noreferrer"&gt;Frontend AI&lt;/a&gt; feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1722802173396%2F57e9b334-cd0c-4935-841b-97d3759ce710.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1722802173396%2F57e9b334-cd0c-4935-841b-97d3759ce710.png" alt="Generate UI Components with FrontenedAI" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most powerful aspects of &lt;a href="https://www.producthunt.com/posts/frontendai-by-webcrumbs" rel="noopener noreferrer"&gt;Frontend AI&lt;/a&gt; is its ability to refine and customize components through additional prompts. You can:&lt;/p&gt;

&lt;p&gt;✅ Iterate on the initial result by adding more specific prompts&lt;/p&gt;

&lt;p&gt;✅ Customize colors to match your brand or design preferences&lt;/p&gt;

&lt;p&gt;✅ Adjust fonts to align with your typography guidelines&lt;/p&gt;

&lt;p&gt;✅ Fine-tune layout and spacing according to your needs&lt;/p&gt;

&lt;p&gt;For example, after generating a basic button component, you could add prompts like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Replace the background with a nice gradient"&lt;/li&gt;
&lt;li&gt;"Add icons"&lt;/li&gt;
&lt;li&gt;Or, my favorite, "Let’s make it weirder"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI will then update the component based on these additional instructions, allowing you to quickly create precisely the component you need.&lt;/p&gt;

&lt;p&gt;They are live on &lt;a href="https://www.producthunt.com/posts/frontendai-by-webcrumbs" rel="noopener noreferrer"&gt;Product Hunt&lt;/a&gt;; feel free to support them here: &lt;a href="https://www.producthunt.com/posts/frontendai-by-webcrumbs" rel="noopener noreferrer"&gt;https://www.producthunt.com/posts/frontendai-by-webcrumbs&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  2. &lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces.app&lt;/a&gt; - Your Workflow Copilot
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5m85vhmzpx61zsk7w282.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5m85vhmzpx61zsk7w282.png" alt="Pieces Landing page" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces&lt;/a&gt; is an innovative AI-driven developer tool designed to revolutionize your coding workflow through intelligent snippet management, context-aware copilot interactions, and proactive surfacing of relevant materials.&lt;/p&gt;

&lt;p&gt;It improves your workflow, and your overall development experience while maintaining the privacy and security of your work with a completely offline approach to AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces&lt;/a&gt; offer a suite of features that enhance productivity, including AI-powered code snippet organization, contextualized copilot interactions, and intelligent surfacing of useful resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z5fr2g1znvniw4szdg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z5fr2g1znvniw4szdg0.png" alt="Screenshot highlighting AI features for developers" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These capabilities ensure that your coding workflow remains efficient, organized, and tailored to your needs.&lt;/p&gt;

&lt;p&gt;You can visit their &lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;website&lt;/a&gt; to download &lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces&lt;/a&gt; and start experiencing a more streamlined, AI-enhanced coding environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces&lt;/a&gt; also provides several standout features to boost your development workflow:&lt;/p&gt;

&lt;p&gt;✅ Access to 25+ LLMs with both cloud and on-device models for versatile AI assistance.&lt;/p&gt;

&lt;p&gt;✅ AI-assisted tagging and categorization for efficient code snippet management.&lt;/p&gt;

&lt;p&gt;✅ Complete privacy with offline, on-device AI models to keep your code secure.&lt;/p&gt;

&lt;p&gt;✅ Ability to extract code snippets from screenshots for easy reference.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces&lt;/a&gt;, we can focus on writing code while the AI assistant handles organization, retrieval, and contextual support.&lt;/p&gt;

&lt;p&gt;This approach significantly reduces cognitive load and improves overall productivity, allowing developers to maintain their flow and produce higher-quality code more efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. &lt;a href="https://www.warp.dev/" rel="noopener noreferrer"&gt;Warp&lt;/a&gt; - AI-powered Terminal
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy61vi2lvenj1wrw1rj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy61vi2lvenj1wrw1rj8.png" alt="A user interface for Warp" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.warp.dev/" rel="noopener noreferrer"&gt;Warp&lt;/a&gt; is an open-source Rust-based terminal. It's blazingly fast, user-friendly, and packed with features that enhance developer productivity.&lt;/p&gt;

&lt;p&gt;Their most popular feature is &lt;a href="https://docs.warp.dev/features/warp-ai/ai-command-suggestions" rel="noopener noreferrer"&gt;AI Command Search&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yy9hlnkly6a8ucjk4ux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yy9hlnkly6a8ucjk4ux.png" alt="Ai Command search" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you type, &lt;a href="https://www.warp.dev/" rel="noopener noreferrer"&gt;Warp&lt;/a&gt; can suggest commands based on natural language descriptions, making it easier for both beginners and experienced developers to find the right commands quickly.&lt;/p&gt;

&lt;p&gt;As developers, we certainly need this to enhance our daily productivity and streamline our workflow.&lt;/p&gt;

&lt;p&gt;It has a lot of cool features such as:&lt;/p&gt;

&lt;p&gt;✅ AI-powered command search and suggestions.&lt;/p&gt;

&lt;p&gt;✅ Built-in command palette for quick access to actions.&lt;/p&gt;

&lt;p&gt;✅ Smart input mode with syntax highlighting and autocompletion.&lt;/p&gt;

&lt;p&gt;✅ Customizable themes and layouts.&lt;/p&gt;

&lt;p&gt;You can read the Warp documentation at &lt;a href="http://docs.warp.dev" rel="noopener noreferrer"&gt;docs.warp.dev&lt;/a&gt; to get started.&lt;/p&gt;

&lt;p&gt;You can also watch the demo below to understand &lt;a href="https://www.warp.dev/blog/how-warp-works" rel="noopener noreferrer"&gt;how it works&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/34INSNevPOk"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.warp.dev/" rel="noopener noreferrer"&gt;Warp&lt;/a&gt; has gained significant popularity among developers, with a growing user base and positive reviews.&lt;/p&gt;

&lt;p&gt;It's particularly useful for developers who want a modern, feature-rich terminal experience with AI-powered assistance.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. &lt;a href="https://www.raycast.com/" rel="noopener noreferrer"&gt;Raycast&lt;/a&gt;- Supercharged Productivity Tool
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fit1dlpo6r25r44cjo0v6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fit1dlpo6r25r44cjo0v6.png" alt="Raycast Landing page" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.raycast.com/" rel="noopener noreferrer"&gt;Raycast&lt;/a&gt; is a productivity tool that aims to streamline workflow and boost efficiency for developers and other professionals.&lt;/p&gt;

&lt;p&gt;It's a powerful launcher and command palette for macOS, designed to replace and enhance the functionality of Spotlight.&lt;/p&gt;

&lt;p&gt;Some of &lt;a href="https://www.raycast.com/" rel="noopener noreferrer"&gt;Raycast&lt;/a&gt;'s standout features include:&lt;/p&gt;

&lt;p&gt;✅ Quick application launcher and file search.&lt;/p&gt;

&lt;p&gt;✅ Customizable shortcuts for frequent actions.&lt;/p&gt;

&lt;p&gt;✅ Built-in calculator, unit converter, and other utilities.&lt;/p&gt;

&lt;p&gt;✅ Scriptable extensions in various languages (JavaScript, Swift, AppleScript).&lt;/p&gt;

&lt;p&gt;✅ Integration with popular developer tools and services.&lt;/p&gt;

&lt;p&gt;✅ AI-powered natural language processing for commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.raycast.com/" rel="noopener noreferrer"&gt;Raycast&lt;/a&gt; is currently available for macOS only, catering primarily to Apple ecosystem users.&lt;/p&gt;

&lt;p&gt;However, its impact on productivity has made it popular among developers, designers, and other professionals who rely heavily on their Macs, and it has gained significant traction in the developer community.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. &lt;a href="https://strapi.io/" rel="noopener noreferrer"&gt;Strapi&lt;/a&gt;- Open Source Headless CMS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse1943bliekmb2ywx7b2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse1943bliekmb2ywx7b2.png" alt="Strapi webpage" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://strapi.io/" rel="noopener noreferrer"&gt;Strapi&lt;/a&gt; is an open-source headless CMS that gives developers the freedom to choose their favorite tools and frameworks while also allowing content editors to easily manage and distribute their content.&lt;/p&gt;

&lt;p&gt;It rethinks content management from the ground up, making it more flexible and developer-friendly for both engineers and editors.&lt;/p&gt;

&lt;p&gt;It has a lot of exciting features:&lt;/p&gt;

&lt;p&gt;✅ It offers a customizable admin panel that content managers can use to create, edit, and manage content.&lt;/p&gt;

&lt;p&gt;✅ Strapi provides a powerful API out of the box, allowing developers to fetch content for any front-end application.&lt;/p&gt;

&lt;p&gt;✅ It supports multiple databases, including SQLite, PostgreSQL, and MySQL.&lt;/p&gt;

&lt;p&gt;You can explore the &lt;a href="https://docs.strapi.io/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to see why it's creating such a buzz!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqse8saw7xxnid56jlzna.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqse8saw7xxnid56jlzna.gif" alt="Strapi GIF" width="760" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also allows you to manage content types, user roles, and permissions efficiently, ensuring your applications are flexible and scalable.&lt;/p&gt;

&lt;p&gt;This is very handy for developers working on both small projects and large-scale enterprise applications.&lt;/p&gt;

&lt;p&gt;✅ Create and manage custom content types.&lt;/p&gt;

&lt;p&gt;✅ Define relationships between content types.&lt;/p&gt;

&lt;p&gt;✅ Set up user roles and permissions.&lt;/p&gt;

&lt;p&gt;✅ Use plugins to extend functionality.&lt;/p&gt;

&lt;p&gt;Strapi offers SDKs and integrations for various technologies including JavaScript, React, Vue, Angular, and more.&lt;/p&gt;
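&lt;p&gt;As a minimal sketch of what fetching content from Strapi's REST API can look like (the local base URL and the &lt;code&gt;articles&lt;/code&gt; collection name here are illustrative assumptions, not part of any required setup):&lt;/p&gt;

```javascript
// Sketch: querying a local Strapi instance's REST API.
// The base URL and "articles" collection are assumptions for illustration.
const STRAPI_URL = "http://localhost:1337";

// Build a REST query URL for a collection, with optional query parameters.
function buildQuery(collection, params = {}) {
  const search = new URLSearchParams(params).toString();
  return `${STRAPI_URL}/api/${collection}${search ? "?" + search : ""}`;
}

// Fetch all articles with relations populated, newest first.
async function fetchArticles() {
  const url = buildQuery("articles", { populate: "*", sort: "publishedAt:desc" });
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Strapi returned ${res.status}`);
  const { data } = await res.json(); // Strapi wraps results in { data, meta }
  return data;
}
```

Any front end can consume the same endpoint; only the URL-building and response unwrapping shown above are Strapi-specific.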

&lt;p&gt;They have 62.8k+ stars on GitHub and 380+ releases, so they are constantly evolving and improving.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. &lt;a href="https://www.gitpod.io/" rel="noopener noreferrer"&gt;Gitpod&lt;/a&gt;- Cloud-based IDE
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ncfuomnd3bb33vb0n7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ncfuomnd3bb33vb0n7f.png" alt="Gitpod landing page" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitpod.io/" rel="noopener noreferrer"&gt;Gitpod&lt;/a&gt; is a cloud-based integrated development environment (IDE) that enables developers to quickly spin up fresh, automated dev environments for their projects directly from their Git repositories.&lt;/p&gt;

&lt;p&gt;It's revolutionizing the way developers work by providing instant, ready-to-code workspaces in the browser, eliminating the need for local setup and configuration.&lt;/p&gt;

&lt;p&gt;Key features of &lt;a href="https://www.gitpod.io/" rel="noopener noreferrer"&gt;Gitpod&lt;/a&gt; include:&lt;/p&gt;

&lt;p&gt;✅ Instant, disposable dev environments that can be launched from any Git repository.&lt;/p&gt;

&lt;p&gt;✅ Pre-configured workspaces using &lt;code&gt;.gitpod.yml&lt;/code&gt; files for automated setup.&lt;/p&gt;

&lt;p&gt;✅ Integration with popular version control platforms like GitHub, GitLab, and Bitbucket.&lt;/p&gt;

&lt;p&gt;✅ Collaborative coding with features like shared workspaces and live pair programming.&lt;/p&gt;
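&lt;p&gt;A minimal &lt;code&gt;.gitpod.yml&lt;/code&gt; might look like this (the image, commands, and port below are illustrative examples, not a required configuration):&lt;/p&gt;

```yaml
# Illustrative .gitpod.yml — image, tasks, and port are example values
image: gitpod/workspace-full
tasks:
  - init: npm install      # runs once, when the workspace is first created
    command: npm run dev   # runs on every workspace start
ports:
  - port: 3000
    onOpen: open-preview   # open the dev server in a preview pane
```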

&lt;p&gt;&lt;a href="https://www.gitpod.io/" rel="noopener noreferrer"&gt;Gitpod&lt;/a&gt; provides a powerful, VS Code-based IDE in the browser, complete with extensions, terminal access, and debugging capabilities.&lt;/p&gt;

&lt;p&gt;This allows developers to work on their projects from anywhere, on any device.&lt;/p&gt;

&lt;p&gt;They have 12.6k+ stars on &lt;a href="https://github.com/gitpod-io/gitpod" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and are constantly evolving their platform to improve the developer experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. &lt;a href="https://replexica.com/" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; - AI-powered i18n toolkit for React
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tlqs3joqhds810b21me.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tlqs3joqhds810b21me.png" alt="Replexica landing page" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This open-source project is gaining momentum fast, yet many developers still don't know about it. &lt;a href="https://github.com/replexica/replexica" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; can help you build multilingual user interfaces 100x faster, with AI-powered localization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://replexica.com/" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; is designed for dev teams who want to ship multilingual products without the hassle. It's not just another translation tool; it's a complete localization platform that integrates seamlessly with your development workflow.&lt;/p&gt;

&lt;p&gt;The toolkit consists of two main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Replexica CLI: An open-source command-line tool for managing translations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replexica API: A cloud-based AI localization engine that leverages Large Language Models (LLMs) from OpenAI, Anthropic, and Mistral for content processing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://replexica.com/" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; can help with five key aspects of your product:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Landing pages: Localize your static content with ease&lt;/li&gt;
&lt;li&gt;Blog content: Support for Markdown and frontmatter out of the box&lt;/li&gt;
&lt;li&gt;User interfaces: Seamless integration with popular i18n frameworks&lt;/li&gt;
&lt;li&gt;Product emails: Keep your communications multilingual&lt;/li&gt;
&lt;li&gt;Real-time API: Perfect for multilingual chats, next-gen email clients, and comment systems&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can get started quickly with Replexica using pnpm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;// &lt;span class="nb"&gt;install
&lt;/span&gt;pnpm add replexica @replexica/sdk


// login to Replexica API.
pnpm replexica auth &lt;span class="nt"&gt;--login&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/replexica/replexica" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; supports various i18n formats, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;JSON-free Replexica compiler format&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;.md files for Markdown content&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Legacy JSON and YAML-based formats&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a quick example of using the &lt;a href="https://github.com/replexica/replexica" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; SDK for real-time localization:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ReplexicaEngine&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@replexica/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;replexica&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReplexicaEngine&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-api-key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;localizedContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;replexica&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;localize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;greeting&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello, world!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;sourceLocale&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;en&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;targetLocale&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fr&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;localizedContent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// { greeting: 'Bonjour, le monde!' }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://replexica.com/" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; supports 42 languages out of the box, so you're not limited to just a few options. Whether you need Arabic, Zulu, or anything in between, Replexica has you covered.&lt;/p&gt;

&lt;p&gt;As a GitHub Technology Partner, Replexica provides easy integrations with GitHub. All localization can happen in your CI/CD pipeline. Here's a simple GitHub Actions example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replexica/replexica@latest&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;api-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.REPLEXICA_API_KEY }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://replexica.com/" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; is built on enterprise-grade infrastructure, leveraging Cloudflare for global distribution and reliability. This means you get fast, secure, and scalable localization, no matter where your users are.&lt;/p&gt;

&lt;p&gt;The config is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"locale"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"targets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"es"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"fr"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"de"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ja"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"buckets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"locales/[locale].json"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"content/[locale]/blog"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"markdown"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup tells Replexica to translate from English to Spanish, French, German, and Japanese, handling both JSON files for UI strings and Markdown files for blog content.&lt;/p&gt;

&lt;p&gt;By leveraging AI and integrating deeply with development workflows, Replexica allows teams to focus on building great products, not managing translations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/replexica/replexica" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; has also gained significant traction in the developer community, boasting 1k+ stars on GitHub.&lt;/p&gt;

&lt;p&gt;If you're tired of the traditional, slow, and error-prone localization process, give &lt;a href="https://replexica.com/" rel="noopener noreferrer"&gt;Replexica&lt;/a&gt; a try. It's not just faster; it's smarter. And in the fast-paced world of software development, that makes all the difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. &lt;a href="https://mintlify.com/" rel="noopener noreferrer"&gt;Mintlify&lt;/a&gt; - Documentation for Developers
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo69vt2luc2wqbxbrjzrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo69vt2luc2wqbxbrjzrv.png" alt="Mintlify webpage" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mintlify.com/" rel="noopener noreferrer"&gt;Mintlify&lt;/a&gt; is a documentation platform that simplifies the process of creating and maintaining beautiful, user-friendly documentation for software projects.&lt;/p&gt;

&lt;p&gt;It makes high-quality documentation accessible to everyone, not just technical writers, by rethinking the documentation experience from the ground up.&lt;/p&gt;

&lt;p&gt;Mintlify provides a set of tools to enhance documentation, such as automatic API reference generation, versioning, and seamless integration with existing codebases. These features ensure that your documentation stays up-to-date, comprehensive, and easily navigable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87809cplm93sjjrssr1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87809cplm93sjjrssr1d.png" alt="Image shows the user interface of PearsDB, an AI Automation platform for building AI/ML powered features and applications." width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can read the &lt;a href="https://mintlify.com/docs" rel="noopener noreferrer"&gt;docs&lt;/a&gt; to learn how to install and configure Mintlify, which is the best way to get started.&lt;/p&gt;
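&lt;p&gt;Configuration lives in a single &lt;code&gt;mint.json&lt;/code&gt; file at the root of your docs; a minimal sketch (the project name, color, and page slugs below are illustrative assumptions) looks like this:&lt;/p&gt;

```json
{
  "name": "My Project",
  "colors": { "primary": "#16A34A" },
  "navigation": [
    { "group": "Getting Started", "pages": ["introduction", "quickstart"] }
  ]
}
```

Each entry in &lt;code&gt;pages&lt;/code&gt; maps to a Markdown or MDX file, so adding a page is just adding a file and listing it here.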

&lt;p&gt;&lt;a href="https://mintlify.com/" rel="noopener noreferrer"&gt;Mintlify&lt;/a&gt; also offers additional features to improve the documentation experience:&lt;/p&gt;

&lt;p&gt;✅ Customizable themes and layouts to match your brand identity.&lt;/p&gt;

&lt;p&gt;✅ Markdown and MDX support for flexible content creation.&lt;/p&gt;

&lt;p&gt;✅ Built-in search functionality for easy navigation.&lt;/p&gt;

&lt;p&gt;✅ Analytics to track documentation usage and identify areas for improvement.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://mintlify.com/" rel="noopener noreferrer"&gt;Mintlify&lt;/a&gt;, developers can focus on writing great content while the platform takes care of the presentation and organization.&lt;/p&gt;

&lt;p&gt;This approach significantly reduces the time and effort required to maintain high-quality documentation, ultimately improving the overall developer experience for your project's users.&lt;/p&gt;




&lt;p&gt;I've tried to cover a wide range of tools. If you know other awesome tools, share them in the comments!&lt;/p&gt;

&lt;p&gt;Hope you found this article useful. If so, feel free to share it with your developer friends!&lt;/p&gt;

&lt;p&gt;For Paid collaboration mail me at: &lt;a href="mailto:arindammajumder2020@gmail.com"&gt;arindammajumder2020@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect with me on &lt;a href="https://twitter.com/intent/follow?screen_name=Arindam_1729" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/arindam2004/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://www.youtube.com/channel/@Arindam_1729" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; and &lt;a href="https://github.com/Arindam200" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for Reading : )&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sr5lktqpn46ztz5p5ur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sr5lktqpn46ztz5p5ur.png" alt="Thank You " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
