<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergey Bolshchikov</title>
    <description>The latest articles on DEV Community by Sergey Bolshchikov (@bolshchikov).</description>
    <link>https://dev.to/bolshchikov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F886414%2F8ba89a43-ac80-4cb1-9e8a-ecbbe560b06f.jpeg</url>
      <title>DEV Community: Sergey Bolshchikov</title>
      <link>https://dev.to/bolshchikov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bolshchikov"/>
    <language>en</language>
    <item>
      <title>Open Deep Research Internals: A Step-by-Step Architecture Guide</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Sun, 02 Nov 2025 09:49:38 +0000</pubDate>
      <link>https://dev.to/bolshchikov/open-deep-research-internals-a-step-by-step-architecture-guide-2ibk</link>
      <guid>https://dev.to/bolshchikov/open-deep-research-internals-a-step-by-step-architecture-guide-2ibk</guid>
<description>&lt;p&gt;This blog post is of a different kind: a deep-dive explanation of how Open Deep Research works under the hood and which design patterns are applied to make it one of the best open-source deep research agents.&lt;/p&gt;

&lt;p&gt;How is this post different from other resources?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The LangChain GitHub repo and blog posts provide only high-level explanations of how things work.&lt;/li&gt;
&lt;li&gt;LangSmith and LangGraph Studio don't expose all the details—it's difficult to capture the state at each step and the dynamic invocation of the graph. Therefore, it's hard to understand the full picture at each step of execution.&lt;/li&gt;
&lt;li&gt;A solid grasp of reflection agents, tool-use design patterns, and basic recursion is required to fully understand the process.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a long post by design, so brace yourself. We'll start by aligning on the high-level design of Open Deep Research. We'll then cover several design patterns used in the implementation that are crucial for understanding. Finally, we'll take a step-by-step deep dive through an example to see how the Open Deep Research graph and state evolve at each step.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It's Designed
&lt;/h2&gt;

&lt;p&gt;I assume you've read the official LangChain blog post that explains how Open Deep Research is built. However, to ensure we're all on the same page, let's have a quick look at the architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn0pqo0ibzr93gsa1vax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgn0pqo0ibzr93gsa1vax.png" alt="Open Deep Research High Level Design" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent conceptually consists of three main parts: scoping, research, and final report. Any design variation you might want to build will likely contain these three components.&lt;/p&gt;

&lt;p&gt;The goal of the scoping part is to build the input for the research phase. In Open Deep Research, this consists of a user clarification loop—the LLM determines whether it requires any clarification from the user or not—and then brief generation.&lt;/p&gt;

&lt;p&gt;If you design your own agent, this is where you can perform user prompt optimizations or apply other techniques to improve the quality of user input.&lt;/p&gt;

&lt;p&gt;When the brief is generated, it proceeds to the research phase. This is where the heavy work happens. We'll go into much more detail later, but in a nutshell, it consists of two stages: supervisor and research sub-agents. The supervisor, given the brief and using reflection, spawns multiple research sub-agents on demand, each with a dedicated sub-task. Each sub-agent (sub-graph) receives a dedicated topic, performs research on it, and returns a summary to the supervisor. When the supervisor reflects on the results and decides that it has gathered enough data, everything moves to the Reporter part.&lt;/p&gt;

&lt;p&gt;The Reporter takes all the collected information and generates the final result. If your result is too large, this is where you can generate an artifact (like in Claude) instead of text results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prior Knowledge
&lt;/h2&gt;

&lt;p&gt;To understand how Open Deep Research really works, we need to discuss, in isolation, several patterns that are used throughout. It's not built on the classic ReAct pattern that's now standard.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reflection Pattern
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flie8mkz63bb3ysjb17j8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flie8mkz63bb3ysjb17j8.png" alt="AI Agent reflection design pattern" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reflection pattern enables agents to evaluate their own output and use that feedback to refine their responses iteratively. In this pattern, an LLM generates an initial response, then acts as its own critic to assess the quality of that output. Based on this self-critique, the agent produces an improved version, repeating this cycle until it meets quality standards or reaches a stopping condition. This self-correction loop allows agents to avoid getting stuck in purely reactive thinking patterns and move toward more deliberate, methodical problem-solving.&lt;/p&gt;
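&lt;p&gt;The loop itself is tiny. Here's an illustrative skeleton (not code from the repository; &lt;code&gt;generate&lt;/code&gt;, &lt;code&gt;critique&lt;/code&gt;, and the stopping rule stand in for LLM calls):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def reflection_loop(task, generate, critique, max_iterations=3):
    """Generate, self-critique, and refine until the critic is satisfied
    or the iteration budget runs out."""
    draft = generate(task, feedback=None)
    for _ in range(max_iterations):
        feedback = critique(task, draft)
        if feedback is None:  # critic is satisfied
            break
        draft = generate(task, feedback=feedback)
    return draft
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;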

&lt;h3&gt;
  
  
  Tool Use Pattern
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figsc9zq48wo46m54hf49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figsc9zq48wo46m54hf49.png" alt="AI Agent tool use design pattern" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Tool Use pattern is fundamental to understanding Open Deep Research's architecture. While it might seem similar to standard tool calling, there are crucial differences that enable more sophisticated agent behaviors.&lt;/p&gt;

&lt;h4&gt;
  
  
  Standard Tool Calling (ReAct)
&lt;/h4&gt;

&lt;p&gt;In the classical ReAct architecture, tool calling is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_web&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Search the web for information.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;tavily_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# LangGraph automatically:
# 1. Detects the tool call in LLM response
# 2. Executes the function
# 3. Adds result to message history
# 4. Continues to next LLM call
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works great for simple tools, but has limitations when dealing with complex operations.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Manual Tool Use Pattern
&lt;/h4&gt;

&lt;p&gt;Open Deep Research uses a different approach—&lt;strong&gt;manual tool orchestration&lt;/strong&gt;. Here's why and how:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this needed?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Complex operations&lt;/strong&gt;: When a "tool" is actually spawning an entire subgraph (like a research sub-agent), you need more control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory management&lt;/strong&gt;: Tool results can be massive (e.g., full research reports). Adding everything to message history bloats the context window&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom routing logic&lt;/strong&gt;: You might need to handle tool execution differently based on business logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel execution&lt;/strong&gt;: Spawning multiple sub-agents simultaneously requires manual coordination&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Define tool schemas without implementations:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ConductResearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Tool definition for spawning a research sub-agent.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The research topic to investigate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Note: This is just a schema - no actual function implementation
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Bind schemas to the LLM:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;llm_with_tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind_tools&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;ConductResearch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ThinkTool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ResearchComplete&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LLM returns structured tool calls:&lt;/strong&gt;
When the LLM decides to "use" a tool, it returns:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nc"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ll research this topic now&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tool_calls&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ConductResearch&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;args&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;machine learning frameworks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;call_abc123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;You manually handle the tool execution:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Check if LLM wants to call a tool
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;tool_call&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ConductResearch&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Spawn a research sub-agent (a whole subgraph!)
&lt;/span&gt;            &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;research_subgraph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ainvoke&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;research_topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;args&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt;

            &lt;span class="c1"&gt;# Return a compact confirmation, not the full result
&lt;/span&gt;            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;ToolMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Research completed on &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;args&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;tool_call_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Benefits in Open Deep Research:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Subgraph invocation&lt;/strong&gt;: The supervisor can spawn entire research sub-agents as "tools"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context efficiency&lt;/strong&gt;: Instead of adding a 10,000-token research report to the message history, you return a simple "Research completed" message&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible routing&lt;/strong&gt;: You can route different tool calls to different subgraphs or handlers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel coordination&lt;/strong&gt;: Spawn multiple research sub-agents simultaneously, each with isolated context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Trade-off:&lt;/strong&gt;&lt;br&gt;
You lose automatic tool execution but gain fine-grained control over when, how, and what gets executed. This control is essential for sophisticated multi-agent architectures like Open Deep Research, where "tools" are actually entire reasoning subgraphs with their own state management.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deep Step-by-Step Guide
&lt;/h2&gt;

&lt;p&gt;Now that we have all the necessary knowledge and a high-level understanding of how Open Deep Research is designed, it's time for a deep dive.&lt;/p&gt;

&lt;p&gt;The best way to describe it is to walk through an example step-by-step and see how the overall graph and state of the deep research agent change at each step.&lt;/p&gt;

&lt;p&gt;Each step contains a corresponding image showing how the graph looks and the LangGraph state object (in green) at that step.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: User Question
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31qgouug18l23u6waimf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31qgouug18l23u6waimf.png" alt="Step 1: User Question" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It all starts with the user question. The LLM decides, using structured output, whether it requires any clarification from the user. If so, it returns the corresponding boolean value and a follow-up question that is sent to the user. The state at this stage is simple: an array of messages.&lt;/p&gt;
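&lt;p&gt;A minimal sketch of that decision (the schema and routing names here are illustrative, not the repository's exact ones):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from dataclasses import dataclass

@dataclass
class ClarifyWithUser:
    """Structured output: does the user question need clarification?"""
    need_clarification: bool
    question: str  # follow-up to send to the user when clarification is needed

def route_after_clarification(decision):
    # Either stop and ask the user, or continue to brief generation.
    if decision.need_clarification:
        return ("END", decision.question)
    return ("write_research_brief", None)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;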
&lt;h3&gt;
  
  
  Step 2: User Responds to a Clarifying Question
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bdfamsl400z5bjg159e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bdfamsl400z5bjg159e.png" alt="Step 2: User Responds to a Clarifying Question" width="800" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a user replies, Open Deep Research performs another call to the LLM with the array of messages. It might ask for clarification again or return structured output with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;need_verification&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Thank you…&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where &lt;code&gt;need_clarification&lt;/code&gt; indicates whether another round of clarification is needed before we proceed to the brief writer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;clarification_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ainvoke&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nc"&gt;HumanMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prompt_content&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;need_clarification&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# End with clarifying question for user
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;goto&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;END&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)]}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Proceed to research with verification message
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;goto&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;write_research_brief&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;verification&lt;/span&gt;&lt;span class="p"&gt;)]}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Generate the Brief and Pass to Supervisor
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j9bv6oe3nf1mt7vr9bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j9bv6oe3nf1mt7vr9bp.png" alt="Step 3: Generate the Brief and Pass to Supervisor" width="800" height="893"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we make another call to the LLM with a different prompt to generate the research brief. The result we get back is stored in the state.&lt;/p&gt;

&lt;p&gt;We also prepare the initial state for the supervisor subgraph. It's stored in &lt;code&gt;supervisor_messages&lt;/code&gt; inside the state and starts with the system prompt and the brief that was just generated.&lt;/p&gt;
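&lt;p&gt;Conceptually, the node's output looks like this (the state keys follow the post; the exact schema is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def write_research_brief(state, generate_brief, supervisor_system_prompt):
    """Generate the research brief, then seed the supervisor subgraph's
    message list with its system prompt and the fresh brief."""
    brief = generate_brief(state["messages"])
    return {
        "research_brief": brief,
        "supervisor_messages": [
            {"role": "system", "content": supervisor_system_prompt},
            {"role": "user", "content": brief},
        ],
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;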

&lt;h3&gt;
  
  
  Step 4: Supervisor Reflects on the Brief
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ow9526dhuhssp2rmbp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ow9526dhuhssp2rmbp3.png" alt="Step 4: Supervisor Reflects on the Brief" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the complex part begins. The supervisor has its own system prompt and three tool definitions: &lt;code&gt;think_tool&lt;/code&gt;, &lt;code&gt;conduct_research&lt;/code&gt;, and &lt;code&gt;research_complete&lt;/code&gt;. It's important to point out the pattern we described earlier—these are just definitions without actual implementations. The supervisor node inspects the tool calls in each LLM response and executes them itself.&lt;/p&gt;

&lt;p&gt;It starts with a call to the &lt;code&gt;think_tool&lt;/code&gt; (reflection pattern) to understand what it should do, and the result is stored in &lt;code&gt;supervisor_messages&lt;/code&gt;. The supervisor also checks the number of calls (&lt;code&gt;research_iterations&lt;/code&gt;) it has made and stops if it exceeds the predefined maximum; otherwise, the research could continue indefinitely.&lt;/p&gt;
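&lt;p&gt;The iteration guard is simple but essential. A sketch (the names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def should_continue_research(state, max_researcher_iterations=6):
    """Stop the supervisor loop once the iteration budget is exhausted."""
    iterations = state.get("research_iterations", 0)
    if iterations &gt;= max_researcher_iterations:
        return False  # route to report writing instead of more research
    return True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;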

&lt;h3&gt;
  
  
  Step 5: Supervisor Initiates the Research
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpc1wkyjmflc0alqms9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpc1wkyjmflc0alqms9p.png" alt="Step 5: Supervisor Initiates the Research" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the first reflection is recorded, the supervisor makes a call to the LLM, which returns a call to the &lt;code&gt;conduct_research&lt;/code&gt; tool with a topic. It might return several &lt;code&gt;conduct_research&lt;/code&gt; tool calls; in that case, the supervisor spawns multiple sub-agents in parallel, each with a dedicated topic.&lt;/p&gt;
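&lt;p&gt;This fan-out is where the manual tool-use pattern pays off. A sketch with &lt;code&gt;asyncio&lt;/code&gt; (the subgraph invocation is stubbed; the real code invokes a LangGraph subgraph):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio

async def spawn_researchers(tool_calls, research_subgraph):
    """Run one research sub-agent per conduct_research tool call, in parallel."""
    tasks = [
        research_subgraph({"research_topic": call["args"]["topic"]})
        for call in tool_calls
        if call["name"] == "ConductResearch"
    ]
    return await asyncio.gather(*tasks)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;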

&lt;h3&gt;
  
  
  Step 6: Initiate Research Sub-Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrlhr6gumx7z1hk16ci2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrlhr6gumx7z1hk16ci2.png" alt="Step 6: Initiate Research Sub-Agent" width="800" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The research sub-agent is a subgraph that is invoked dynamically based on the LLM response from the &lt;code&gt;conduct_research&lt;/code&gt; tool; therefore, it's not visible in LangGraph Studio. It consists of three main nodes: &lt;code&gt;researcher&lt;/code&gt;, &lt;code&gt;research_tools&lt;/code&gt;, and &lt;code&gt;compress_research&lt;/code&gt;. Research tools can be configured but generally include &lt;code&gt;think_tool&lt;/code&gt; with the same logic as the supervisor tool, search, MCP servers, and the &lt;code&gt;research_complete&lt;/code&gt; tool. As a subgraph, it has its own state with messages and &lt;code&gt;research_topic&lt;/code&gt; that it received from the supervisor. Similar to the supervisor, each tool call is tracked in the state and capped by the maximum number of iterations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Research Agent Initiates the Search
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwdbqh8z5t7ithvnxbfq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwdbqh8z5t7ithvnxbfq.png" alt="Step 7: Research Agent Initiates the Search" width="800" height="658"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This step is straightforward. The research node makes an LLM call with the system prompt and the research topic it received from the supervisor. It receives back a tool call to &lt;code&gt;web_search&lt;/code&gt; with an array of queries that it should perform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8: Research Agent Performs Multiple Searches
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pj1lg0h0xf10801vyit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pj1lg0h0xf10801vyit.png" alt="Step 8: Research Agent Performs Multiple Searches" width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open Deep Research uses Tavily search and spawns several searches in parallel, one for each query it received from the researcher node. Given that search results can be large, before the search tool returns them, they are summarized and then stored in &lt;code&gt;researcher_messages&lt;/code&gt; in the state.&lt;/p&gt;
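&lt;p&gt;The same fan-out-then-shrink idea, one level down, as a sketch (the Tavily call and the summarization model are stubbed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio

async def run_searches(queries, search, summarize):
    """Execute all queries in parallel, then condense each raw result
    before it enters the researcher's message history."""
    raw_results = await asyncio.gather(*(search(q) for q in queries))
    return [summarize(r) for r in raw_results]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;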

&lt;h3&gt;
  
  
  Step 9: Reflect on Research Results
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u8vdqtbrhxf4xgi2feu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u8vdqtbrhxf4xgi2feu.png" alt="Step 9: Reflect on Research Results" width="800" height="657"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similar to the supervisor, the research node invokes the &lt;code&gt;think_tool&lt;/code&gt; to reflect on received search results and make a decision about whether the results are sufficient or whether to continue.&lt;/p&gt;
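
&lt;p&gt;A &lt;code&gt;think_tool&lt;/code&gt; can be surprisingly small: it performs no computation itself, it only records the reflection so the next LLM turn can condition on it. A minimal sketch (illustrative, not the repo's exact implementation):&lt;/p&gt;

```javascript
// The tool simply echoes the reflection back as its result; the value is
// that forcing the model to write its reasoning into a tool call makes
// the reflection step explicit in the message history.
function thinkTool(reflection) {
  return `Reflection recorded: ${reflection}`;
}
```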

&lt;h3&gt;
  
  
  Step 10: Research Completed
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wmjvcphobtsbq9iihng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wmjvcphobtsbq9iihng.png" alt="Step 10: Research Completed" width="800" height="659"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the result of the &lt;code&gt;think_tool&lt;/code&gt; indicates that the research has sufficient information, the research node calls the &lt;code&gt;research_complete&lt;/code&gt; tool and goes to the &lt;code&gt;compress_research&lt;/code&gt; node. Since the result of the search can be large, we compress the results at this point and return them back to the supervisor.&lt;/p&gt;
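
&lt;p&gt;The routing after the researcher's LLM turn can be sketched as a conditional edge. The node names follow the post where possible; the router function itself is an illustrative stand-in, and &lt;code&gt;researcher_tools&lt;/code&gt; is a hypothetical name for the tool-execution node:&lt;/p&gt;

```javascript
// Route based on the last LLM message: research_complete sends us to
// compression; any other tool call keeps executing tools; no tool calls
// also ends the loop.
function routeResearcher(lastMessage) {
  const calls = lastMessage.tool_calls || [];
  if (calls.some((c) => c.name === "research_complete")) {
    return "compress_research";
  }
  return calls.length ? "researcher_tools" : "compress_research";
}
```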

&lt;h3&gt;
  
  
  Step 11: Re-evaluate Research Plan
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4utwio5opnqtpauoexnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4utwio5opnqtpauoexnt.png" alt="Step 11: Re-evaluate Research Plan" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a research sub-agent returns its results, the supervisor treats it like a finished tool call: we store the results in the state and mark the &lt;code&gt;conduct_research&lt;/code&gt; call as complete. The supervisor, as per its system prompt, then calls the &lt;code&gt;think_tool&lt;/code&gt; to decide what to do next.&lt;/p&gt;
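
&lt;p&gt;A sketch of that bookkeeping, assuming a message-list state (the &lt;code&gt;supervisor_messages&lt;/code&gt; field name follows the post; the reducer itself is illustrative):&lt;/p&gt;

```javascript
// Append the sub-agent's compressed findings as the result of the earlier
// conduct_research tool call, so the supervisor's next LLM turn sees a
// completed tool call in its history.
function recordResearchResult(state, toolCallId, compressedFindings) {
  return {
    ...state,
    supervisor_messages: [
      ...state.supervisor_messages,
      { role: "tool", tool_call_id: toolCallId, content: compressedFindings },
    ],
  };
}
```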

&lt;h3&gt;
  
  
  Step 12: Research Completed
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhb4r7b40cl057ua8sxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhb4r7b40cl057ua8sxq.png" alt="Step 12: Research Completed" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this stage, reflection indicates that the supervisor has gathered sufficient information for the research the user requested, and the &lt;code&gt;research_complete&lt;/code&gt; tool is called to finish executing this subgraph.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 13: Generate Final Report
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhovax1int4vj6jkrlw0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhovax1int4vj6jkrlw0t.png" alt="Step 13: Generate Final Report" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When all subgraphs are complete, the current state contains the message history, &lt;code&gt;supervisor_messages&lt;/code&gt;, the brief, and the research results. The last step is to make a call to the LLM to generate the final report and return it to the user.&lt;/p&gt;
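
&lt;p&gt;The final call can be sketched like this, assuming a generic &lt;code&gt;llm.invoke&lt;/code&gt; interface; the prompt wording and state field names are illustrative, not the repo's exact template:&lt;/p&gt;

```javascript
// Build one last prompt from the brief and the accumulated research notes,
// then ask the LLM for the report.
async function generateFinalReport(llm, state) {
  const prompt = [
    {
      role: "system",
      content: "Write a final report answering the research brief using only the notes provided.",
    },
    {
      role: "user",
      content: `Brief:\n${state.research_brief}\n\nNotes:\n${state.notes.join("\n")}`,
    },
  ];
  const res = await llm.invoke(prompt);
  return res.content;
}
```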

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.philschmid.de/agentic-pattern#reflection-pattern" rel="noopener noreferrer"&gt;https://www.philschmid.de/agentic-pattern#reflection-pattern&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.philschmid.de/agentic-pattern#tool-use-pattern" rel="noopener noreferrer"&gt;https://www.philschmid.de/agentic-pattern#tool-use-pattern&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.langchain.com/reflection-agents/" rel="noopener noreferrer"&gt;https://blog.langchain.com/reflection-agents/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/langchain-ai/open_deep_research" rel="noopener noreferrer"&gt;https://github.com/langchain-ai/open_deep_research&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.langchain.com/open-deep-research/" rel="noopener noreferrer"&gt;https://blog.langchain.com/open-deep-research/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>learning</category>
    </item>
    <item>
      <title>How to Make OpenAI API to Return JSON</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Wed, 15 Nov 2023 18:02:06 +0000</pubDate>
      <link>https://dev.to/bolshchikov/how-to-make-openai-api-to-return-json-1hpi</link>
      <guid>https://dev.to/bolshchikov/how-to-make-openai-api-to-return-json-1hpi</guid>
<description>&lt;p&gt;During OpenAI's dev day, one of the &lt;a href="https://openai.com/blog/function-calling-and-other-api-updates" rel="noopener noreferrer"&gt;major announcements&lt;/a&gt; was the ability to receive JSON from the chat completion API. However, there are few clear examples of how to do this, since most resources focus on function calls.&lt;/p&gt;

&lt;p&gt;Our objective is straightforward: given a query, we want to receive an answer in JSON format.&lt;/p&gt;

&lt;p&gt;How can we achieve this?&lt;/p&gt;

&lt;p&gt;There are three crucial steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modify your prompt
&lt;/h3&gt;

&lt;p&gt;Your prompt must explicitly specify that the response should be in JSON format and you need to define the structure of the JSON object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Given this work history, what's the overall years of experience?
Work history includes start and end date (or present), title, and company name.
If the end date is "Present", use the current date. Today is November 2023.
Return the answer in JSON format with the field "experienceInMonths"
and value as a number.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pay attention to the last sentence of the prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pass response_format
&lt;/h3&gt;

&lt;p&gt;When calling the API, specify the &lt;code&gt;response_format&lt;/code&gt; parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;openAI&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gpt-3.5-turbo-1106&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;top_p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;frequency_penalty&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;presence_penalty&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;response_format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;json_object&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// specify the format&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;system&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;systemPrompt&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;workHistory&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's crucial to modify the prompt as well. Just changing the response type to JSON might result in a JSON of an arbitrary structure.&lt;/p&gt;

&lt;p&gt;See this comment from the OpenAI API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gs"&gt;**Important:**&lt;/span&gt; when using JSON mode, you &lt;span class="gs"&gt;**must**&lt;/span&gt; also instruct the model to
produce JSON yourself via a system or user message. Without this, the model may
generate an unending stream of whitespace until the generation reaches the token
limit, resulting in increased latency and the appearance of a "stuck" request.
Also note that the message content may be partially cut off if
&lt;span class="sb"&gt;`finish_reason="length"`&lt;/span&gt;, which indicates the generation exceeded &lt;span class="sb"&gt;`max_tokens`&lt;/span&gt;
or the conversation exceeded the max context length.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Parse JSON response
&lt;/h3&gt;

&lt;p&gt;Once we receive the response, the content is still text (string type), but we can now parse it as JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;choices[0].message.content&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;experienceInMonths&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Calculating total experience for a member did not work&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's good practice to wrap &lt;code&gt;JSON.parse&lt;/code&gt; in a &lt;code&gt;try...catch&lt;/code&gt; block in case we receive an invalid JSON structure.&lt;/p&gt;
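
&lt;p&gt;The same pattern can be packaged into a small reusable helper; the field name and fallback value are just this post's example:&lt;/p&gt;

```javascript
// Parse the model's text content and fall back to a default when the JSON
// is malformed or the expected field is missing.
function safeParseField(content, field, fallback) {
  try {
    const value = JSON.parse(content)[field];
    return typeof value === "number" ? value : fallback;
  } catch (error) {
    return fallback;
  }
}

safeParseField('{"experienceInMonths": 42}', "experienceInMonths", -1); // 42
safeParseField("not json", "experienceInMonths", -1); // -1
```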

&lt;p&gt;You can find a &lt;a href="https://platform.openai.com/playground/p/JP5k7UGrVtfbZAk8RyIkgEgi?mode=chat" rel="noopener noreferrer"&gt;playground example here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>ai</category>
      <category>javascript</category>
      <category>howto</category>
    </item>
    <item>
      <title>🧹🧹 Sanitizing user input with OpenAI under $1</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Sat, 04 Nov 2023 08:03:38 +0000</pubDate>
      <link>https://dev.to/bolshchikov/sanitizing-any-user-input-with-openai-under-1-3ief</link>
      <guid>https://dev.to/bolshchikov/sanitizing-any-user-input-with-openai-under-1-3ief</guid>
      <description>&lt;p&gt;The objective of this task is to extract a person's name correctly as it appears on LinkedIn. For example, if the input is &lt;code&gt;John, Smith&lt;/code&gt;, the desired output should be &lt;code&gt;John,Smith&lt;/code&gt;. Here's a slightly more complex example: if the input is &lt;code&gt;🌯 John,Smith&lt;/code&gt;, the output should be &lt;code&gt;John,Smith&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Simplistic Solution
&lt;/h3&gt;

&lt;p&gt;The most straightforward solution is to use a library that strips unwanted characters from the input. The npm package &lt;a href="https://github.com/fazlulkarimweb/string-sanitizer" rel="noopener noreferrer"&gt;string-sanitizer&lt;/a&gt; does exactly this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sanitize&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string-sanitizer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;names&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt; John,Smith &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;🌯John,Smith&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John,Smith ✔️&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John,Smith  🇺🇦&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="nx"&gt;names&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;// split because sanitize removes ','&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(([&lt;/span&gt;&lt;span class="nx"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nf"&gt;sanitize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;sanitize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;)]))&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;// ["John,Smith", "John,Smith", "John,Smith", "John,Smith"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initially, this solution may appear effective until one encounters less predictable instances of names.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dealing with Unpredictable Input
&lt;/h3&gt;

&lt;p&gt;LinkedIn users often get creative with their names. The following examples show how that variety breaks the simple approach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sanitize&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string-sanitizer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;names&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John (Johnny),Smith&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Joghn,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Smith, CPA&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John,Smith-Perry&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John,Smith Jr.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Smith, Ph.D&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John,Smith ✰ I'm Hiring ✰&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="nx"&gt;names&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(([&lt;/span&gt;&lt;span class="nx"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nf"&gt;sanitize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;firstName&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;sanitize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lastName&lt;/span&gt;&lt;span class="p"&gt;)]))&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;// ["JohnJohnny,Smith", "Joghn,Smith", "John,SmithPerry", "John,SmithJr", "John,Smith", "John,SmithImHiring"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One might attempt to address each of these instances by crafting complex regex. However, this approach presents two major challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It's plausible that there will always be a use case where the code will return an incorrect result.&lt;/li&gt;
&lt;li&gt;The maintenance cost of the code, which includes testing and improving, is quite high. For each new use case encountered, the function must be altered to accommodate it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Probabilistic Approach
&lt;/h3&gt;

&lt;p&gt;For this use case, a probabilistic model can yield significantly better results than any deterministic code developers could write.&lt;/p&gt;

&lt;p&gt;An example of this is using the OpenAI API with a simple prompt to return the person’s name. This method excels in more complex use cases, such as "I am hiring". However, it may overlook various suffixes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wkpas5c6ojtnlz31kbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wkpas5c6ojtnlz31kbk.png" alt="Chat GPT untrained answers" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenAI provides a mechanism to refine results according to specific needs via &lt;a href="https://platform.openai.com/docs/guides/fine-tuning" rel="noopener noreferrer"&gt;fine-tuning the model&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Fine-tuning the model has three key advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The model is trained based on specific data, which yields more precise results.&lt;/li&gt;
&lt;li&gt;It is cheaper: because the behavior is baked into the fine-tuned model, a much smaller system prompt suffices.&lt;/li&gt;
&lt;li&gt;It is faster, for the same reason: fewer prompt tokens per request.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Although it may seem excessive to employ machine learning for a seemingly simple task, this task is not as simple as it appears. Solutions like OpenAI offer fine-tuning capabilities that are easy to implement, providing superior results in less time than traditional coding approaches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fine-Tuning the Model
&lt;/h3&gt;

&lt;p&gt;Here's how we can prepare a fine-tuned model in three steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare training and validation datasets.&lt;/li&gt;
&lt;li&gt;Train the model.&lt;/li&gt;
&lt;li&gt;Use the fine-tuned model in the code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Training a model, in simplest terms, involves supplying the OpenAI model with a file containing examples that include user input and the correct answer that the model should return.&lt;/p&gt;

&lt;p&gt;OpenAI recommends providing 10-100 such examples.&lt;/p&gt;

&lt;p&gt;The training file in &lt;code&gt;.jsonl&lt;/code&gt; format might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"messages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A given phrase contains a name. Your task is to extract it."&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Smith 🇮🇱"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"assistant"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Smith"&lt;/span&gt;&lt;span class="p"&gt;}]}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"messages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A given phrase contains a name. Your task is to extract it."&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John גיון סמיט"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"assistant"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John"&lt;/span&gt;&lt;span class="p"&gt;}]}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"messages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A given phrase contains a name. Your task is to extract it."&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John-Perry"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"assistant"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John-Perry"&lt;/span&gt;&lt;span class="p"&gt;}]}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"messages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A given phrase contains a name. Your task is to extract it."&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John Smith"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"assistant"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John Smith"&lt;/span&gt;&lt;span class="p"&gt;}]}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each line is a separate example. It contains a &lt;code&gt;system&lt;/code&gt; prompt (what you want the model to do), &lt;code&gt;user&lt;/code&gt; input, and &lt;code&gt;assistant&lt;/code&gt; output (the correct answer that the model should provide).&lt;/p&gt;
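
The JSONL lines above can also be generated programmatically rather than written by hand. A minimal sketch, reusing the system prompt and example pairs from the snippet above:

```javascript
// Build JSONL training data: one JSON object per line, in the chat format
// OpenAI's fine-tuning endpoint expects.
const systemPrompt = 'A given phrase contains a name. Your task is to extract it.';

const examples = [
  { input: 'John-Perry', output: 'John-Perry' },
  { input: 'John Smith', output: 'John Smith' },
];

const jsonl = examples
  .map(({ input, output }) =>
    JSON.stringify({
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: input },
        { role: 'assistant', content: output },
      ],
    }),
  )
  .join('\n');

console.log(jsonl); // two lines, one training example per line
```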

&lt;p&gt;Once this file is prepared, it can be uploaded through OpenAI's fine-tuning UI to fine-tune the model. If your data preparation process is more complex, the OpenAI SDK can also be used to fine-tune models programmatically.&lt;/p&gt;

&lt;p&gt;The training duration will depend on the number of examples provided. Once complete, the model can be integrated into your code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;lodash&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;sanitizeName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;someName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ft:gpt-3.5-turbo-0613:personal::some-weird-code&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// fine-tuned model&lt;/span&gt;
      &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;top_p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;frequency_penalty&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;presence_penalty&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;system&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;systemPrompt&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;someName&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;choices[0].message.content&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Evaluating the Cost
&lt;/h3&gt;

&lt;p&gt;Finally, let us consider the cost. Using OpenAI is not free, but the price should not be a deterrent.&lt;/p&gt;

&lt;p&gt;Instead, it is more useful to assess cost from an efficiency perspective: how much time you have to spend to achieve the best result.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Simplistic Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The efficiency of the simplistic approach is rather low. For instance, you might spend approximately four hours writing and testing a function that covers all known cases. The challenge with this approach is the inability to predict all possible scenarios, resulting in a high likelihood of errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fine-Tuning Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The cost of the fine-tuning approach consists of three components:&lt;/p&gt;

&lt;p&gt;Your time to prepare training data + Cost to train the model + Cost to use it.&lt;/p&gt;

&lt;p&gt;Although preparing the training data is the most time-consuming part, it would likely take less than half the time spent writing code using the simplistic approach.&lt;/p&gt;

&lt;p&gt;Fine-tuning the model with OpenAI is a cost-effective solution. For instance, it took about 15 minutes to train a model with 151 examples at a cost of $0.13.&lt;/p&gt;
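
A back-of-the-envelope estimate shows why the training bill stays small. The per-token price below is the gpt-3.5-turbo training rate at the time of writing and may have changed, and the average tokens per example is a rough guess, so treat the numbers as illustrative:

```javascript
// Rough training-cost estimator for OpenAI fine-tuning.
// Assumed price: $0.008 per 1K training tokens (check the current pricing page).
const PRICE_PER_1K_TOKENS = 0.008;

function estimateTrainingCost(exampleCount, avgTokensPerExample, epochs = 1) {
  const totalTokens = exampleCount * avgTokensPerExample * epochs;
  return (totalTokens / 1000) * PRICE_PER_1K_TOKENS;
}

// ~151 examples of roughly 110 tokens each lands near the $0.13 observed above.
console.log(estimateTrainingCost(151, 110).toFixed(2)); // "0.13"
```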

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgso8y6c2o2rilp1z4fu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgso8y6c2o2rilp1z4fu9.png" alt="Price spent on fine-tuning" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final component is the cost of usage, which is also not substantial.&lt;/p&gt;

&lt;p&gt;However, the fundamental question is whether the benefits outweigh the costs. Can you truly obtain better results for unpredictable input? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkhaiszb538y5ub4jf0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkhaiszb538y5ub4jf0a.png" alt="Final results of fine-tuned model" width="800" height="596"&gt;&lt;/a&gt;&lt;br&gt;
Consider this: the fine-tuned model works not only with known scenarios but also with names that mix languages or are written entirely in other languages.&lt;/p&gt;

&lt;h6&gt;
  
  
  Photo by &lt;a href="https://unsplash.com/@zhenhappy?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;PAN XIAOZHEN&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/seven-white-push-mops-on-wall-pj-BrFZ9eAA?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;
&lt;/h6&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>javascript</category>
      <category>openai</category>
    </item>
    <item>
      <title>How to maximize your product wins</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Sat, 28 Oct 2023 18:20:01 +0000</pubDate>
      <link>https://dev.to/bolshchikov/how-to-maximize-your-product-wins-3i71</link>
      <guid>https://dev.to/bolshchikov/how-to-maximize-your-product-wins-3i71</guid>
      <description>&lt;p&gt;Throughout my professional career, I've built many products for different companies. Each of them had a unique way of doing it. &lt;/p&gt;

&lt;p&gt;I started to cultivate the feeling of what's THE right way to build the product and deliver value to customers consistently and sustainably. Those two criteria are important and have a significant impact on how to build a product. &lt;/p&gt;

&lt;p&gt;I couldn't verbalize it myself but &lt;a href="https://itamargilad.com/" rel="noopener noreferrer"&gt;Itamar Gilad&lt;/a&gt; did and he did it very well - &lt;a href="https://itamargilad.com/velocity-vs-impact/" rel="noopener noreferrer"&gt;Stop Obsessing Over Development Velocity, Focus on This Instead&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>product</category>
      <category>velocity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>😱 Sane front-end project architecture based on 12 years of experience</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Wed, 27 Sep 2023 17:33:38 +0000</pubDate>
      <link>https://dev.to/bolshchikov/sane-front-end-project-architecture-based-on-12-years-of-experience-5051</link>
      <guid>https://dev.to/bolshchikov/sane-front-end-project-architecture-based-on-12-years-of-experience-5051</guid>
      <description>&lt;p&gt;With 12 years of professional experience in front-end development, I have used 8 different frameworks in production. By any standards, that's a lot. While at the beginning of my career it was fun and exciting to learn something new and discover different flavors and concepts from various frameworks, now I am seeking a bulletproof solution that simply works.&lt;/p&gt;

&lt;p&gt;The ideal architecture has a low cognitive load, is easy to maintain, scales with the application's growth, and is customizable enough to accommodate the imagination of any UX designer. Unfortunately, I couldn't find a framework that meets all of those needs, and, no, I am not on a mission to build one.&lt;/p&gt;

&lt;p&gt;However, we do have many tools at our disposal. Can't we simply wire them all together? The short answer is no.&lt;/p&gt;

&lt;p&gt;If you choose to wire different libraries together yourself, various problems arise, such as overlapping functionality and conflicting assumptions about usage. Piecing together different libraries in the React ecosystem is frustrating.&lt;/p&gt;

&lt;p&gt;Fortunately, I believe we have found a sane architecture that we are happy with.&lt;/p&gt;

&lt;p&gt;Before we dive into the details, let's establish our requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clear Separation of Concerns Without Side Effects:&lt;/strong&gt; As developers, we crave clarity, a map that shows us exactly where to lay down our code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; Our architectural design should gracefully expand as our application grows, whether it's composed of 2 or 142 components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testability:&lt;/strong&gt; Each layer should be testable in isolation, allowing for easy swaps if needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity:&lt;/strong&gt; Our architecture should possess a low cognitive load, drawing from familiar concepts. This simplicity should make learning, maintenance, and modifications a breeze.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementation Blueprint
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpmla27p0fynnwz3skcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpmla27p0fynnwz3skcd.png" alt="Client-side project layered architecture" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Project build foundation
&lt;/h3&gt;

&lt;p&gt;The popular &lt;a href="https://dev.to/ag2byte/create-react-app-is-officially-dead-h7o"&gt;create-react-app&lt;/a&gt; is deprecated. Other recommended tools like Next.js or Remix have many assumptions and conventions that are imposed on developers. While following the recommended flow with these tools can make the development process a pure joy, it requires a strong commitment that we don't want to make from the beginning.&lt;/p&gt;

&lt;p&gt;Instead, we have chosen &lt;a href="https://vitejs.dev/guide/" rel="noopener noreferrer"&gt;Vite&lt;/a&gt; with the React TypeScript template. Vite is a fast, non-opinionated build library that provides the necessary structure for our application without imposing any strict conventions. It is also much easier to configure compared to Webpack, and can be easily extended if necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Routing
&lt;/h3&gt;

&lt;p&gt;Since we don't follow those conventions, we will handle the routing ourselves. In the React ecosystem, the de facto standard for that is &lt;a href="https://reactrouter.com/en/main" rel="noopener noreferrer"&gt;React Router&lt;/a&gt;. Specifically, we will use react-router-dom v6.&lt;/p&gt;

&lt;p&gt;However, we will not be using loader functions and actions as recommended. They impose extra limitations on client-side applications: you cannot use React hooks inside a loader function, and they control the fetch-and-render flow through function calls and dedicated components, among other things.&lt;/p&gt;

&lt;p&gt;Using these functions makes more sense if you are using Remix, the full-stack framework written by the same authors as React Router. It is very powerful in that context.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Authentication
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://auth0.com/" rel="noopener noreferrer"&gt;Auth0&lt;/a&gt; is a popular user management tool with a great developer experience library for React. It simplifies login configuration, user management, and more.&lt;/p&gt;

&lt;p&gt;Auth0 also offers a powerful binding of their JavaScript library for React with the use of hooks.&lt;/p&gt;

&lt;p&gt;To start, we need to configure the Auth0 provider, which is a straightforward process.&lt;/p&gt;
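
A minimal sketch of that provider setup in a Vite entry point (main.tsx); the domain and clientId values are placeholders that come from your Auth0 application settings:

```typescript
// main.tsx — wrap the app in Auth0Provider so hooks like useAuth0 work below it.
import React from 'react';
import ReactDOM from 'react-dom/client';
import { Auth0Provider } from '@auth0/auth0-react';
import App from './App';

ReactDOM.createRoot(document.getElementById('root')!).render(
  <Auth0Provider
    domain="YOUR_TENANT.auth0.com"
    clientId="YOUR_CLIENT_ID"
    authorizationParams={{ redirect_uri: window.location.origin }}
  >
    <App />
  </Auth0Provider>,
);
```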

&lt;p&gt;When we want certain routes to be accessible only for logged-in users, Auth0 provides a useful higher-order component. We can wrap it in a &lt;code&gt;Protected&lt;/code&gt; component and use it later in the router configuration as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;router&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createBrowserRouter&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;element&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Root&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;,&lt;/span&gt;
    &lt;span class="na"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;element&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Main&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;protected&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;element&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Protected&lt;/span&gt; &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;Internal&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Protected&lt;/code&gt; component is very simple too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;withAuthenticationRequired&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@auth0/auth0-react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Protected&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;
&lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ComponentType&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Component&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;withAuthenticationRequired&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Component&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When navigating to a protected route, Auth0 checks whether the user is logged in before rendering the component. If the user is not logged in, Auth0 performs a full-page redirect to the login page configured in your Auth0 tenant.&lt;/p&gt;

&lt;p&gt;Auth0 is also used to handle the last concern, which is obtaining an access token for the logged-in user. We can use this access token in the &lt;code&gt;Authorization&lt;/code&gt; header when making API calls to our server.&lt;/p&gt;

&lt;p&gt;Once the user is logged in, we extract the access token and store it for future use.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. State management
&lt;/h3&gt;

&lt;p&gt;When your application grows, you will eventually need a way to manage fetched data from a server and the application state.&lt;/p&gt;

&lt;p&gt;There are many tools available for this purpose.&lt;/p&gt;

&lt;p&gt;We have chosen Zustand, a small and simple library based on Flux concepts. I won't go into much detail comparing it to other libraries, as there is a dedicated page for that &lt;a href="https://docs.pmnd.rs/zustand/getting-started/comparison" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I would like to specify the factors that were important to us in making this choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small API surface: Zustand essentially has only two concepts - store and actions. This means there is not much boilerplate code that developers have to write, such as wiring actions or reducers.&lt;/li&gt;
&lt;li&gt;Simplicity: It provides a hook and actions that mutate the initial state. That's it. There are no extra magical features like in MobX.&lt;/li&gt;
&lt;li&gt;The store is a React hook and can also be accessed as a plain JavaScript object if needed.&lt;/li&gt;
&lt;li&gt;Easy scalability: As your application grows, you can split the store into separate slices with dedicated actions.&lt;/li&gt;
&lt;/ul&gt;
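
To illustrate that store-plus-actions shape without pulling in the library, here is a hand-rolled, dependency-free approximation of Zustand's `create`/`getState` API. In a real project you would use zustand itself; this only shows the concept:

```javascript
// Minimal Zustand-style store: one state object plus actions that update it.
function createStore(initializer) {
  let state;
  const listeners = new Set();
  const setState = (partial) => {
    const next = typeof partial === 'function' ? partial(state) : partial;
    state = { ...state, ...next };
    listeners.forEach((listener) => listener(state));
  };
  const getState = () => state;
  state = initializer(setState, getState);
  const subscribe = (listener) => {
    listeners.add(listener);
    return () => listeners.delete(listener);
  };
  return { getState, setState, subscribe };
}

// A slice holding the current user's token, plus an action to set it.
const useStore = createStore((set) => ({
  me: { token: null },
  setToken: (token) => set({ me: { token } }),
}));

useStore.getState().setToken('abc123');
console.log(useStore.getState().me.token); // 'abc123'
```

The same `getState()` escape hatch is what lets non-React code, like the API layer below, read the store as a plain JavaScript object.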

&lt;h3&gt;
  
  
  5. API calls
&lt;/h3&gt;

&lt;p&gt;The final layer of our client-side architecture is the API. This is where we make calls to our server. The API layer interacts exclusively with the store (a vanilla object) and its actions. It is important to avoid any direct calls from components to maintain a clear separation between boundaries.&lt;/p&gt;

&lt;p&gt;The API layer should also be stateless and focus on providing minimal logical data manipulation. Data management should be handled by the data management layer.&lt;/p&gt;

&lt;p&gt;For making API calls, we create an Axios instance with an interceptor that takes an access token and adds it to every request sent to the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useStore&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../store&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;axiosInstance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;baseURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VITE_SERVER_URL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/api`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;axiosInstance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interceptors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getState&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;me&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Authorization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;axiosInstance&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So, what have we achieved? We have created a layered client-side architecture for a scalable project with clear boundaries. Each layer is separately testable. &lt;/p&gt;

&lt;p&gt;To get started, you can create a new GitHub repository using the &lt;a href="https://github.com/bolshchikov/sane-front-end-project-template" rel="noopener noreferrer"&gt;sane-front-end-project-template&lt;/a&gt;. It already includes all the mentioned components wired together.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>architecture</category>
      <category>react</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Why Software Engineers Aren't Going Anywhere, Even with AI</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Wed, 23 Aug 2023 12:45:24 +0000</pubDate>
      <link>https://dev.to/bolshchikov/why-software-engineers-arent-going-anywhere-even-with-ai-4i3d</link>
      <guid>https://dev.to/bolshchikov/why-software-engineers-arent-going-anywhere-even-with-ai-4i3d</guid>
      <description>&lt;p&gt;There's been a growing buzz about AI taking over software development in the next decade or so. But let's dig deeper and understand why this doesn't mean we're on the verge of being replaced. In this post, we'll explore the reasons why software engineers aren't going anywhere, even in the AI age.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI's Influence and the Unchanged Essence of Software Engineering
&lt;/h3&gt;

&lt;p&gt;The AI wave is indeed changing the game with tools like GitHub's Co-pilot. These tools boost our efficiency and save a ton of time and money. And there's cool stuff on the horizon, like self-healing code, that'll make our software better and quicker to produce. It's an exciting time, no doubt.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Recent Experience: A Peek into Our World
&lt;/h3&gt;

&lt;p&gt;I recently took on a project from scratch to finish, and it got me thinking. I compiled a list of technologies that we, as software engineers, need to be familiar with to create a simple app. Here's a snippet:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating the look (HTML)&lt;/li&gt;
&lt;li&gt;Adding style (CSS)&lt;/li&gt;
&lt;li&gt;Making it work (JavaScript)&lt;/li&gt;
&lt;li&gt;Handling browsers (Browser stuff)&lt;/li&gt;
&lt;li&gt;Navigating within the app (Routing)&lt;/li&gt;
&lt;li&gt;Managing data flow (State management)&lt;/li&gt;
&lt;li&gt;Designing the interface (UI libraries)&lt;/li&gt;
&lt;li&gt;Linking with other apps and services (APIs)&lt;/li&gt;
&lt;li&gt;Speeding things up (Caching)&lt;/li&gt;
&lt;li&gt;Picking a backend language (Python, JavaScript, etc.)&lt;/li&gt;
&lt;li&gt;Building the backend (Server framework)&lt;/li&gt;
&lt;li&gt;Juggling tasks (Async programming)&lt;/li&gt;
&lt;li&gt;Ensuring security (Security basics)&lt;/li&gt;
&lt;li&gt;User access (User authentication)&lt;/li&gt;
&lt;li&gt;Storing data (Database)&lt;/li&gt;
&lt;li&gt;Team coordination (Version control)&lt;/li&gt;
&lt;li&gt;Packaging the app (Docker)&lt;/li&gt;
&lt;li&gt;Getting it out there (Deployment)&lt;/li&gt;
&lt;li&gt;Checking it works (Testing)&lt;/li&gt;
&lt;li&gt;Speeding up delivery for end users (CDN)&lt;/li&gt;
&lt;li&gt;Handling addresses (DNS and networking)&lt;/li&gt;
&lt;li&gt;Utilizing the cloud (Cloud architecture)&lt;/li&gt;
&lt;li&gt;Domain management&lt;/li&gt;
&lt;li&gt;Sending emails (Email service setup, reputation and warm-up)&lt;/li&gt;
&lt;li&gt;Respecting privacy (Privacy concerns)&lt;/li&gt;
&lt;li&gt;Monitoring performance (Monitoring)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's a hefty list, right? Sure, AI tools like ChatGPT can help with some. Today's tools let us do more with a smaller team than before.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Real Value We Bring
&lt;/h3&gt;

&lt;p&gt;Here's the scoop: being a software engineer isn't just about knowing tools. It's about fitting these tools together seamlessly. It's like cooking up a dish with various ingredients. As we improve at this, we innovate fresh ways to solve issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Bright Future Ahead
&lt;/h3&gt;

&lt;p&gt;The idea that AI will replace us isn't the full picture. In the next 10 to 15 years, we'll still be here, doing our thing. Instead of fading away, we'll focus more on tackling tough challenges in smarter ways. As technology advances, so will our skills to make great things happen.&lt;/p&gt;

&lt;p&gt;In a nutshell, the AI storm won't wipe us out. While AI tools are handy, the blend of skills, creativity, and problem-solving we possess is hard to replicate. So, brace yourselves – the future of software engineering is a thrilling ride!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>As a software engineer, will I have a job in several years?</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Wed, 07 Jun 2023 14:00:00 +0000</pubDate>
      <link>https://dev.to/bolshchikov/as-a-software-engineer-will-i-have-a-job-in-several-years-5g2h</link>
      <guid>https://dev.to/bolshchikov/as-a-software-engineer-will-i-have-a-job-in-several-years-5g2h</guid>
      <description>&lt;p&gt;During a recent OpenAI roadshow at Tel Aviv University, a software engineering graduate student posed an important question to Sam Altman: "As a software engineer, will I have a job in several years?" It reflects the growing concerns about the impact of artificial intelligence (AI) on the job market. In this blog post, we will explore the evolving landscape of software engineering and shed light on how the role of software engineers may transform in the face of advancing AI technologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Implementation vs. Essence
&lt;/h2&gt;

&lt;p&gt;To understand the potential influence of AI on software engineering, it's essential to distinguish between code implementation and its essence. While code implementation refers to the specific technical details and tools used to build software, the essence encompasses the broader concepts and architectural decisions behind it. &lt;/p&gt;

&lt;p&gt;For example, a message queue is an essence: it supports async communication between different services. Kafka, including the use of its API, is an implementation detail. For various technical reasons, it's possible to substitute RabbitMQ for Kafka. The implementation changes, but the essence of a message queue remains.&lt;/p&gt;
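&lt;p&gt;The Kafka/RabbitMQ substitution can be sketched in a few lines of Python. This is a minimal illustration, not real client code: the two queue classes are hypothetical in-memory stand-ins for actual Kafka and RabbitMQ clients, and the names are invented for the example.&lt;/p&gt;

```python
from abc import ABC, abstractmethod


class MessageQueue(ABC):
    """The essence: asynchronous communication between services."""

    @abstractmethod
    def publish(self, topic: str, message: str) -> None: ...

    @abstractmethod
    def consume(self, topic: str) -> list[str]: ...


class KafkaQueue(MessageQueue):
    """Hypothetical in-memory stand-in for a Kafka-backed queue."""

    def __init__(self) -> None:
        self._topics: dict[str, list[str]] = {}

    def publish(self, topic: str, message: str) -> None:
        self._topics.setdefault(topic, []).append(message)

    def consume(self, topic: str) -> list[str]:
        return self._topics.pop(topic, [])


class RabbitMQQueue(MessageQueue):
    """Hypothetical in-memory stand-in for a RabbitMQ-backed queue."""

    def __init__(self) -> None:
        self._queues: dict[str, list[str]] = {}

    def publish(self, topic: str, message: str) -> None:
        self._queues.setdefault(topic, []).append(message)

    def consume(self, topic: str) -> list[str]:
        return self._queues.pop(topic, [])


def notify_order_created(queue: MessageQueue) -> list[str]:
    # The caller depends only on the essence (MessageQueue),
    # never on the implementation detail (Kafka vs. RabbitMQ).
    queue.publish("orders", "order-created")
    return queue.consume("orders")
```

&lt;p&gt;Swapping &lt;code&gt;KafkaQueue&lt;/code&gt; for &lt;code&gt;RabbitMQQueue&lt;/code&gt; requires no change to &lt;code&gt;notify_order_created&lt;/code&gt;: the implementation changed, the essence did not.&lt;/p&gt;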

&lt;h2&gt;
  
  
  AI's Growing Role in Code Implementation
&lt;/h2&gt;

&lt;p&gt;AI has already started making its mark on code implementation. Tools like GitHub Copilot have demonstrated the ability to generate code snippets and assist developers in writing code more efficiently. As AI becomes increasingly proficient at writing code, developers may find themselves relying more on these AI-powered tools to handle implementation details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shifting Focus to the Essence
&lt;/h2&gt;

&lt;p&gt;As AI takes over some aspects of code implementation, software engineers can redirect their focus towards the essence of software development. This entails delving into software architecture, designing bigger building blocks, and making critical decisions regarding the overall structure of the software. Instead of spending significant time on writing lines of code, developers will have the opportunity to engage in more strategic and creative thinking.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Continued Need for Computer Science Knowledge
&lt;/h2&gt;

&lt;p&gt;While AI may automate certain coding tasks, the fundamental knowledge of computer science remains crucial for software engineers. Understanding concepts such as complexity, DB schemas and their design, and security risks will continue to be essential. However, developers may no longer need to implement low-level algorithms from scratch, as AI can assist in generating optimized solutions. It is worth noting that many developers already rely on existing resources like Wikipedia and Stack Overflow for code snippets.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Impact of AI on Software Engineering
&lt;/h2&gt;

&lt;p&gt;The impact of AI on software engineering is expected to follow the path of abstraction. Much like the progression from Assembly language to high-level programming languages like Python, AI's role will elevate abstraction in the development process. Software engineers will still be in demand, but their responsibilities will shift towards higher-level tasks that leverage AI-generated code. This transition will enable developers to focus on designing sophisticated systems, ensuring robust software architecture, and effectively utilizing AI tools to enhance productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we contemplate the future of software engineering in the era of AI, one thing is clear: software engineers will continue to play a vital role in the industry. While AI may take over certain aspects of code implementation, the essence of software development and the need for strategic decision-making will remain essential. Software engineers must adapt to these changes by embracing AI-powered tools and leveraging their expertise to contribute to higher-level tasks. The future of software engineering holds exciting possibilities, and it's up to us to navigate this evolving landscape with enthusiasm and adaptability.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>career</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>How software architecture changes in the era of AI</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Mon, 05 Jun 2023 14:20:00 +0000</pubDate>
      <link>https://dev.to/bolshchikov/how-software-architecture-changes-in-the-era-of-ai-363f</link>
      <guid>https://dev.to/bolshchikov/how-software-architecture-changes-in-the-era-of-ai-363f</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of software development, architecture plays a crucial role in shaping the overall structure and design of a system. Traditionally, architecture discussions have followed an outside-in approach, where architects and decision-makers envision the future architecture and communicate their ideas to the development team. However, this standard approach often falls short of delivering the desired results. In this blog post, we explore the limitations of the outside-in approach and propose alternative strategies to improve the architecture design process.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pitfalls of the Outside-In Approach
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Starting from a blank canvas every time&lt;/strong&gt;:&lt;br&gt;
In reality, software development rarely involves starting from scratch. Instead, we build upon existing architectures with the goal of enhancing and improving them. When architects and decision-makers attempt to invent the future architecture without sufficient familiarity with the current codebase, their assumptions may be inaccurate or flawed. This lack of context can lead to suboptimal design decisions that hinder the development process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Communication Gaps&lt;/strong&gt;:&lt;br&gt;
Effective communication between decision-makers and engineers is vital for successful architecture implementation. However, when architects pass down architecture requirements through documentation or verbal means, information can be lost or misinterpreted along the way. This breakdown in communication can result in a misalignment between the envisioned architecture and its actual implementation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lack of Feedback Loop&lt;/strong&gt;:&lt;br&gt;
One significant drawback of the outside-in approach is the absence of a proper feedback loop. Architects, team leads, and other stakeholders often lack a non-intrusive method to validate and review the implemented architecture. As a result, mistakes and issues may only be identified late in the development cycle, leading to costly and time-consuming rework.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Redefining the Architecture Design Process
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generating High Level Architecture from Code&lt;/strong&gt;:
To overcome the limitations of the outside-in approach, one alternative is to generate the architecture directly from the codebase. The code itself represents the ultimate source of truth, providing insights into the existing structure, dependencies, and patterns. By analyzing the codebase automatically, engineers can gain a deeper understanding of the architecture and make informed decisions for further enhancements.
Some tools are getting close to doing so. For example, &lt;a href="https://marketplace.visualstudio.com/items?itemName=archsense.architecture-view-nestjs&amp;amp;ssr=false#overview" rel="noopener noreferrer"&gt;architecture-view-nestjs&lt;/a&gt; generates 2 levels of architecture for NestJs applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3llvzbxwiiqtgx68r8n.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3llvzbxwiiqtgx68r8n.gif" alt="Architecture View NestJS visualization" width="600" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leveraging AI for Architecture Generation&lt;/strong&gt;:&lt;br&gt;
Then, we can enhance the deduced architecture by leveraging artificial intelligence (AI). With the wealth of existing architectural patterns and best practices, training AI models like GPT on this knowledge can enable them to generate new and innovative architectural solutions. By tapping into AI's capabilities, architects can benefit from an augmented design process, expanding the scope of possibilities and ensuring comprehensive exploration of potential architectures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Architecture Updates&lt;/strong&gt;:&lt;br&gt;
To facilitate better collaboration and feedback, it is essential to establish a near real-time architecture view. This view would enable architects, team leads, and stakeholders to monitor the progress of architecture implementation throughout the development process. By having a clear and up-to-date understanding of the evolving architecture, early detection of implementation issues becomes possible, allowing for timely intervention and course correction.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The standard outside-in approach to architecture discussions often falls short due to its detachment from the existing codebase, communication gaps, and the lack of a feedback loop. By embracing alternative strategies, such as deducing architecture from code, leveraging AI for generating architectural solutions, and establishing real-time architecture updates, we can revolutionize the architecture design process. These new approaches encourage a more holistic understanding of the system, enable better collaboration between decision-makers and implementers, and empower teams to detect and rectify issues early on. Ultimately, by reimagining how we approach architecture discussions, we can unlock the potential for more robust, efficient, and successful software systems.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>future</category>
    </item>
    <item>
      <title>Why measuring experience in years is not good enough</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Mon, 01 May 2023 14:14:55 +0000</pubDate>
      <link>https://dev.to/bolshchikov/why-measuring-experience-in-years-is-not-good-enough-1j8i</link>
      <guid>https://dev.to/bolshchikov/why-measuring-experience-in-years-is-not-good-enough-1j8i</guid>
      <description>&lt;p&gt;Experience plays a tremendous role in any professional career and at the end of the day, every employer is looking for the best, usually the most experienced, people. A common way to measure experience is in years. However, I would argue that such an approach might be misleading.&lt;/p&gt;

&lt;p&gt;Here I want to offer an alternative approach to evaluating experience and provide some practical questions that you can use to aid your understanding of what experience truly is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1. The definition.
&lt;/h2&gt;

&lt;p&gt;Let’s start by looking at the definition. &lt;a href="https://www.merriam-webster.com/dictionary/experience" rel="noopener noreferrer"&gt;According to Merriam-Webster&lt;/a&gt;, experience can be defined as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a: direct observation of or participation in events as a basis of knowledge&lt;/p&gt;

&lt;p&gt;b: the fact or state of having been affected by or gained knowledge through direct observation or participation&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Simply, experience is gained from direct observation and/or participation. Therefore, the more we encounter different types of problems and their solutions, the more our experience grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2. Real-life.
&lt;/h2&gt;

&lt;p&gt;But what happens in real life? Most of the actions in our daily jobs are so abstract that we just repeat them over and over again. They are usually simplistic tasks: like workers on an assembly line, each of us does one part of an overall product without being able to act on the bigger picture. Dividing work into repetitive tasks makes the overall business efficient, but it may hinder attempts to gain broader experience.&lt;/p&gt;

&lt;p&gt;What is true for those on industrial assembly lines is true for many software engineers. We are surrounded by infrastructure, libraries, and frameworks that assist us in carrying out complex tasks. This allows us to produce code quickly, and all we are left to do is technical design and its implementation. &lt;/p&gt;

&lt;p&gt;Solving new problems and implementing innovative features can be extremely rewarding and stimulate our intellect. Encountering and dealing with the same type of problems can also create a depth of understanding and expertise. However, there is a threshold that, when crossed, means repeated tasks become mind-numbing and fail to stimulate our creativity. At these times, we need to switch to new challenges. The problem is that many of us are stuck at that repetitive point and count this as additional experience. So while experience measured in time may be growing, experience measured in intellectual growth may have stalled. &lt;/p&gt;

&lt;h2&gt;
  
  
  Part 3. The valid alternative.
&lt;/h2&gt;

&lt;p&gt;True experience comes from solving different types of problems in a variety of ways. &lt;/p&gt;

&lt;p&gt;For example, take two candidates: from one perspective, you could say that the one who has worked for five years has more experience than the one who has worked for three. However, if we assessed them more qualitatively, we might find that the candidate with three years of experience has been exposed to a wider diversity of problems in that time.&lt;/p&gt;

&lt;p&gt;This is the reason why software engineers who join startups in the early stages are more likely to gain significantly more familiarity with diverse areas while perhaps compromising on knowledge depth as a result.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 4. The practical way
&lt;/h2&gt;

&lt;p&gt;I believe it’s fair to say that every position has a finite set of types of problems that one can encounter. For example, one way, but certainly not the only one, to categorize the problems of software engineering would be in the following way, in order of complexity:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Investigating and solving a bug&lt;/li&gt;
&lt;li&gt;Implementing a feature within given specifications&lt;/li&gt;
&lt;li&gt;Designing and implementing a feature according to a product specification&lt;/li&gt;
&lt;li&gt;Architecting the solution across different boundaries (e.g. front-end, back-end, devops)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thus, if we were evaluating experience in terms of diversity of experience rather than simply quantity of time, we might wish to understand the level of complexity a potential employee has been exposed to. This can be achieved by probing candidates more about the nature of their work experience. From my own experience, I have found that the majority of full-stack developers have experience working with the first and second categories above, while the best candidates have experience in three or more of the categories. &lt;/p&gt;

&lt;p&gt;I also make sure to ask candidates a more subjective question: to tell me the most challenging problem they had to face in their career, or the problem-solving process they are most proud of. The answers to this question are a good indicator of the edge of a candidate’s experience, and in ideal situations, their answers would fall in the third or fourth category of problem-solving.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 5. Conclusion
&lt;/h2&gt;

&lt;p&gt;If you are looking to optimize your professional growth, then seeking out fast-growing work environments can give you the space to gain a wider diversity of experience. Such a place, for example a startup that grows from hundreds to thousands of users within a short timeframe, is likely to encounter many new and complex constraints and problems to solve. Thus, within these environments, you’ll be able to gain not just quantity of experience, but a true depth and range of experience too, which will make you a valuable asset for any company.&lt;/p&gt;

</description>
      <category>career</category>
      <category>developer</category>
      <category>growth</category>
      <category>beginners</category>
    </item>
    <item>
      <title>4 strategic choices that may boost your career in software engineering</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Sat, 29 Apr 2023 19:33:13 +0000</pubDate>
      <link>https://dev.to/bolshchikov/4-strategic-choices-that-may-boost-your-career-in-software-engineering-1il2</link>
      <guid>https://dev.to/bolshchikov/4-strategic-choices-that-may-boost-your-career-in-software-engineering-1il2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Disclaimer: it's a very opinionated post!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's say you have 5 years of experience as a software engineer and you come to an interview for a senior position. To your surprise, you don't get the job because, as you were told, you lack experience. How come?&lt;/p&gt;

&lt;p&gt;Honestly, professional experience has very little to do with the number of years you've been in business. Instead, it's the variety of experience you gain on the way.&lt;/p&gt;

&lt;p&gt;I've interviewed over 100 people within just 2 years, and it's easy to spot candidates who planned their careers early on.&lt;/p&gt;

&lt;p&gt;So what do successful candidates do differently?&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Choose the right company over a title
&lt;/h2&gt;

&lt;p&gt;Let's face it, landing your first job is hard. No pressure, but it also plays a significant role in your future career.&lt;/p&gt;

&lt;p&gt;Usually, at that point, the main choice is between a small startup (maybe with a fancy title) or a big established company. I don't think that choosing your company by size is right.&lt;/p&gt;

&lt;p&gt;Here are 3 signs the company is a good fit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They are willing to invest in your professional growth;&lt;/li&gt;
&lt;li&gt;They are a team of smart people who you will interact with daily;&lt;/li&gt;
&lt;li&gt;They hold you accountable for your actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's dig a little deeper into each of the above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Investment in your professional growth
&lt;/h3&gt;

&lt;p&gt;In a nutshell, it means the company has practices in place that contribute to the professional growth of employees. These practices don't have to be formal, such as courses or lectures. Simple things work well too: regular code review, or involving junior engineers in design and architecture reviews. All these indicate a good engineering culture that aims to grow engineers, not only ship features or fix bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Great people
&lt;/h3&gt;

&lt;p&gt;A good team leader builds a team of engineers of different levels. A balanced team has a spectrum of engineers, e.g. 1 junior, 2 mid, and 2 senior engineers for a team of five.&lt;/p&gt;

&lt;p&gt;Why should you care about it?&lt;/p&gt;

&lt;p&gt;With experienced colleagues you get to read clean code and understand the logic behind those lines. It's like learning to cook by looking at a finished meal - you can guess some of the ingredients, but that doesn't mean you can cook it.&lt;/p&gt;

&lt;p&gt;Daily interactions with senior colleagues will teach you to analyze different approaches, reason about them, and make the right technical decisions. It's a skill that usually takes years to develop. Having experienced colleagues around you can significantly shorten this learning time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Holding you accountable
&lt;/h3&gt;

&lt;p&gt;We act differently when we know that our ass is on the line. On the one hand, such situations put you under pressure and intuitively we want to avoid them.&lt;/p&gt;

&lt;p&gt;On the other hand, such situations allow tremendous growth because they force you to operate outside of your comfort zone. Good companies know that and nurture a culture of accountability (usually also backed up with safety nets).&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Become an expert in your domain
&lt;/h2&gt;

&lt;p&gt;Software engineering has many specialties: front-end, back-end, DevOps, databases, and many others. Many companies hire full-stack engineers, though - so what should you do?&lt;/p&gt;

&lt;p&gt;Young engineers face this dilemma quite often - should I know a little front-end, back-end, and one type of database, or focus on just one area, say, front-end?&lt;/p&gt;

&lt;p&gt;As you might guess from the title, I believe it's better to be an expert in at least one area than a surface-level generalist. Every product goes through a period of weird behavior before its stack matures. For example, your NodeJS server crashes, say, once every 3 days, your database gets random connection timeouts, or your modal dialog keeps getting stuck. Fixing inconsistent behavior requires a solid understanding of the underlying technology and the experience to have a hunch about where to look.&lt;/p&gt;

&lt;p&gt;When such moments come, experts are in the limelight and save the day. This is why established companies are interested in hiring more experts rather than generalists.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Ship a product
&lt;/h2&gt;

&lt;p&gt;That feeling when you see customers using the product that you've built is extraordinary. It's also a great responsibility and requires you to be more than just a coder.&lt;/p&gt;

&lt;p&gt;To feel this, you need to master the disciplines you’re not necessarily an expert in. These often include understanding product value, task prioritization, engineering, UX, QA, customer support, and feedback, you name it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Real artists ship"&lt;br&gt;
-Steve Jobs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having a proven record of a shipped product demonstrates your ability to get shit done and serves as a strong indicator of your abilities to the hiring manager.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Don't do more of the same
&lt;/h2&gt;

&lt;p&gt;Your true experience comes from the types of problems you solve. You can have 7 years of experience, but it’s not so relevant if you solve the same problem over and over again, e.g. building the same type of application or the same type of website (landing pages). You’ll get exceptional at these, but it will be hard for you to change the job. Most likely your next position will be almost the same but in a different company. Why? Because you have 7 years of experience solving the same type of problem and you are damn good at that.&lt;/p&gt;

&lt;p&gt;As an alternative, you can have just 3 years of experience, but these 3 years are a rollercoaster - you get to build an application, set testing infrastructure, improve monitoring, then build a different type of application, and so on. Combining your domain expertise with a handful of problems you solve on the way gives you the true experience.&lt;/p&gt;

&lt;p&gt;Being an expert and solving different types of problems makes you experienced.&lt;/p&gt;

&lt;p&gt;You might notice, I didn't mention the number of years of experience at all. You may meet a person with 2-3 or even 4 years of experience and be amazed by their achievements. Most likely, these 2-3-4 years have been very rich with all sorts of engineering problems, solving which this person grew to be this outstanding professional you want to be.&lt;/p&gt;

&lt;p&gt;Stick for a while but leave early if it’s not a good fit.&lt;/p&gt;

&lt;p&gt;It all takes time. Understanding the culture, learning architecture, building your reputation - it doesn't happen overnight.&lt;/p&gt;

&lt;p&gt;Depending on the size of the company, this time might vary. As a rule of thumb, for a mid-size company, it takes about 3-6 months to onboard an employee and up to a year for a new employee to bring some meaningful results.&lt;/p&gt;

&lt;p&gt;Sometimes, a person switches jobs every year or two. If this is you - it’s okay, but be ready to give a good reason why this happened. &lt;/p&gt;

&lt;p&gt;Sometimes, it just doesn't feel right. Try to understand what's wrong and try to work it out. Try to be objective and see the full picture. There's a very good chance things will improve.&lt;/p&gt;

&lt;p&gt;Sometimes, things don’t work out. And that's okay too. Accept it and move on. No point in staying to collect "the years of experience".&lt;/p&gt;

&lt;p&gt;Still have any doubts about career choice? Reach out to me. Seriously, I mean it. We can schedule a call and discuss it further.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>career</category>
      <category>motivation</category>
    </item>
    <item>
      <title>Building a Successful Junior Software Engineering Career: A Path to Success</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Thu, 27 Apr 2023 11:28:00 +0000</pubDate>
      <link>https://dev.to/bolshchikov/building-a-successful-junior-software-engineering-career-a-path-to-success-3n81</link>
      <guid>https://dev.to/bolshchikov/building-a-successful-junior-software-engineering-career-a-path-to-success-3n81</guid>
      <description>&lt;p&gt;Throughout my career, I have conducted more than 700 professional interviews for various software engineering positions.&lt;/p&gt;

&lt;p&gt;Successful candidates surprisingly share the same trait: the beginning of their careers was incredibly similar. I believe that when you are just starting, there should not be too much freedom, since tackling the right problems will define your path. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage 1: Learning to write production-grade code&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Your main goal at the beginning of your software engineering career is to learn how to transform your “writing code” abilities into “writing code for real users (production)” abilities. This requires a lot more than just committing code to a version control system. The list might vary between different companies, but in a nutshell, you should learn the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Code readability - how to write code for others to read&lt;/li&gt;
&lt;li&gt;Teamwork - how to work in a team of other developers and QA&lt;/li&gt;
&lt;li&gt;Code reviews&lt;/li&gt;
&lt;li&gt;Pull requests&lt;/li&gt;
&lt;li&gt;Development tasks/tickets/issues&lt;/li&gt;
&lt;li&gt;Tests - how to write testable code and &lt;a href="https://martinfowler.com/articles/practical-test-pyramid.html" rel="noopener noreferrer"&gt;learn different types of tests&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Deployments - how your code is deployed to production&lt;/li&gt;
&lt;li&gt;Monitoring - how to monitor your code in production and make sure it works as intended&lt;/li&gt;
&lt;li&gt;Coping with production problems (urgents) - how to handle problems in production that happen to real users&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Countless books have been written on each of the topics above. While we won’t dive into them here, you must invest some time into learning more about all of them. This is the foundation to become a successful software engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage 2: Learn how to build simple architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When you feel comfortable delivering code to production, we can go up a level: learning to design software architecture solutions. The best way to accomplish this is to start small and ask your team leader to give you less defined tasks. This creates an opportunity for you to brainstorm the potential solution, design it, discuss it with senior peers, receive feedback, and finally, to implement it. The emphasis at this stage is on suggesting solutions and receiving feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage 3: Learn to design subsystems or their parts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This stage is a logical continuation of the previous one: we are moving from features to designing important parts of the system. This requires a deep understanding of the whole picture (system) and how the part in question, the one you are working on, fits into it.&lt;br&gt;
The course of action is similar to the previous stage, in that you will propose solutions and receive feedback. However, the scope of the project has now grown. You should consider the overall architecture of the part, its API communication with other parts of the system, the use of industry-standard solutions, different design patterns, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage 4: Build service from zero to production&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There’s no substitute for the experience of building a service from the ground up and delivering it to production. This is a rare opportunity, especially in well-established engineering companies. Thus, whenever you have the chance, be sure to jump on it!&lt;br&gt;
What should you expect here? Ouch… quite a lot.&lt;br&gt;
• You should learn to evaluate different approaches to solve the problem you have, taking into account their advantages, disadvantages, and cost of implementation.&lt;br&gt;
• You should also experience how to deploy a new service to production.&lt;br&gt;
• If, in all previous cases, you could remain in one domain, e.g., front-end or back-end, at this stage, you should be able to cross them and start writing code in both of them, including DevOps for the deployment part.&lt;br&gt;
• You have a chance to define and/or adopt coding standards for your project.&lt;br&gt;
• Last but not least, you’ll need to understand how to monitor the behavior of the service in production and the load, as well as what metrics and KPIs to look at.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage 5: Build a product&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Whether you are in a startup or your well-established company is discovering a new possibility, you should participate in building a product. You may be wondering,  “How is it different from stage 1?”&lt;br&gt;
We are always contributing code to some product. At this stage, you must shift your concentration from writing a good code to building a good product.&lt;br&gt;
To achieve that, what do you pay attention to here?&lt;br&gt;
Your work with the product team is mainly where you will direct your energy. Why are you working on this or that feature? What customer problem does it solve? Have you considered all feasible edge cases? How will a certain feature perform under load?&lt;br&gt;
Those are just a few among the many questions engineers should ask product people and themselves, too.&lt;br&gt;
This is the step where engineering and business value finally interact.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage 6: From here to infinity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The easy part is now over. This journey usually takes around 3-5 years. This is when the ‘hard’ part begins. Nobody can tell you what your goals should be but you. You will have to decide for yourself whether you want to try the management path or go deeper into engineering craftsmanship.&lt;br&gt;
Whatever you decide, the stage that starts here has no end.&lt;/p&gt;

</description>
      <category>career</category>
      <category>beginners</category>
      <category>discuss</category>
      <category>learning</category>
    </item>
    <item>
      <title>A shortcut to become a go-to engineer in your team</title>
      <dc:creator>Sergey Bolshchikov</dc:creator>
      <pubDate>Mon, 24 Apr 2023 14:19:08 +0000</pubDate>
      <link>https://dev.to/bolshchikov/a-shortcut-to-become-a-go-to-engineer-in-your-team-32mi</link>
      <guid>https://dev.to/bolshchikov/a-shortcut-to-become-a-go-to-engineer-in-your-team-32mi</guid>
      <description>&lt;p&gt;There are very few people on the team who know, with the necessary depth, how things work. They are usually a “go-to” for the other team members and are often involved in strategic architectural decisions of the team. They are also usually the most seasoned team members and witnessed a lot of code being written. Does it mean that you have to wait for several years to get to that place? The answer is not necessarily if you invest in your knowledge early on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why understand software architecture as soon as you can?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Communication&lt;/strong&gt;: Software architecture communicates the software system's design and structure to stakeholders, ensuring better collaboration and reducing the chances of misunderstandings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality and Performance&lt;/strong&gt;: Understanding software architecture is essential for ensuring the quality and performance of software systems by identifying potential issues and bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning and Design&lt;/strong&gt;: Understanding software architecture helps to plan and design software systems, ensuring they are scalable and adaptable.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  We don’t see it the same way
&lt;/h2&gt;

&lt;p&gt;Different levels of experience, knowledge, and seniority can lead to problems in software architecture by causing misunderstandings and communication issues between team members. For instance, those with less experience may not be able to follow the technical details provided by experienced team members, leading to confusion and misinterpretation of the software design. Similarly, senior team members may not be able to effectively communicate the impact of architectural decisions on the organization's goals to junior team members, leading to a lack of alignment and direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Visualize product architecture in real time
&lt;/h2&gt;

&lt;p&gt;Having a tool that visualizes product architecture in near real time, where everyone sees the same picture, can have a significant impact on software quality and improve communication.&lt;/p&gt;

&lt;p&gt;Let's take the popular open-source project &lt;a href="https://github.com/ToolJet/ToolJet" rel="noopener noreferrer"&gt;ToolJet&lt;/a&gt; as an example.&lt;/p&gt;

&lt;p&gt;It's immediately obvious how complex the ToolJet project is. Analyzing its code and understanding its architecture can take several hours. However, you can use a &lt;a href="https://marketplace.visualstudio.com/items?itemName=archsense.architecture-view-nestjs&amp;amp;ssr=false#overview" rel="noopener noreferrer"&gt;VSCode extension&lt;/a&gt; that visualizes everything at once with the necessary level of abstraction. Seeing the high-level picture allows you to spot potential problems well before they become significant.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3azr9bvqnf1871i8nbca.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3azr9bvqnf1871i8nbca.gif" alt="Visualization by Architecture View NestJS" width="600" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Credit: Visualized by &lt;a href="https://marketplace.visualstudio.com/items?itemName=archsense.architecture-view-nestjs&amp;amp;ssr=false#overview" rel="noopener noreferrer"&gt;Architecture View NestJS&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What does it improve?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;When &lt;strong&gt;entering a new or legacy code base&lt;/strong&gt;, visualizing the product architecture can help you understand the system's structure quickly, making it easier to navigate and locate relevant sections of code.&lt;/li&gt;
&lt;li&gt;When &lt;strong&gt;onboarding new employees&lt;/strong&gt;, visualizing the product architecture can help them get up to speed with the system's design and structure, ensuring they are better equipped to contribute to the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keeping up with changes&lt;/strong&gt; in the system can be challenging, but visualizing the product architecture can help you identify potential impacts of any changes, ensuring you can make informed decisions and minimize the risk of introducing new issues.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Visualizing the architecture and seeing how code changes impact overall architecture can easily allow you to become another “go-to” person for your team.&lt;/p&gt;

</description>
      <category>nestjs</category>
      <category>architecture</category>
      <category>productivity</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
