<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrew Panfilov</title>
    <description>The latest articles on DEV Community by Andrew Panfilov (@andrew_panfilov).</description>
    <link>https://dev.to/andrew_panfilov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1444717%2Fbfcb60a8-44f9-4941-9dc0-7020faefadbe.jpeg</url>
      <title>DEV Community: Andrew Panfilov</title>
      <link>https://dev.to/andrew_panfilov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andrew_panfilov"/>
    <language>en</language>
    <item>
      <title>Chatbot Prototype: Architectural Proposal</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Wed, 22 May 2024 11:46:23 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/chatbot-prototype-architectural-proposal-j5n</link>
      <guid>https://dev.to/andrew_panfilov/chatbot-prototype-architectural-proposal-j5n</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;This article presents an example of an architectural proposal for creating an intelligent chatbot prototype built on an LLM. It can serve as useful learning material for those who have never written such documents; for those who have written similar proposals, it may enrich their experience. In any case, I would be glad to hear readers' comments and to discuss what should be added to or corrected in this document.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Definition
&lt;/h2&gt;

&lt;p&gt;A large university needs a chatbot that employees can ask free-form questions about students: attendance, statistics, aggregations, and various other database queries. The university attempted to build a Text-to-SQL system on its own; however, the generated SQL query returned incorrect results in three out of ten cases and correct results in seven out of ten. This accuracy level does not satisfy the university, so a prototype chatbot needs to be created that answers university employees’ questions about students with 100% accuracy. This proposal describes a system capable of providing the most accurate information about students from the university database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analysis
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Terms and Acronyms
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Component&lt;/strong&gt; refers to a runtime entity that can be deployed independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployable artifact&lt;/strong&gt; is the application code as it runs on production: compiled, built, bundled, minified, optimized, and so on. Most often, it's a single binary or a bunch of files compressed in an archive. One can store and version an artifact. An artifact should be configurable in order to be deployable in any environment. For example, if one needs to deploy to staging and production servers, one should be able to use the same artifact. Only the configuration must change, not the artifact itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personally Identifiable Information (PII)&lt;/strong&gt; refers to any data that could potentially identify a specific individual. This information can include direct identifiers, such as name, social security number, driver’s license number, and passport number, which can directly recognize an individual. It also encompasses indirect identifiers, such as date of birth, place of birth, and mother’s maiden name, which, when combined with other information, can be used to deduce an individual's identity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container&lt;/strong&gt; is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose&lt;/strong&gt; is a tool designed for defining and orchestrating multi-container Docker applications. It allows users to use a YAML file to configure an application’s services, networks, and volumes. By utilizing a single command, users can then initiate and activate all the services outlined in their configuration. It streamlines the deployment process, enabling the management of the entire lifecycle of an application through straightforward commands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL query&lt;/strong&gt; is a request for data or information from a database table or combination of tables in SQL (Structured Query Language). This language is used for managing and manipulating relational databases. SQL queries can perform a variety of tasks, including retrieving specific information from a database, inserting new data, updating existing data, and deleting data. Queries are constructed using specific SQL syntax and can range from simple commands to retrieve all records from a database table to complex queries involving multiple operations like joins, subqueries, and aggregate functions to extract and manipulate data according to specific requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chatbot&lt;/strong&gt; is a software application designed to simulate conversation with human users, especially over the Internet. Chatbots can function in a wide range of environments, including websites, mobile apps, messaging platforms, and telephone services. They are used for various purposes such as customer service, information acquisition, and entertainment. Chatbots can be simple, based on pre-defined scripts to handle specific tasks, or more complex, utilizing natural language processing and artificial intelligence to engage in more open-ended conversations and learn from interactions to improve their responses over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snowflake&lt;/strong&gt; database refers to Snowflake Inc.'s cloud-based data warehousing service. It is a fully managed service that allows organizations to store and analyze data using cloud-based hardware and software. Snowflake's architecture is unique because it separates storage and compute, enabling users to scale storage and compute independently. This means organizations can adjust their storage capacity and computational power based on their current needs without affecting the other.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SPA&lt;/strong&gt; (Single Page Application) is a web application or website that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from the server. This approach enables a more fluid and faster user experience, as it minimizes the amount of data transferred between the server and the client, reduces loading times, and provides a seamless interaction similar to desktop applications. SPAs use AJAX and HTML5 to asynchronously load content and update the webpage with new data from the web server, without the need for page refreshes.&lt;/li&gt;
&lt;/ol&gt;
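&lt;p&gt;As an illustration of term 7 (SQL query), here is a hypothetical aggregate query over an assumed student-attendance schema; the table and column names are invented for this example only:&lt;/p&gt;

```sql
-- Hypothetical aggregate SQL query: count absences per student for one
-- school year. The students/absences schema is assumed, not specified
-- by the proposal.
SELECT s.student_id,
       s.full_name,
       COUNT(a.absence_id) AS absences
FROM students s
JOIN absences a ON a.student_id = s.student_id
WHERE a.school_year = '2023/24'
GROUP BY s.student_id, s.full_name
ORDER BY absences DESC;
```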

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;This article uses system-centric requirements, which are not the same as user-centric user stories and may not state any business value. At the same time, this is neither a contradiction of nor an opposite approach to user-centric user stories: a user story that mentions business value may contain several system-centric requirements.&lt;/p&gt;

&lt;p&gt;Additionally, it is essential to understand the difference between functional and non-functional requirements.&lt;/p&gt;

&lt;p&gt;From &lt;a href="https://en.wikipedia.org/wiki/Non-functional_requirement"&gt;https://en.wikipedia.org/wiki/Non-functional_requirement&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“In systems engineering and requirements engineering, a non-functional requirement (NFR) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. They are contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture, because they are usually architecturally significant requirements.&lt;/p&gt;

&lt;p&gt;Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be. Functional requirements are usually in the form of "system shall do ", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function, a black box description input, output, process and control functional model or IPO Model. In contrast, non-functional requirements are in the form of "system shall be ", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed.&lt;/p&gt;

&lt;p&gt;Non-functional requirements are often called "quality attributes" of a system. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints", "non-behavioral requirements", or "technical requirements". Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities—that is, non-functional requirements—can be divided into two main categories:&lt;/p&gt;

&lt;p&gt;1) Execution qualities, such as safety, security and usability, which are observable during operation (at run time).&lt;/p&gt;

&lt;p&gt;2) Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the system.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Written once, read many times

&lt;ol&gt;
&lt;li&gt;Specifications of an IT system, like other texts, are written once and read many times over the lifespan of the system. Two observations follow: reading time matters more than writing time, and upfront quality is free. That is, it makes sense to optimize the text for readability and comprehensibility, even if that means the author has to put in more effort.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;No Nobel prize in literature

&lt;ol&gt;
&lt;li&gt;Specifications are technical texts written for one purpose only: the successful transfer of meaning from one person to another, over distance and time. The author will never win a Nobel prize in literature for it. Therefore, it does not make sense to put effort into an aesthetic, pleasurable text.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Specifications are for systems

&lt;ol&gt;
&lt;li&gt;System specifications describe what is required from the system. This should not at all prevent you from describing what the users do (in the sense of a business process description), but you want to do that in other parts of the documentation (use cases, for instance).&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Detailed written documentation has very low communication effectiveness

&lt;ol&gt;
&lt;li&gt;Therefore, writing detailed functional requirements might not be the best idea. Consider NOT doing it if you expect a high rate of requirements creep. Assume a change rate of 2-3% per month if you do not know the specific change rate of your project (a rate that is actually high, though widely experienced).&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is a well-structured explanation of a requirements format:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pz78jmxtw2uki01afre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pz78jmxtw2uki01afre.png" alt="How to write requirements" width="800" height="532"&gt;&lt;/a&gt;&lt;br&gt;
Large image: &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pz78jmxtw2uki01afre.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pz78jmxtw2uki01afre.png&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://web.archive.org/web/20210225012718/http://planetproject.wikidot.com/writing-atomic-functional-requirements"&gt;https://web.archive.org/web/20210225012718/http://planetproject.wikidot.com/writing-atomic-functional-requirements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://web.archive.org/web/20210321200514/http://www.agilemodeling.com/essays/communication.htm"&gt;https://web.archive.org/web/20210321200514/http://www.agilemodeling.com/essays/communication.htm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://web.archive.org/web/20210302042837/http://tynerblain.com/blog/2009/04/22/dont-use-shall/"&gt;https://web.archive.org/web/20210302042837/http://tynerblain.com/blog/2009/04/22/dont-use-shall/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  1. User's authentication
&lt;/h4&gt;

&lt;p&gt;The system must provide a user with the capability to request student-related information in free-form textual communication with the chatbot if the following condition is true:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the user has successfully logged in with the provided login and password&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  2. Free-form request's bucketing
&lt;/h4&gt;

&lt;p&gt;The system must recognize which predefined SQL query a free-form textual request about student information relates to if all of the following conditions are true:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sent the request through the chatbot&lt;/li&gt;
&lt;li&gt;The request semantically belongs to student information&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  3. Recognition failure
&lt;/h4&gt;

&lt;p&gt;The system must reply to a user with a recognition-failure message if all of the following conditions are true:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sent the request through the chatbot&lt;/li&gt;
&lt;li&gt;The request does not semantically belong to any of the predefined SQL queries that exist in the system&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  4. Correctness confirmation recognition
&lt;/h4&gt;

&lt;p&gt;The system must provide a user with the capability to confirm the correctness of predefined SQL query recognition if all of the following conditions are true:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sent the request through the chatbot&lt;/li&gt;
&lt;li&gt;The request semantically belongs to a predefined SQL query that exists in the system&lt;/li&gt;
&lt;li&gt;The chatbot sent a confirmation question to the user, and the user confirmed that the free-form request belongs to the recognized SQL query&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  5. Additional parameters input
&lt;/h4&gt;

&lt;p&gt;The system must provide a user with the capability to input additional parameters for the recognized SQL query in a dialogue with the chatbot if the following condition is true:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The chatbot successfully recognized a predefined SQL query based on a free-form textual request from the user&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  6. Predefined SQL query results
&lt;/h4&gt;

&lt;p&gt;The system must reply to a user with a result of a predefined SQL query invocation to Snowflake in tabular textual format in a chatbot dialogue if all of the following conditions are true:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sent the request through the chatbot&lt;/li&gt;
&lt;li&gt;The request was recognized as belonging to one of the predefined SQL queries&lt;/li&gt;
&lt;li&gt;The predefined SQL query was sent to the Snowflake database&lt;/li&gt;
&lt;li&gt;Snowflake database responded with non-empty data&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Predefined queries to Snowflake database
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;A student missed three or more periods in the prior instructional week&lt;/li&gt;
&lt;li&gt;A student missed the same day of the week multiple times&lt;/li&gt;
&lt;li&gt;A student was suspended&lt;/li&gt;
&lt;/ol&gt;
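&lt;p&gt;These three predefined queries could be stored as parameterized templates along the following lines; the Snowflake schema (the attendance and suspensions tables, their columns) and the bind-variable names are assumptions for illustration only:&lt;/p&gt;

```sql
-- 1. A student missed three or more periods in the prior instructional week
SELECT student_id, COUNT(*) AS missed_periods
FROM attendance
WHERE status = 'ABSENT'
  AND class_date BETWEEN :week_start AND :week_end
GROUP BY student_id
HAVING COUNT(*) >= 3;

-- 2. A student missed the same day of the week multiple times
SELECT student_id, DAYNAME(class_date) AS weekday, COUNT(*) AS misses
FROM attendance
WHERE status = 'ABSENT' AND school_year = :school_year
GROUP BY student_id, DAYNAME(class_date)
HAVING COUNT(*) >= :min_misses;

-- 3. A student was suspended
SELECT student_id, suspension_start, suspension_end
FROM suspensions
WHERE :as_of_date BETWEEN suspension_start AND suspension_end;
```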

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How to read a component diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8aedj2vn4o2ttp52wtj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8aedj2vn4o2ttp52wtj.png" alt="UML components diagram: example" width="525" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For cognitive load reduction, a minimalist visual language is used to depict a static structure of deployable artifacts (called components) with links between them.&lt;/p&gt;

&lt;p&gt;The diagram uses a subset of the visual means of the standard UML component diagram, with only four elements: the component, the interface, and two types of connections (one with "provides" semantics, one with "uses" semantics).&lt;/p&gt;

&lt;p&gt;Nothing in this type of diagram should be read as a flow, so no arrows are permitted. Time-related communications, protocols, and data flow belong in a sequence diagram.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chatbot AI Prototype components diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mprx3z9jm82rkihe9wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mprx3z9jm82rkihe9wc.png" alt="UML components diagram: Chatbot AI Prototype" width="800" height="472"&gt;&lt;/a&gt;&lt;br&gt;
Large image: &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mprx3z9jm82rkihe9wc.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mprx3z9jm82rkihe9wc.png&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To simplify the prototype creation and ensure an efficient setup process, all the back-end components of the system will be deployed on a single machine using Docker Compose.&lt;/p&gt;
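&lt;p&gt;A docker-compose.yml for this single-machine layout might look roughly like the sketch below; the image tags, service names, ports, and environment variables are assumptions, not decisions:&lt;/p&gt;

```yaml
# Hypothetical docker-compose.yml for the prototype (single machine)
services:
  nginx:
    image: nginx:1.25
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./spa-dist:/usr/share/nginx/html:ro
    depends_on:
      - chatbot-api
  chatbot-api:
    build: ./chatbot-api
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      DATABASE_URL: postgres://chatbot:chatbot@postgres:5432/chatbot
      SNOWFLAKE_ACCOUNT: ${SNOWFLAKE_ACCOUNT}
  postgres:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_USER: chatbot
      POSTGRES_PASSWORD: chatbot
      POSTGRES_DB: chatbot
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

&lt;p&gt;Note that OpenAI and Snowflake stay outside the Compose file: they are external managed services that the ChatBot API reaches over the network.&lt;/p&gt;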

&lt;p&gt;Components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User's Browser&lt;/strong&gt;: The entry point for the user's interaction with the ChatBot SPA. It's where the user types their queries in free-form text.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx&lt;/strong&gt;: Acts as a reverse proxy, load balancer, and static asset server. It also handles basic authentication using login/password to ensure only authorized users can interact with the ChatBot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChatBot API&lt;/strong&gt;: The core component that processes user requests. It's a stateless backend service, possibly written in Python or NodeJS, that communicates with both the OpenAI API for natural language understanding and the Snowflake database for data retrieval. It also interacts with the Postgres database to fetch SQL query templates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Postgres&lt;/strong&gt;: Stores SQL query templates with placeholders. These templates are used by the ChatBot to construct SQL queries based on the user's request and additional parameters provided during the dialogue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI API&lt;/strong&gt;: Provides the natural language processing capabilities required to understand the user's free-form text requests and map them to the appropriate SQL query template from the Postgres database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snowflake&lt;/strong&gt;: The primary data storage where actual student attendance data is kept. Once the ChatBot constructs an SQL query, it's executed against this database to retrieve the requested information.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Chatbot AI Prototype process flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchxjvl5gegv4kbo4kwir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchxjvl5gegv4kbo4kwir.png" alt="UML sequence diagram: Chatbot AI Prototype" width="800" height="1115"&gt;&lt;/a&gt;&lt;br&gt;
Large image: &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chxjvl5gegv4kbo4kwir.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chxjvl5gegv4kbo4kwir.png&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The sequence diagram outlines the process flow of a user interacting with a ChatBot to retrieve data from a Snowflake database, using predefined SQL query templates stored in a Postgres database. The interaction involves several phases, from the initial request to the display of results.&lt;/p&gt;

&lt;p&gt;This sequence diagram does not cover the preparatory step of populating Postgres with the predefined query templates, their descriptions, and their embeddings.&lt;/p&gt;

&lt;p&gt;Here's a breakdown of each phase:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Initial Phase
&lt;/h4&gt;

&lt;p&gt;The user inputs the chatbot's web address into their browser. The browser requests the Single Page Application (SPA) for the chatbot from the Nginx server. Nginx returns the necessary SPA assets (HTML, JavaScript, CSS) to the browser. The user types a free-form textual message into the chatbot interface. The browser sends this message to the chatbot through Nginx, which forwards the authenticated request with the message to the ChatBot API.&lt;/p&gt;
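&lt;p&gt;The Nginx duties described above (serving SPA assets, basic authentication, proxying to the API) could be configured roughly as follows; the paths, port, and upstream name are assumptions:&lt;/p&gt;

```nginx
# Hypothetical nginx.conf fragment for the prototype
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/chatbot.crt;
    ssl_certificate_key /etc/nginx/certs/chatbot.key;

    # Basic authentication for all routes
    auth_basic           "Chatbot";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # SPA assets (HTML, JavaScript, CSS)
    location / {
        root      /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Authenticated requests forwarded to the ChatBot API
    location /api/ {
        proxy_pass http://chatbot-api:8000;
        proxy_set_header X-Remote-User $remote_user;
    }
}
```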

&lt;h4&gt;
  
  
  2. Predefined Query Recognition Phase
&lt;/h4&gt;

&lt;p&gt;The ChatBot API requests an embedding vector for the message from the OpenAI API. It then asks Postgres for the predefined query description closest to that embedding. After receiving the query description, the ChatBot API requests a dialogue descriptor from Postgres. The API constructs a prompt from the dialogue descriptor, the predefined query description, and the user's message, and sends it to the OpenAI API to generate the next dialogue turn, which is used to confirm with the user that the query was recognized correctly.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Predefined Query Parameters Fill Phase
&lt;/h4&gt;

&lt;p&gt;Upon user confirmation, the ChatBot API fetches the SQL template for the recognized predefined query from Postgres. The API requests a specific dialogue descriptor related to the predefined query. Using this descriptor, the API builds another prompt and sends it to the OpenAI API to generate dialogue for requesting additional parameters from the user. The user supplies the additional parameters requested by the chatbot.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Predefined Query Invocation Phase
&lt;/h4&gt;

&lt;p&gt;The ChatBot API component constructs the final SQL query for invocation in Snowflake using the template filled with parameters provided by the user. This SQL query is executed against the Snowflake database. Snowflake returns the query results, which the ChatBot API formats and sends back to the browser to be displayed to the user.&lt;/p&gt;
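&lt;p&gt;A minimal sketch of this final phase, assuming a hypothetical template and result schema; a production implementation would pass the collected parameters to the Snowflake connector as bind variables rather than formatting them into the SQL text:&lt;/p&gt;

```python
# Sketch: a stored SQL template (pyformat-style placeholders, as accepted by
# Python database connectors) and a renderer that turns the rows returned by
# Snowflake into the tabular textual format shown in the chat window.
# Template, table, and column names are hypothetical.
SQL_TEMPLATE = (
    "SELECT student_id, full_name, missed_periods "
    "FROM attendance_weekly "
    "WHERE school_year = %(school_year)s AND missed_periods >= 3"
)

def render_table(columns: list[str], rows: list[tuple]) -> str:
    # Compute one width per column so values line up under their headers.
    widths = [max([len(str(c))] + [len(str(r[i])) for r in rows])
              for i, c in enumerate(columns)]

    def line(vals):
        return " | ".join(str(v).ljust(w) for v, w in zip(vals, widths))

    header = line(columns)
    sep = "-+-".join("-" * w for w in widths)
    return "\n".join([header, sep] + [line(r) for r in rows])
```

&lt;p&gt;The empty-result branch is deliberately out of scope here: requirement 6 only covers the case where Snowflake responds with non-empty data.&lt;/p&gt;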

&lt;h2&gt;
  
  
  Tasks breakdown
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Predefined Snowflake SQL queries preparation

&lt;ol&gt;
&lt;li&gt;Query templates composing.&lt;/li&gt;
&lt;li&gt;Embeddings generation.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Predefined Snowflake SQL descriptions preparation.&lt;/li&gt;
&lt;li&gt;Postgres DDL composing (&lt;a href="https://github.com/supabase-community/chatgpt-your-files/blob/main/supabase/migrations/20231006212813_documents.sql"&gt;example&lt;/a&gt;)

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;vector&lt;/code&gt; extension setup.&lt;/li&gt;
&lt;li&gt;DDL for &lt;code&gt;documents&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;DDL for &lt;code&gt;document_sections&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;DDL for &lt;code&gt;chatbot_dialogues&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;DDL for &lt;code&gt;llm_conversations&lt;/code&gt; table.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Docker-compose.yml composing.&lt;/li&gt;
&lt;li&gt;ChatBot API back-end

&lt;ol&gt;
&lt;li&gt;GitHub repository with CI/CD (GitHub Actions) preparation.&lt;/li&gt;
&lt;li&gt;Initial codebase composing.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;ChatBot SPA 

&lt;ol&gt;
&lt;li&gt;GitHub repository with CI/CD (GitHub Actions) preparation.&lt;/li&gt;
&lt;li&gt;Initial codebase composing.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Nginx configuration composing.&lt;/li&gt;
&lt;li&gt;Provisioning

&lt;ol&gt;
&lt;li&gt;GitHub repository with CI/CD (GitHub Actions) preparation.&lt;/li&gt;
&lt;li&gt;Terraform scripts composing.&lt;/li&gt;
&lt;li&gt;Hetzner VPS setup.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;
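&lt;p&gt;The Postgres DDL task could start from a sketch like the one below, loosely following the linked example; every column beyond the table names listed above is an assumption for illustration:&lt;/p&gt;

```sql
-- Hypothetical migration sketch for the template store
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id           bigserial PRIMARY KEY,
    name         text NOT NULL,        -- predefined query title
    sql_template text NOT NULL,        -- template with placeholders
    created_at   timestamptz DEFAULT now()
);

CREATE TABLE document_sections (
    id          bigserial PRIMARY KEY,
    document_id bigint REFERENCES documents (id),
    content     text NOT NULL,         -- query description
    embedding   vector(1536)           -- OpenAI embedding dimension
);

CREATE TABLE chatbot_dialogues (
    id          bigserial PRIMARY KEY,
    document_id bigint REFERENCES documents (id),
    descriptor  jsonb NOT NULL         -- dialogue descriptor
);

CREATE TABLE llm_conversations (
    id          bigserial PRIMARY KEY,
    dialogue_id bigint REFERENCES chatbot_dialogues (id),
    messages    jsonb NOT NULL,
    created_at  timestamptz DEFAULT now()
);

CREATE INDEX ON document_sections USING hnsw (embedding vector_cosine_ops);
```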

</description>
      <category>architecture</category>
      <category>proposal</category>
      <category>chatbot</category>
      <category>llm</category>
    </item>
    <item>
      <title>💡 How to fail as a CTO: developers' fungibility</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Wed, 22 May 2024 11:07:00 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/how-to-fail-as-a-cto-developers-fungibility-3lmb</link>
      <guid>https://dev.to/andrew_panfilov/how-to-fail-as-a-cto-developers-fungibility-3lmb</guid>
      <description>&lt;p&gt;When individual developers can be substituted for one another without impacting productivity or project outcomes:&lt;/p&gt;

&lt;p&gt;✅ You have a homogeneous team culture; everyone works in the office, with no hybrid or remote work&lt;br&gt;
✅ Your dev team uses mature mainstream technologies (top TIOBE languages like Python/TypeScript/Java, relational databases like Postgres for persistence, etc.)&lt;br&gt;
✅ Your teams' domains are pretty similar: developers working on one domain understand terms and acronyms from another domain&lt;br&gt;
✅ You have a well-structured hiring process, and you are sure of the cultural fit&lt;br&gt;
✅ You have a low churn rate: institutional knowledge is high within the engineering organization&lt;br&gt;
✅ All of your engineers are experienced ones: seniors and higher&lt;br&gt;
✅ You have a decent test coverage for your codebase&lt;br&gt;
✅ You are sure that you have the ubiquitous language in the organization: your QA folks understand business folks and vice versa if needed&lt;/p&gt;

&lt;p&gt;When they cannot be interchanged:&lt;/p&gt;

&lt;p&gt;⛔️ You are a highly dynamic startup before product market fit&lt;br&gt;
⛔️ Your organization is growing in headcount&lt;br&gt;
⛔️ You have more than two programming languages for the backend or more than two for the frontend, or you use a young (less than 16 years old) non-SQL database or several different ones&lt;br&gt;
⛔️ Your engineers follow different schools of thought: some prefer object-oriented programming, some functional programming&lt;br&gt;
⛔️ You have vague test coverage and are not sure about the quality gates&lt;br&gt;
⛔️ You have a geographically distributed team, hybrid or remote mode&lt;br&gt;
⛔️ You have developers on the team who have no experience with your current programming language, regardless of seniority&lt;br&gt;
⛔️ You have two types of developers: core and provided by outsourcing or outstaffing service&lt;br&gt;
⛔️ Your business works with a domain with high essential complexity&lt;/p&gt;

</description>
      <category>startup</category>
      <category>cto</category>
      <category>fail</category>
      <category>fungibility</category>
    </item>
    <item>
      <title>Similarities of a startup and museum of art. The role of an architect.</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Fri, 03 May 2024 12:17:35 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/similarities-of-a-startup-and-museum-of-art-the-role-of-an-architect-ibm</link>
      <guid>https://dev.to/andrew_panfilov/similarities-of-a-startup-and-museum-of-art-the-role-of-an-architect-ibm</guid>
      <description>&lt;p&gt;This post was originally published here: &lt;a href="https://www.linkedin.com/pulse/similarities-startup-museum-art-role-architect-andrew-panfilov-fhv0f/"&gt;https://www.linkedin.com/pulse/similarities-startup-museum-art-role-architect-andrew-panfilov-fhv0f/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the context of a museum of art, comparing technical roles to positions within the museum can help illustrate each role's responsibilities and impact within their respective environments.&lt;/p&gt;

&lt;p&gt;🖼️ Museum Director -&amp;gt; 💻 Chief Technology Officer (CTO):&lt;/p&gt;

&lt;p&gt;The museum director oversees all operations, strategy, and long-term vision, ensuring it fulfills its mission, engages the public, and operates efficiently. Similarly, the CTO in a tech company is responsible for the company's technological direction and innovation and ensuring the technology strategy aligns with the company's goals. They oversee the development and dissemination of technology to enhance product offerings and improve operational efficiency, much like the museum director ensures the museum's offerings (exhibitions, collections) are relevant, engaging, and accessible.&lt;/p&gt;

&lt;p&gt;🖼️ Exhibition Curator -&amp;gt; 💻 Software Architect:&lt;/p&gt;

&lt;p&gt;An exhibition curator is responsible for selecting and organizing artworks to create exhibitions that engage visitors, tell a story, or explore specific themes or historical periods. This role involves research, understanding the audience, and designing the layout of exhibitions to guide visitors' experiences. The software architect, similarly, designs the structure of software systems, choosing the right technologies and patterns to ensure the software meets requirements and is scalable, maintainable, and efficient. Both roles require a deep understanding of their field, creativity in presentation or design, and the ability to envision and execute a coherent and engaging experience.&lt;/p&gt;

&lt;p&gt;🖼️ Artist -&amp;gt; 💻 Development Team:&lt;/p&gt;

&lt;p&gt;Artists create artworks that are the core of any museum's mission, using their skills, creativity, and vision to produce pieces that engage, challenge, and delight audiences. In the tech world, the development team, comprising software developers, engineers, and programmers, is responsible for writing the code that makes software products and services functional. They bring the vision of the product owners, CTOs, and software architects to life through technical implementation, much like artists realize their creative visions through their chosen mediums. Both contribute the essential creative and technical skills necessary for the success of their respective institutions or projects.&lt;/p&gt;

&lt;p&gt;These analogies help understand each member's pivotal roles in their respective domains, highlighting the importance of leadership, vision, creativity, and execution in achieving successful outcomes. &lt;/p&gt;

</description>
      <category>startup</category>
      <category>cto</category>
      <category>architect</category>
      <category>roles</category>
    </item>
    <item>
      <title>💡How to fail as a CTO: never check your hiring managers</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Fri, 03 May 2024 11:34:05 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/how-to-fail-as-a-cto-never-check-your-hiring-managers-1971</link>
      <guid>https://dev.to/andrew_panfilov/how-to-fail-as-a-cto-never-check-your-hiring-managers-1971</guid>
      <description>&lt;p&gt;This post was originally published here &lt;a href="https://www.linkedin.com/pulse/how-fail-cto-never-check-your-hiring-managers-andrew-panfilov-slhff/"&gt;https://www.linkedin.com/pulse/how-fail-cto-never-check-your-hiring-managers-andrew-panfilov-slhff/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be a successful CTO, it is crucial to prioritize hiring and overseeing the hiring process. Neglecting this responsibility can pose a significant risk to the company.&lt;/p&gt;

&lt;p&gt;As a solution, I recommend that CTOs take a more active role in the hiring process to ensure that candidates are evaluated based on their skills and experience rather than their ability to memorize a book. Today's hiring process prioritizes form over substance precisely because CTOs pay too little attention to it. Candidates for an engineering manager position are often expected to memorize a book such as 'The Software Engineering Manager Interview Guide', which conveniently supplies hiring managers with the expected answers. As a result, the organization appoints an engineering manager who knows the correct answers but lacks the other two components of the knowledge-ability-skill triad.&lt;/p&gt;

&lt;p&gt;The issue of hiring for technical positions is plagued by a common problem. A system design interview is a standard part of the hiring process for any technical position in today's job market. However, it can be problematic if the hiring manager expects a response that aligns with a book, say 'System Design Interview'. A candidate's response to a question such as 'design a payment system' that differs from what is written in the book does not necessarily make the engineer unsuitable. Hiring managers should consider the candidate's reasoning and approach before making a decision. It is important to note that not all payment systems in the world are designed as described in that book, and there are many valid reasons for this. Hiring someone who has memorized answers from books or blog posts but lacks an engineering perspective and the skills to build systems poses a significant risk.&lt;/p&gt;

&lt;p&gt;Let's consider an example that commonly occurs during the hiring process. When a candidate is interviewing for an architect position, they may be asked questions as if they were interviewing for a delivery manager position. Similarly, if a candidate is interviewing for an engineering manager position, they may be asked questions as if they were interviewing for a staff engineer or architect position. This gives the impression that companies do not fully understand the differences between these roles. When people within these companies, such as hiring managers, create job descriptions, they may not fully comprehend the specific requirements of each role, such as the differences between a software architect and an engineering manager. Instead, they may look at job descriptions from competitors and attempt to create something similar for a position in their own company without accurately describing the unique problems that the company is facing and the responsibilities that the hired candidate will need to undertake.&lt;/p&gt;

&lt;p&gt;An architect's primary role is technical. They create diagrams representing the system's structure, including the services' communication, order, and purpose. On the other hand, an engineering manager's position involves people management, such as hiring, performance reviews, conflict resolution, and promoting work-life balance.&lt;/p&gt;

&lt;p&gt;It is important to note that while an architect can mentor development teams, it is not the same as people management. Having an architect present during sprint planning and demos is crucial to prevent duplication of work between teams. The architect acts as a facilitator between development teams, business representatives, management, and developers, keeping communication flowing among all parties. Moreover, the architect should work closely with the product manager to participate in quarterly planning, communicate the team's current technical-debt situation, and discuss the team's capacity for addressing that debt.&lt;/p&gt;

&lt;p&gt;The architect establishes a process for making architectural decisions within and across the organization. The development team receives a set of non-functional requirements from the architect. The development team submits architectural proposal documents to the architect or architecture team. The team outlines their vision for the current and proposed system changes in these documents. Additionally, they explain the motivation for adding a new Deployment Artifact or service and how it will communicate with the existing system, such as through REST API, PubSub Messaging, a shared file system, or a shared database. The architect must communicate the Non-Functional Requirements (NFRs) to the product manager and the product vertical within the organization. It is crucial to explain why these requirements are essential and the potential business risks that may arise if they are unmet.&lt;/p&gt;

&lt;p&gt;The role of an engineering manager is to focus on people management. This involves having one-on-one meetings with each team member to give and receive feedback. The aim is to ensure everyone has a comfortable and efficient work environment. This helps identify and address issues early on before they become significant problems. The engineering manager should also maintain a good delivery pace while ensuring that employees are productive without burning out. Why is this important? If employees are hired correctly and happy in their roles, they bring business value and reduce the churn rate. This means that employees resign less frequently, institutional knowledge within teams grows, and programmers better understand the business beyond just the technological aspect of the product. This diversity of knowledge and understanding can provide unique insights into problems and potential solutions that product managers may not immediately realize.&lt;/p&gt;

&lt;p&gt;The hiring process varies depending on the organization. Some companies prioritize avoiding hiring the wrong person, considering it more crucial than hiring the right person. Others prioritize hiring the right person, even taking on the risk of hiring someone who might not work out. Regardless, the CTO is responsible for implementing the hiring strategy and ensuring that hiring managers follow it and do not violate it. &lt;/p&gt;

</description>
      <category>hiring</category>
      <category>cto</category>
      <category>startup</category>
      <category>fail</category>
    </item>
    <item>
      <title>Hiring for unknown future</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Thu, 02 May 2024 17:49:32 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/hiring-for-unknown-future-11h4</link>
      <guid>https://dev.to/andrew_panfilov/hiring-for-unknown-future-11h4</guid>
      <description>&lt;p&gt;Originally, this post was published here: &lt;a href="https://www.linkedin.com/pulse/hiring-unknown-future-andrew-panfilov-p5hff/"&gt;https://www.linkedin.com/pulse/hiring-unknown-future-andrew-panfilov-p5hff/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hiring for a startup presents unique challenges compared to hiring for an established company. Established companies generally have stable operations, products, organizational structures, and cultures. In contrast, startups are dynamic, constantly evolving entities. The organizational structure that was effective six months ago in a startup might already be outdated, necessitating continual adaptation and reorganization. This dynamic nature adds complexity to the hiring process.&lt;/p&gt;

&lt;p&gt;A startup's future is often unpredictable and filled with "unknown unknowns." Specialists hired for specific roles may find their skills obsolete or less relevant by the time they are onboarded and ready to contribute due to the fast-paced changes in the startup's needs.&lt;/p&gt;

&lt;p&gt;Hiring individuals with previous startup experience can be beneficial. They are likely to be adaptable and experienced in navigating rapidly changing environments. Similarly, diversity in the team can be an advantage, bringing a range of experiences and perspectives that might be critical in unforeseen situations.&lt;/p&gt;

&lt;p&gt;Inexperienced but high-potential employees can also be valuable in a startup. Their lack of preconceived notions about what is possible or impossible can lead to innovative solutions to problems that more experienced professionals might find daunting or unsolvable.&lt;/p&gt;

&lt;p&gt;A critical aspect of hiring in a startup is ensuring the alignment of values between the founders and potential employees. Skills and knowledge are essential, but if an employee's values don't align with those of the founders, it can lead to significant problems, outweighing the employee's benefits. As such, founders should be deeply involved in hiring, especially in the early stages. This involvement is crucial to maintaining the startup's culture and values, as each management level tends to hire individuals similar to themselves. Misalignment at the higher levels (like C-Suite or VP-Level) can propagate through the organization, potentially leading to internal conflicts that could jeopardize the startup.&lt;/p&gt;

&lt;p&gt;For founders, discerning a candidate's calibre is paramount. In a startup's early stages, hiring high-calibre individuals is especially crucial. While luck plays a role in success, it cannot be planned or programmed.&lt;/p&gt;

&lt;p&gt;Fortunately, the current job market favours employers. A large pool of talented and suitable candidates is available, unlike a few years ago when the market was more candidate-driven. &lt;/p&gt;

</description>
      <category>startup</category>
      <category>hiring</category>
      <category>cto</category>
    </item>
    <item>
      <title>LLM pipeline for marketing research insights</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Tue, 30 Apr 2024 11:08:10 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/llm-pipeline-for-marketing-research-insights-5bfc</link>
      <guid>https://dev.to/andrew_panfilov/llm-pipeline-for-marketing-research-insights-5bfc</guid>
      <description>&lt;p&gt;Originally this article was published here &lt;a href="https://www.linkedin.com/pulse/llm-pipeline-marketing-research-insights-andrew-panfilov-gt1bf/"&gt;https://www.linkedin.com/pulse/llm-pipeline-marketing-research-insights-andrew-panfilov-gt1bf/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The past year marked a significant evolution in the marketing research landscape. The advent of widely available consumer-grade Large Language Models (LLMs) has transformed the traditional dichotomy between quantitative and qualitative research methods into a more integrated approach. This new paradigm leverages chatbots to interact with participants, eliciting insights about, for example, their brand preferences. This shift has led to a more nuanced type of research, blurring the lines between quantitative and qualitative methodologies.&lt;/p&gt;

&lt;p&gt;High-level representation of Conversational AI pipeline for marketing research:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma3bdrlqee7pm08jq3ur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma3bdrlqee7pm08jq3ur.png" alt="High-level representation of Conversational AI pipeline for marketing research" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This innovative approach can be conceptualized as a four-phase process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Chatbot AI and Survey Integration: This initial phase involves setting up an AI-driven chatbot conversation, sometimes integrated with a traditional survey framework.

&lt;ul&gt;
&lt;li&gt;Transition to Phase 2: Once the conversation flow is thoroughly tested, it's published to become accessible to respondents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Engagement through Conversations: In this phase, the chatbot engages in conversations with respondents.

&lt;ul&gt;
&lt;li&gt;Transition to Phase 3: The conversation sessions are concluded once the required respondent data is gathered.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Data Processing and LLM Analysis: This critical stage involves cleansing and analyzing the collected data using LLMs.

&lt;ul&gt;
&lt;li&gt;Transition to Phase 4: This phase concludes once the data analysis is complete.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Visual Reporting: The final phase focuses on creating visual reports that effectively communicate the insights derived from the analysis.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the first stage, researchers develop a finite-state machine to guide the chatbot's conversations. This involves creating a series of prompts, including statements, questions, and follow-up questions used by the LLM during interactions with respondents. In this phase, the researcher employs a method akin to the Read-Eval-Print Loop (REPL). It enables the researcher to observe and assess the chatbot's behaviour in real-time during a conversation, following any modifications made to the dialogue descriptors. This stage is crucial for quality assurance, ensuring that the chatbot's conversations are relevant and error-free. Additionally, the LLM can be utilized for tasks like translating content into various languages.&lt;/p&gt;
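&lt;p&gt;The finite-state machine described above can be sketched as a plain Clojure map (a hedged illustration: the state names, prompts, and transitions are invented for this example, not taken from a real study):&lt;/p&gt;

```clojure
;; A minimal sketch of a dialogue descriptor as a finite-state machine.
;; State names, prompts, and the :next wiring are illustrative only.
(def dialogue-fsm
  {:greeting  {:prompt "Introduce yourself and ask about brand preferences."
               :next   :follow-up}
   :follow-up {:prompt "Ask one follow-up question about the last answer."
               :next   :closing}
   :closing   {:prompt "Thank the respondent and end the conversation."
               :next   nil}})

(defn advance
  "Returns the next state keyword for the current one, or nil when done."
  [fsm state]
  (get-in fsm [state :next]))
```

&lt;p&gt;Driving a conversation then reduces to looking up the current state's prompt, feeding it to the LLM, and advancing until &lt;code&gt;nil&lt;/code&gt; is reached, which is also what makes the REPL-style inspection described above so convenient.&lt;/p&gt;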

&lt;p&gt;Respondents engage with the chatbot during the second phase, with each conversation driven by the finite state machine established in the first stage. The dialogue elements are fed into the LLM to generate subsequent statements or questions. A key requirement here is a responsive LLM with low latency (ideally a few seconds or less) to prevent long waits for the chatbot's responses. While the OpenAI API may occasionally encounter issues such as 502 and 503 HTTP status codes or connection timeouts, its practical value in production for marketing research remains significant, especially considering that there are no costs associated with failed respondent interactions.&lt;/p&gt;

&lt;p&gt;The third stage involves processing the collected chatbot-respondent dialogues with the LLM, following the removal of Personally Identifiable Information (PII). This processing is done in batches to avoid exceeding the LLM's token limits. The LLM's role here is to perform tasks such as categorization, tagging, entity extraction, and sentiment analysis, all essential for deriving meaningful insights from the data. This analysis can be time-consuming, requiring tens of minutes to hours. The system is designed to distribute the dialogues evenly to prevent overloading the LLM API and includes a retry mechanism to handle potential API issues like 5XX HTTP status codes or timeouts. The LLM may also be employed to validate the results of data aggregation.&lt;/p&gt;
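&lt;p&gt;The batching and retry behaviour can be sketched as follows (an assumption-laden outline, not the production implementation: &lt;code&gt;call-llm&lt;/code&gt; stands in for the real API client, and the batch size and backoff constants are illustrative):&lt;/p&gt;

```clojure
(defn call-with-retry
  "Calls f, retrying on exceptions (e.g. 5XX responses surfaced as errors,
  timeouts) with exponential backoff, up to the given number of attempts."
  [f attempts]
  (loop [n 1]
    (let [result (try {:ok (f)}
                      (catch Exception e {:error e}))]
      (if (contains? result :ok)
        (:ok result)
        (if (= n attempts)
          (throw (:error result))
          (do (Thread/sleep (* 1000 (long (Math/pow 2 n)))) ; exponential backoff
              (recur (inc n))))))))

(defn analyze-dialogues
  "Splits anonymized dialogues into batches small enough to stay under the
  token limit, then sends each batch to the LLM with retries."
  [call-llm dialogues batch-size]
  (doall
    (map (fn [batch] (call-with-retry (fn [] (call-llm batch)) 3))
         (partition-all batch-size dialogues))))
```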

&lt;p&gt;In the final stage, the LLM is not utilized. Instead, the aggregated data, now representing valuable marketing insights, is presented in an easily understandable format on a dashboard. This stage focuses on visualizing the insights and offers options to export the data as PDF or PowerPoint reports, facilitating further analysis and presentation.&lt;/p&gt;

&lt;p&gt;Components diagram for Phase 1, 3, 4:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9a9pk1ryixy954huomlt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9a9pk1ryixy954huomlt.png" alt="Components diagram for Phase 1, 3, 4" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Components diagram for Phase 2:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxvio4z8cmgea1asz3wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxvio4z8cmgea1asz3wx.png" alt="Components diagram for Phase 2" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagrams illustrate that the LLM is utilized as an opaque entity accessed through an API. In practical applications, especially for marketing research, there is no requirement for fine-tuning the LLM. Currently, gpt-3.5-turbo is the preferred option due to its optimal combination of cost-effectiveness and features. It's important to note that the system represented in these diagrams is a simplified version for the sake of clarity. A real-world implementation would include additional deployable artifacts and specific characteristics. Despite these simplifications, the basic concept remains easily comprehensible.&lt;/p&gt;

</description>
      <category>marketingresearch</category>
      <category>llm</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Need microservice? Take Clojure!</title>
      <dc:creator>Andrew Panfilov</dc:creator>
      <pubDate>Fri, 26 Apr 2024 12:49:44 +0000</pubDate>
      <link>https://dev.to/andrew_panfilov/lets-write-a-simple-microservice-in-clojure-237a</link>
      <guid>https://dev.to/andrew_panfilov/lets-write-a-simple-microservice-in-clojure-237a</guid>
      <description>&lt;p&gt;Initially, this post was published here: &lt;a href="https://www.linkedin.com/pulse/lets-write-simple-microservice-clojure-andrew-panfilov-2ghqe/"&gt;https://www.linkedin.com/pulse/lets-write-simple-microservice-clojure-andrew-panfilov-2ghqe/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;This article will explain how to write a simple service in &lt;a href="https://clojure.org/"&gt;Clojure&lt;/a&gt;. The sweet spot of building applications in Clojure is that you can expressively use the entire rich Java ecosystem. Less code, less boilerplate: it is possible to achieve more with less. In this example, I mostly use libraries from the Java world; everything else is a thin Clojure wrapper around them.&lt;/p&gt;

&lt;p&gt;From a business logic standpoint, the microservice calculates math expressions and stores the history of such calculations in the database (there are two HTTP endpoints for that).&lt;/p&gt;

&lt;p&gt;Github repository with source code: &lt;a href="https://github.com/dzer6/calc"&gt;https://github.com/dzer6/calc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This educational microservice project will provide the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Swagger descriptor for REST API with a nice &lt;a href="https://swagger.io/tools/swagger-ui/"&gt;Swagger UI console&lt;/a&gt;. Nowadays, it is a de facto standard. Microservices should be accessible via HTTP and operate with data in a human-readable JSON format. As a bonus, it is super easy to &lt;a href="https://swagger.io/tools/swagger-codegen/"&gt;generate&lt;/a&gt; data types and API client code for the client side (it works well for a TypeScript-based front-end, for example).&lt;/li&gt;
&lt;li&gt;Postgres-based persistence with a pretty straightforward mapping of SQL queries to Clojure functions. If you have ever used Java with &lt;a href="https://hibernate.org/orm/"&gt;Hibernate ORM&lt;/a&gt; for data persistence, you will feel relief after working with the database in Clojure with &lt;a href="https://www.hugsql.org/"&gt;HugSQL&lt;/a&gt;. The model of the persistence layer is much simpler and easier to understand, with no need for a Session Cache, Application-Level Cache, or Query Cache. Debugging is straightforward, as opposed to the nightmare of hunting down the actual SQL invocation, which Hibernate issues lazily and rarely where you expect it. It is such an incredible experience to see the query invocation result as just a sequence of plain Clojure maps instead of a bag of Java entity proxies.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://clojure.org/guides/repl/introduction"&gt;REPL&lt;/a&gt;-friendly development setup. &lt;a href="https://github.blog/2023-06-08-developer-experience-what-is-it-and-why-should-you-care/"&gt;DX&lt;/a&gt; (dev experience) might not be the best in class, but it is definitely not bad. Whenever you want to change or add something to the codebase, you start a REPL session in an IDE (in my case, &lt;a href="https://cursive-ide.com/"&gt;Cursive&lt;/a&gt; / IntelliJ Idea). You can run code snippets to print their results, change the codebase, and reload the application. In addition, you can selectively run needed tests. You do not need to restart the JVM instance every time after the codebase changes (JVM is famous for its slow start time). Using the mount library, all stateful resources shut down and initialize correctly every reload.&lt;/li&gt;
&lt;/ol&gt;
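&lt;p&gt;To make the HugSQL point above concrete, here is a hedged sketch of how named SQL queries become plain Clojure functions (the file name, table, and query names are invented for illustration and do not come from the repository):&lt;/p&gt;

```clojure
;; resources/sql/calculations.sql (hypothetical file):
;; -- :name insert-calculation! :! :n
;; INSERT INTO calculations (expression, result) VALUES (:expression, :result)
;;
;; -- :name list-calculations :? :*
;; SELECT expression, result FROM calculations ORDER BY created_at DESC

(require '[hugsql.core :as hugsql])

;; Generates `insert-calculation!` and `list-calculations` as ordinary
;; Clojure functions taking a db connection and a parameter map:
(hugsql/def-db-fns "sql/calculations.sql")

;; (insert-calculation! db {:expression "1+2" :result 3})
;; (list-calculations db) ;; a sequence of plain Clojure maps
```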

&lt;h1&gt;
  
  
  Leiningen
&lt;/h1&gt;

&lt;p&gt;The &lt;a href="https://github.com/dzer6/calc/blob/main/project.clj"&gt;project.clj&lt;/a&gt; file is a configuration file for &lt;a href="https://leiningen.org/"&gt;Leiningen&lt;/a&gt;, a build automation and dependency management tool for Clojure. It specifies the project's metadata, dependencies, paths, and other settings necessary for building the project. Let's break down the libraries listed in the &lt;code&gt;project.clj&lt;/code&gt; file into two groups: pure Java libraries and Clojure libraries, and describe each.&lt;/p&gt;

&lt;p&gt;Clojure Libraries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;org.clojure/clojure&lt;/code&gt;: The Clojure language itself.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.clojure/core.memoize&lt;/code&gt;: Provides memoization capabilities to cache the results of expensive functions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.clojure/tools.logging&lt;/code&gt;: A simple logging abstraction that allows different logging implementations.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mount&lt;/code&gt;: A library for managing state in Clojure applications.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;camel-snake-kebab&lt;/code&gt;: A library for converting strings (and keywords) between different case formats.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prismatic/schema&lt;/code&gt;: A library for structuring and validating Clojure data.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metosin/schema-tools&lt;/code&gt;: Utilities for Prismatic Schema.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;clj-time&lt;/code&gt;: A date and time library for Clojure.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;clj-fuzzy&lt;/code&gt;: A library for fuzzy matching and string comparison.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;slingshot&lt;/code&gt;: Provides enhanced try/catch capabilities in Clojure.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ring&lt;/code&gt;: A Clojure web applications library.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metosin/compojure-api&lt;/code&gt;: A library for building REST APIs with Swagger support.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cprop&lt;/code&gt;: A configuration library for Clojure.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;com.taoensso/encore&lt;/code&gt;: A utility library providing additional Clojure and Java interop facilities.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;com.zaxxer/HikariCP&lt;/code&gt;: A high-performance JDBC connection pooling library.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;com.github.seancorfield/next.jdbc&lt;/code&gt;: A modern, idiomatic JDBC library for Clojure.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;com.layerware/hugsql-core&lt;/code&gt;: A library for defining SQL in Clojure applications.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metosin/jsonista&lt;/code&gt;: A fast JSON encoding and decoding library for Clojure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Pure Java Libraries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;ch.qos.logback&lt;/code&gt;: A logging framework.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.codehaus.janino&lt;/code&gt;: A compiler that reads Java expressions, blocks, or source files, and produces Java bytecode.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.slf4j&lt;/code&gt;: A simple logging facade for Java.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.postgresql/postgresql&lt;/code&gt;: The JDBC driver for PostgreSQL.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.flywaydb&lt;/code&gt;: Database migration tool.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;com.fasterxml.jackson.core&lt;/code&gt;: Libraries for processing JSON.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;org.mvel/mvel2&lt;/code&gt;: MVFLEX Expression Language (MVEL) is a hybrid dynamic/statically typed, embeddable Expression Language and runtime.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To build the project, just run this in a terminal: &lt;code&gt;lein uberjar&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The path to the resulting &lt;a href="https://dzone.com/articles/the-skinny-on-fat-thin-hollow-and-uber"&gt;fat-jar&lt;/a&gt; with all needed dependencies is &lt;code&gt;target/app.jar&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Frameworks VS Libraries
&lt;/h1&gt;

&lt;p&gt;In the Java world, one common approach is to use full-fledged frameworks that provide comprehensive solutions for various aspects of software development. These frameworks often come with a wide range of features and functionalities built-in, aiming to simplify the development process by providing pre-defined structures and conventions. Examples of such frameworks include the &lt;code&gt;Spring Framework&lt;/code&gt;, &lt;code&gt;Java EE&lt;/code&gt; (now &lt;code&gt;Jakarta EE&lt;/code&gt;), and &lt;code&gt;Hibernate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, in the Clojure world, the approach tends to favour using small, composable libraries rather than monolithic frameworks. Clojure promotes simplicity and flexibility, encouraging developers to choose and combine libraries that best fit their needs. These libraries typically focus on solving one problem well, making them lightweight and easy to understand. Examples of popular Clojure libraries include Ring for web development, Compojure for routing, and Spec for data validation.&lt;/p&gt;

&lt;p&gt;The difference between these approaches lies in their philosophies and design principles. Full-blown frameworks in the Java world offer convenience and a one-size-fits-all solution but may come with overhead and complexity. In contrast, small libraries in the Clojure world emphasize simplicity, modularity, and flexibility, allowing developers to build tailored solutions while keeping the codebase lightweight and maintainable.&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2ci9wcggap33bhhse16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2ci9wcggap33bhhse16.png" alt="If you do not intend to run the microservice locally on a laptop only, you will probably use containerization, and Docker is today the standard de facto for this." width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/dzer6/calc/blob/main/Dockerfile"&gt;Dockerfile&lt;/a&gt; sets up a containerized environment for the application, leveraging &lt;a href="https://aws.amazon.com/corretto/faqs/"&gt;Amazon Corretto 22&lt;/a&gt; on Alpine Linux. It downloads the AWS OpenTelemetry &lt;a href="https://opentelemetry.io/docs/collector/deployment/agent/"&gt;Agent&lt;/a&gt; (you can use the standard one if you don't need &lt;a href="https://github.com/aws-observability/aws-otel-java-instrumentation"&gt;AWS-related&lt;/a&gt;) to enable observability features, including distributed tracing, and then copies the application JAR file into the container. Environment variables are configured to include the Java agent for instrumentation and allocate 90% of available RAM (which is useful for a container-based setup). Finally, it exposes port 8080 and specifies the command to start the Java application server.&lt;/p&gt;

&lt;h1&gt;
  
  
  Dev Experience
&lt;/h1&gt;

&lt;h2&gt;
  
  
  REPL
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop"&gt;Read-Eval-Print Loop&lt;/a&gt; in Clojure is a highly effective tool for interactive development, which allows developers to work more efficiently by providing immediate feedback. Unlike traditional compile-run-debug cycles, the REPL enables developers to evaluate expressions and functions on the fly, experiment with code snippets, and inspect data structures in real time. This makes the development process more dynamic and exploratory, leading to a deeper understanding of the codebase. Additionally, the REPL's seamless integration with the language's functional programming paradigm empowers developers to embrace Clojure's expressive syntax and leverage its powerful features, ultimately enhancing productivity and enabling rapid prototyping and iterative development cycles. The REPL is the bee's knees, in other words.&lt;/p&gt;

&lt;p&gt;First, you start a REPL session: &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4nykt2nefvmt6l3l03j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4nykt2nefvmt6l3l03j.png" alt="REPL is started and ready for code evaluation" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you type &lt;code&gt;(init)&lt;/code&gt; to invoke the initialization function and press Enter; the application will start, and you will see something similar to:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsw8ld2tccu4q521azozb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsw8ld2tccu4q521azozb.png" alt=":done means that the service is up and running" width="800" height="1115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The session logs show that the application loads configurations and establishes a connection with a PostgreSQL database. This involves initializing a &lt;a href="https://www.baeldung.com/hikaricp"&gt;HikariCP&lt;/a&gt; connection pool and &lt;a href="https://flywaydb.org/"&gt;Flyway&lt;/a&gt; for database migrations. The logs confirm that the database schema validation and migration checks were successful. The startup of the &lt;a href="https://eclipse.dev/jetty/"&gt;Jetty HTTP server&lt;/a&gt; follows, and the server becomes operational and ready to accept requests on the specified port.&lt;/p&gt;

&lt;p&gt;To apply any code change, type (reset) and press Enter.&lt;/p&gt;

&lt;p&gt;To run tests, you should type (run-tests) and press Enter.&lt;/p&gt;
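&lt;p&gt;The (init), (reset), and (run-tests) helpers typically live in a development-only namespace. Below is a minimal sketch of such a namespace, assuming a mount/tools.namespace setup; the internals are illustrative assumptions, not the repository's exact code:&lt;/p&gt;

```clojure
;; Sketch of a dev/user.clj REPL-workflow namespace.
;; Function names follow the article; the bodies are illustrative.
(ns user
  (:require [clojure.tools.namespace.repl :as tn]
            [mount.core :as mount]))

(defn init
  "Start all stateful components (db pool, HTTP server, ...)."
  []
  (mount/start))

(defn reset
  "Stop components, reload changed namespaces, then start again."
  []
  (mount/stop)
  (tn/refresh :after 'user/init))

(defn run-tests
  "Reload changed namespaces and run the whole test suite from the REPL."
  []
  (tn/refresh)
  ((requiring-resolve 'clojure.test/run-all-tests)))
```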

&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxkpgt6whdm6unrehmni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxkpgt6whdm6unrehmni.png" alt='This approach ensures that all team members work in identical settings, thus mitigating the "it works on my machine" problem.' width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; to run Postgres and any third-party services locally provides a streamlined and consistent development environment. Developers can define services in a &lt;a href="https://github.com/dzer6/calc/blob/main/docker-compose.yml"&gt;docker-compose.yml&lt;/a&gt; file, which enables them to configure and launch an entire stack with a single command. In this case, Postgres is encapsulated within a container with predefined configurations. Docker Compose also facilitates easy scaling, updates, and isolation of services, enhancing development efficiency and reducing the setup time for new team members or transitioning between projects. It encapsulates complex configurations, such as Postgres' performance monitoring and logging settings, in a manageable, version-controlled file, simplifying and replicating the service setup across different environments.&lt;/p&gt;
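&lt;p&gt;In sketch form, such a file might look like the following; the image tag, database name, and credentials here are placeholders, so refer to the repository's actual docker-compose.yml:&lt;/p&gt;

```yaml
# Illustrative sketch; the repo's docker-compose.yml may differ.
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: calc
      POSTGRES_USER: calc
      POSTGRES_PASSWORD: calc
    ports:
      - "5432:5432"
    volumes:
      - pg-data:/var/lib/postgresql/data
volumes:
  pg-data:
```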

&lt;h2&gt;
  
  
  Stateful Resources
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;mount&lt;/code&gt; Clojure library is a lightweight and idiomatic solution for managing application state in Clojure applications. It offers a more straightforward and functional approach than the Spring Framework, which can be more prescriptive and heavy. Mount emphasizes simplicity, making it an excellent fit for the functional programming paradigm without requiring extensive configuration or boilerplate code. This aligns well with Clojure's philosophy, resulting in a more seamless and efficient development experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floa7f15t75ynz7kuhzlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floa7f15t75ynz7kuhzlv.png" alt="Example of managing database connection stateful resource." width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Only two functions are needed: one to start the resource and one to stop it.&lt;/p&gt;
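&lt;p&gt;As a sketch, a mount-managed datasource backed by HikariCP could look like this; the namespace, config values, and setter choices are assumptions for illustration:&lt;/p&gt;

```clojure
;; Illustrative defstate for a pooled datasource; values are placeholders.
(ns calc.db
  (:require [mount.core :refer [defstate]])
  (:import (com.zaxxer.hikari HikariConfig HikariDataSource)))

(defstate datasource
  ;; :start builds and opens the connection pool
  :start (let [cfg (doto (HikariConfig.)
                     (.setJdbcUrl "jdbc:postgresql://localhost:5432/calc")
                     (.setUsername "calc")
                     (.setPassword "calc"))]
           (HikariDataSource. cfg))
  ;; :stop releases all pooled connections
  :stop (.close ^HikariDataSource datasource))
```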

&lt;h2&gt;
  
  
  REST API
&lt;/h2&gt;

&lt;p&gt;Compojure's &lt;a href="https://github.com/simongray/clojure-dsl-resources"&gt;DSL&lt;/a&gt; for web applications makes it easy to set up REST API routes with corresponding HTTP methods. Adding a Swagger API descriptor through libraries like ring-swagger provides a visual interface for interacting with the API and enables client code generation. You can use the Prismatic schema library for HTTP request validation and data coercion to ensure the API consumes and produces data that conforms to predefined schemas. Compojure's middleware approach allows for modular and reusable components that can handle cross-cutting concerns like authentication, logging, and request/response transformations, enhancing the API's scalability and maintainability.&lt;/p&gt;
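&lt;p&gt;A minimal sketch of such a route definition with compojure-api and Prismatic schemas follows; the endpoint path, schema names, and Swagger options are illustrative, not the service's exact API:&lt;/p&gt;

```clojure
;; Sketch of a Swagger-documented route with request/response schemas.
(ns calc.routes
  (:require [compojure.api.sweet :refer [api context POST]]
            [ring.util.http-response :refer [ok]]
            [schema.core :as s]))

(s/defschema EvaluateRequest
  {:expression s/Str})

(s/defschema EvaluateResponse
  {:result s/Num})

(def app
  (api
    ;; Serves Swagger UI at /swagger and the spec at /swagger.json
    {:swagger {:ui   "/swagger"
               :spec "/swagger.json"
               :data {:info {:title "calc API"}}}}
    (context "/api" []
      (POST "/evaluate" []
        :body [req EvaluateRequest]       ; validated and coerced on the way in
        :return EvaluateResponse          ; validated on the way out
        :summary "Evaluates a mathematical expression"
        (ok {:result 42})))))
```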

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbv2ieksnf56jhwmci7zw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbv2ieksnf56jhwmci7zw.png" alt="Declarative concise DSL for REST API." width="800" height="815"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The middleware chain is set up in HTTP server-related namespace:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0wdygzxd2nzfov90t90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0wdygzxd2nzfov90t90.png" alt="HTTP request middleware chain is a powerful yet dangerous tool – be careful when changing." width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers and QA engineers find the Swagger UI console highly convenient. I encourage you to run the service locally and try the console in a browser. Here is a list of HTTP endpoints with data schemas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhon5lq0995l679nrjicz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhon5lq0995l679nrjicz.png" alt="All information about the service's REST API in one place!" width="800" height="1060"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Isn't it awesome?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zhwswrkx52kahz28fvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zhwswrkx52kahz28fvb.png" alt="Endpoint documentation, request-response data schemas and even cURL command ready to use in the terminal!" width="800" height="1060"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Logic
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;calc.rpc.controller.calculation&lt;/code&gt; &lt;a href="https://github.com/dzer6/calc/blob/main/src/main/clj/calc/rpc/controller/calculation.clj"&gt;controller&lt;/a&gt; houses the business logic and defines two primary operations: &lt;code&gt;evaluate&lt;/code&gt; and &lt;code&gt;obtain-past-evaluations&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;evaluate&lt;/code&gt; operation processes and evaluates mathematical expressions received as requests, storing the results in a database:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52jvab6bz5b0ydyufdwh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52jvab6bz5b0ydyufdwh.png" alt="Only successful calculations will be stored in the database." width="800" height="615"&gt;&lt;/a&gt;&lt;/p&gt;
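&lt;p&gt;The heart of the operation, calling MVEL from Clojure, can be sketched as follows. This isolates just the interop call as an assumption about the shape of the code; the real controller also validates input and persists results:&lt;/p&gt;

```clojure
;; Evaluating an arithmetic expression via the MVEL Java library.
(ns calc.eval
  (:import (org.mvel2 MVEL)))

(defn evaluate-expression
  "Evaluates an MVEL expression string and returns the result."
  [^String expression]
  (MVEL/eval expression))

;; (evaluate-expression "2 + 2 * 2")  ; standard precedence applies => 6
```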

&lt;p&gt;The &lt;code&gt;obtain-past-evaluations&lt;/code&gt; operation fetches a list of previously executed calculations based on provided offset and limit parameters:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vty1z1sd5u5o80twipf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vty1z1sd5u5o80twipf.png" alt="This operation does not contain request data schema as it is exposed as a GET HTTP endpoint." width="800" height="767"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ensuring that exceptions or database inconsistencies are handled gracefully is crucial for the successful execution of these operations.&lt;/p&gt;

&lt;p&gt;The integration of external Java libraries, MVEL (MVFLEX Expression Language) for expression evaluation and JDBC for database transactions, highlights Clojure's interoperability with Java.&lt;/p&gt;

&lt;p&gt;Another essential principle demonstrated by the use of the MVEL library: never reimplement in Clojure something that already exists in Java. Most business cases are already covered by some Java library that was written, stabilized, and optimized years ago. You should have strong reasons to write something from scratch in Clojure instead of using a Java analog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Persistence Layer
&lt;/h2&gt;

&lt;p&gt;Thanks to the &lt;code&gt;hugsql&lt;/code&gt; library, we can use autogenerated Clojure functions directly mapped to SQL queries described in a plain text file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqz0nsv4ntj2k06poeus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqz0nsv4ntj2k06poeus.png" alt="Hugsql library uses" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As Clojure is not an object-oriented language, we don't need to map query result sets coming from a relational database onto a collection of objects. No OOP, no ORM. Very convenient. The relational algebra paradigm marries seamlessly with Clojure's functional paradigm. Very natural:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0hd780zm4gr2h66b105.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0hd780zm4gr2h66b105.png" alt="Remember  raw `-- :name find-expressions :query :many` endraw  in queries.sql file? It renders as  raw `query/find-expressions` endraw  Clojure function." width="800" height="886"&gt;&lt;/a&gt;&lt;/p&gt;
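&lt;p&gt;In sketch form, the mechanism looks like this. The &lt;code&gt;find-expressions&lt;/code&gt; query name matches the screenshot above, but the SQL columns and parameters shown here are assumptions for illustration:&lt;/p&gt;

```clojure
;; queries.sql (plain text file on the classpath, parsed by HugSQL):
;;
;;   -- :name find-expressions :query :many
;;   SELECT id, expression, result
;;   FROM calculation
;;   ORDER BY id DESC
;;   LIMIT :limit OFFSET :offset
;;
;; Clojure side: one macro call generates a function per named query.
(ns calc.db.query
  (:require [hugsql.core :as hugsql]))

(hugsql/def-db-fns "queries.sql")

;; Now (find-expressions db {:limit 10 :offset 0}) returns a seq of maps,
;; one map per row -- no ORM layer in between.
```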

&lt;p&gt;Compared to NoSQL databases, migrating the data schema in relational databases such as Postgres is a well-established practice. This is typically done through migrations, made easy by the Flyway library. To adjust the data schema in Postgres, we simply create a new text file containing the Data Definition Language (DDL) commands. In our case there is only one migration &lt;a href="https://github.com/dzer6/calc/blob/main/src/main/resources/db/migration/V1__Init_migrations.sql"&gt;file&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flapomf4v48phx7bc3aqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flapomf4v48phx7bc3aqt.png" alt="The beauty of the declarative nature of relational DDL." width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;
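&lt;p&gt;A sketch of what such a first migration might contain; the table and column names are illustrative, so see the linked file for the actual DDL:&lt;/p&gt;

```sql
-- V1__Init_migrations.sql (illustrative sketch, not the repo's exact DDL)
CREATE TABLE calculation (
    id         BIGSERIAL PRIMARY KEY,
    expression TEXT             NOT NULL,
    result     DOUBLE PRECISION NOT NULL,
    created_at TIMESTAMPTZ      NOT NULL DEFAULT now()
);
```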

&lt;p&gt;Whenever you change an SQL query in the queries.sql file, do not forget to run the (reset) function in the REPL-session console. It automatically regenerates the Clojure &lt;a href="https://github.com/dzer6/calc/blob/main/src/main/clj/calc/db/query.clj"&gt;namespace&lt;/a&gt; with query declarations and runtime-generated SQL wrapper functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;The system uses the Clojure library cprop to manage its configuration. The library adopts a sequential merge policy to construct the application's configuration map. It starts by loading default-config.edn from resources and overlays it with local-config.edn if available. Then, it applies settings from an external config.edn and, finally, overrides from environment variables (adhering to the 12-factor app guidelines). This ensures that later sources take precedence over earlier ones.&lt;/p&gt;

&lt;p&gt;The configuration is essential during development; it is a Clojure map validated against a Prismatic schema. If discrepancies are detected, the system shuts down immediately, adhering to the fail-fast principle.&lt;/p&gt;

&lt;p&gt;Additionally, feature flags within the configuration enable selective feature toggling, aiding in the phased introduction of new functionality and ensuring robustness in production environments.&lt;/p&gt;
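&lt;p&gt;The loading-plus-validation step can be sketched as follows; the schema shape and key names are assumptions, and the actual project's config map is richer:&lt;/p&gt;

```clojure
;; Sketch of cprop-based config loading with fail-fast schema validation.
(ns calc.config
  (:require [cprop.core :refer [load-config]]
            [cprop.source :refer [from-resource]]
            [schema.core :as s]))

(s/defschema Config
  {:db       {:url s/Str}
   :http     {:port s/Int}
   s/Keyword s/Any})                       ; tolerate extra sections

(defn load!
  "Loads the layered configuration and validates it, failing fast on errors."
  []
  (let [config (load-config :merge [(from-resource "default-config.edn")])]
    (s/validate Config config)             ; throws => process dies on startup
    config))
```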

&lt;h2&gt;
  
  
  Logging
&lt;/h2&gt;

&lt;p&gt;The service utilizes &lt;code&gt;org.clojure/tools.logging&lt;/code&gt; to offer a high-level logging API, which works in conjunction with Logback and SLF4J, two Java libraries well-known for their reliability in logging. The logging setup is customized for the application's environment: in development, logs are produced in an easy-to-read plain text format, allowing for efficient debugging. When the service is deployed on servers, logs are structured in a JSON format, which makes them ideal for machine parsing and analysis in production.&lt;/p&gt;
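&lt;p&gt;Usage from application code is backend-agnostic; a small sketch (the function and its arguments are hypothetical):&lt;/p&gt;

```clojure
;; tools.logging delegates to whatever backend (here Logback) is on the
;; classpath; application code never touches Logback directly.
(ns calc.example
  (:require [clojure.tools.logging :as log]))

(defn handle-request [request-id]
  (log/info "Handling request" request-id)
  (try
    ;; ... business logic ...
    (catch Exception e
      ;; throwable goes first so the backend can render the stack trace
      (log/error e "Request failed" request-id))))
```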

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8udwuw3wdpnagtdovh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8udwuw3wdpnagtdovh5.png" alt="Old good XML." width="800" height="818"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tests
&lt;/h2&gt;

&lt;p&gt;This is a real-world industrial example. Yes, we do have tests. Not many, but for a codebase of this size that is pretty much okay.&lt;/p&gt;

&lt;p&gt;Unfortunately, most open-source Clojure-based projects on GitHub do not contain good examples of integration tests. So, here we are, trying to close this gap.&lt;/p&gt;

&lt;p&gt;We use the Testcontainers library to spin up real Postgres instances during the tests. Before Docker and Testcontainers, the de facto standard in the Java world was to run H2, an embedded pure-Java database, trying to mimic Postgres. It was not ideal, but there was not much choice back then.&lt;/p&gt;
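&lt;p&gt;The container lifecycle around a test run can be sketched like this; the fixture wiring and the property name used to point the application at the container are assumptions, not the repo's exact setup:&lt;/p&gt;

```clojure
;; Sketch: a throwaway Postgres via Testcontainers, wrapped in a test fixture.
(ns calc.integration-test
  (:require [clojure.test :refer [use-fixtures]])
  (:import (org.testcontainers.containers PostgreSQLContainer)))

(def ^PostgreSQLContainer pg (PostgreSQLContainer. "postgres:15"))

(defn with-postgres [run-the-tests]
  (.start pg)                                        ; pulls image, waits for readiness
  (try
    ;; point the application's config at the container (hypothetical key)
    (System/setProperty "db.url" (.getJdbcUrl pg))
    (run-the-tests)
    (finally
      (.stop pg))))                                  ; container is discarded

(use-fixtures :once with-postgres)
```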

&lt;p&gt;The &lt;code&gt;evaluate&lt;/code&gt; operation integration test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh4drl9jfdmkn6j8p2ey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh4drl9jfdmkn6j8p2ey.png" alt="Looks pretty concise and declarative." width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;obtain-past-evaluations&lt;/code&gt; operation integration test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka7da2dgsjq8k89cf3vm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka7da2dgsjq8k89cf3vm.png" alt="Unfortunately, the downside of these integration tests is time – they are not fast tests." width="800" height="806"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the tests run, you should see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv3otb4ow326jm4k5uhu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv3otb4ow326jm4k5uhu.png" alt="Zero fails and zero errors. Awesome!" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Now that you have gone through the service codebase and know its internals, you can copy it, adapt it to your requirements, and voila: you will have a really good-looking microservice.&lt;/p&gt;

&lt;p&gt;The described codebase is based on years of Clojure programming and a number of projects implemented in Clojure. Some of the libraries used may look outdated, but in the Clojure world, if a library works, it is okay not to update it often: the language itself is super-stable, and you can easily read and support code written even a decade ago.&lt;/p&gt;

</description>
      <category>clojure</category>
      <category>java</category>
      <category>microservices</category>
      <category>functional</category>
    </item>
  </channel>
</rss>
