<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vinicius Mesel</title>
    <description>The latest articles on DEV Community by Vinicius Mesel (@vmesel).</description>
    <link>https://dev.to/vmesel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F109231%2Fb96b78ce-5184-4647-bf5e-44cf3aeafa85.png</url>
      <title>DEV Community: Vinicius Mesel</title>
      <link>https://dev.to/vmesel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vmesel"/>
    <language>en</language>
    <item>
      <title>WordPress needs an URGENT alternative!</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Wed, 09 Oct 2024 12:28:29 +0000</pubDate>
      <link>https://dev.to/vmesel/wordpress-needs-an-urgent-alternative-1dg5</link>
      <guid>https://dev.to/vmesel/wordpress-needs-an-urgent-alternative-1dg5</guid>
      <description>&lt;p&gt;Since 2012, when I started coding and learning about computer science through the online version of CS50, I became aware of the possibility of building websites using a magical CMS called WordPress.&lt;/p&gt;

&lt;p&gt;It was indeed very magical back then, and it still is today.&lt;/p&gt;

&lt;p&gt;It enabled me to get a template, edit it a little bit and grasp the inner possibilities of what a CMS could do to improve my blogging capabilities (almost none).&lt;/p&gt;

&lt;p&gt;Recently, Automattic engaged in a legal battle against WP Engine (and Silver Lake, the private equity firm that holds the majority stake in WP Engine) over a licensing deal covering the usage of WordPress resources inside WP Engine. (Read the article: &lt;a href="https://www.cnbc.com/2024/10/05/wordpress-ceo-matt-mullenweg-goes-nuclear-on-silver-lake-wp-engine-.html" rel="noopener noreferrer"&gt;Why WordPress founder Matt Mullenweg has gone ‘nuclear’ against tech investing giant Silver Lake&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;There is just one minor GIGANTIC issue in this battle: WordPress is released under the GNU General Public License v2 (aka GPLv2 for the intimates), which means that the code must be distributed and made available to anyone.&lt;/p&gt;

&lt;p&gt;Inside the license's text we have the following disclosure:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.&lt;/p&gt;

&lt;p&gt;We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I still don't know how this battle between giants is going to end, but I do know that we need to be prepared for the worst.&lt;/p&gt;

&lt;p&gt;We need better open-source alternatives for deploying CMSs simply and easily, with similar features (within the limits of each project's development).&lt;/p&gt;

&lt;p&gt;The alternatives must be reliable and must give us total access to the code, just as WordPress does under the GPLv2, and they must have a community built around them to advocate for their further development and enrichment.&lt;/p&gt;

&lt;p&gt;One of the alternatives I'm currently studying is Plone: a Python-built CMS that has been around since the early 2000s and serves multiple websites (including many Brazilian government agencies, as far as I know).&lt;/p&gt;

&lt;p&gt;It's amazing how much you can build with Plone, and I hope its community can take us as far as the WordPress community has brought us.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Open WebUI + talkd.ai/Dialog: RAG deployment and Development made easy with awesome UI!</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Wed, 09 Oct 2024 11:55:28 +0000</pubDate>
      <link>https://dev.to/vmesel/open-webui-talkdaidialog-rag-deployment-and-development-made-easy-with-awesome-ui-3m10</link>
      <guid>https://dev.to/vmesel/open-webui-talkdaidialog-rag-deployment-and-development-made-easy-with-awesome-ui-3m10</guid>
      <description>&lt;p&gt;Hey fellow devs and open-source enthusiasts! 🎉 We've got some awesome news that's going to supercharge the way you build and interact with RAGs. &lt;/p&gt;

&lt;p&gt;We're super excited to announce that Open WebUI is our official front-end for RAG development. It's a total match!&lt;/p&gt;

&lt;p&gt;For those who don't know what talkd.ai/Dialog is:&lt;/p&gt;

&lt;h3&gt;
  
  
  talkd.ai/Dialog: the brain of the RAGs
&lt;/h3&gt;

&lt;p&gt;Now, talkd.ai/Dialog is where the magic of conversation happens. Our dialog API makes it a breeze to build chatbots and virtual assistants that actually understand what users are saying and can ensure appropriate context is being passed to the prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why talkd.ai/Dialog is a Game-Changer:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Smart as Heck: Uses top-notch natural language processing to get what users mean.&lt;/li&gt;
&lt;li&gt;Context-Aware: Keeps track of conversation context so it doesn’t sound like a broken record.&lt;/li&gt;
&lt;li&gt;Scalable: Can handle a ton of simultaneous chats without breaking a sweat.&lt;/li&gt;
&lt;li&gt;Integration Friendly: Plug it into just about anything – WhatsApp, Telegram, Slack, you name it and you can build your own plugin!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  OpenWebUI: The Universal Web Interface
&lt;/h2&gt;

&lt;p&gt;OpenWebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of OpenWebUI:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Effortless Setup: Install seamlessly using Docker or Kubernetes for a hassle-free experience&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ollama/OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pipelines, Open WebUI Plugin Support: Seamlessly integrate custom logic and Python libraries into OpenWebUI using Pipelines Plugin Framework. Examples include Function Calling, User Rate Limiting, Usage Monitoring, Live Translation, Toxic Message Filtering, and much more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What do I need to do to use talkd.ai/dialog with Open WebUI?
&lt;/h2&gt;

&lt;p&gt;To use both, you just need to clone the talkd.ai/dialog repository from GitHub using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/talkdai/dialog.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose -f docker-compose-open-webui.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will build the Docker images and let you access the Open WebUI server at &lt;code&gt;localhost:3000&lt;/code&gt; with talkd.ai already set up.&lt;/p&gt;

&lt;p&gt;If everything went well, you will be able to sign up locally on Open WebUI and see a page like the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72efc5oswvlgkgeb82wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72efc5oswvlgkgeb82wx.png" alt="Open WebUI + talkd.ai" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope you like this collab!&lt;/p&gt;

&lt;p&gt;Hugs from me and the talkd.ai team!&lt;/p&gt;

</description>
      <category>openwebui</category>
      <category>talkdai</category>
      <category>ui</category>
    </item>
    <item>
      <title>Open WebUI + talkd.ai/Dialog: RAG deployment and Development made easy with awesome UI!</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Tue, 25 Jun 2024 20:19:05 +0000</pubDate>
      <link>https://dev.to/vmesel/open-webui-talkdaidialog-rag-deployment-and-development-made-easy-with-awesome-ui-3gla</link>
      <guid>https://dev.to/vmesel/open-webui-talkdaidialog-rag-deployment-and-development-made-easy-with-awesome-ui-3gla</guid>
      <description>&lt;p&gt;Hey fellow devs and open-source enthusiasts! 🎉 We've got some awesome news that's going to supercharge the way you build and interact with RAGs. &lt;/p&gt;

&lt;p&gt;We're super excited to announce that Open WebUI is our official front-end for RAG development. It's a total match!&lt;/p&gt;

&lt;p&gt;For those who don't know what talkd.ai/Dialog is:&lt;/p&gt;

&lt;h3&gt;
  
  
  talkd.ai/Dialog: the brain of the RAGs
&lt;/h3&gt;

&lt;p&gt;Now, talkd.ai/Dialog is where the magic of conversation happens. Our dialog API makes it a breeze to build chatbots and virtual assistants that actually understand what users are saying and can ensure appropriate context is being passed to the prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why talkd.ai/Dialog is a Game-Changer:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Smart as Heck: Uses top-notch natural language processing to get what users mean.&lt;/li&gt;
&lt;li&gt;Context-Aware: Keeps track of conversation context so it doesn’t sound like a broken record.&lt;/li&gt;
&lt;li&gt;Scalable: Can handle a ton of simultaneous chats without breaking a sweat.&lt;/li&gt;
&lt;li&gt;Integration Friendly: Plug it into just about anything – WhatsApp, Telegram, Slack, you name it and you can build your own plugin!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  OpenWebUI: The Universal Web Interface
&lt;/h2&gt;

&lt;p&gt;OpenWebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of OpenWebUI:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Effortless Setup: Install seamlessly using Docker or Kubernetes for a hassle-free experience&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ollama/OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pipelines, Open WebUI Plugin Support: Seamlessly integrate custom logic and Python libraries into OpenWebUI using Pipelines Plugin Framework. Examples include Function Calling, User Rate Limiting, Usage Monitoring, Live Translation, Toxic Message Filtering, and much more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What do I need to do to use talkd.ai/dialog with Open WebUI?
&lt;/h2&gt;

&lt;p&gt;To use both, you just need to clone the talkd.ai/dialog repository from GitHub using the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone https://github.com/talkdai/dialog.git


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And run&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker-compose -f docker-compose-open-webui.yml up


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will build the Docker images and let you access the Open WebUI server at &lt;code&gt;localhost:3000&lt;/code&gt; with talkd.ai already set up.&lt;/p&gt;

&lt;p&gt;If everything went well, you will be able to sign up locally on Open WebUI and see a page like the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72efc5oswvlgkgeb82wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72efc5oswvlgkgeb82wx.png" alt="Open WebUI + talkd.ai"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope you like this collab!&lt;/p&gt;

&lt;p&gt;Hugs from me and the talkd.ai team!&lt;/p&gt;

</description>
      <category>openwebui</category>
      <category>talkdai</category>
      <category>ui</category>
    </item>
    <item>
      <title>talkd.ai got accepted into the GitHub Accelerator! (also our first official release)</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Thu, 23 May 2024 16:00:00 +0000</pubDate>
      <link>https://dev.to/vmesel/talkdai-got-accepted-into-the-github-accelerator-also-our-first-official-release-1ofc</link>
      <guid>https://dev.to/vmesel/talkdai-got-accepted-into-the-github-accelerator-also-our-first-official-release-1ofc</guid>
      <description>&lt;p&gt;Hey there!&lt;/p&gt;

&lt;p&gt;If you still don’t know us, we are &lt;a href="https://talkd.ai"&gt;talkd.ai&lt;/a&gt;, an open-source organization that is maintaining Dialog, a project focused on letting you easily deploy any LLM that you want (currently any of those available in the Langchain and partner libraries - we will cover more on that later).&lt;/p&gt;

&lt;p&gt;Today, we are very happy to announce two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;This is our first official public release, our "Numero Uno" and a starting point to this adventure that will be long and fun.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now it's official: we got accepted into the GitHub Accelerator 2024 AI Cohort - a cohort full of amazing people that started on April 22nd.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is the GitHub Accelerator?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9zcvvrih6gup4tf9690.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9zcvvrih6gup4tf9690.png" alt="Github Accelerator 2024 - AI Cohort" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GitHub Accelerator is running its second cohort with a slightly different approach from the usual: it focuses on making open source sustainable and helping maintainers find ways to fund full-time work on their projects.&lt;/p&gt;

&lt;p&gt;During the accelerator, you are connected with leading figures in AI, InfoSec, successful open-source maintainership, investment funds, and many other professionals who guide you through the many possibilities of open-source funding.&lt;/p&gt;

&lt;p&gt;The program also provides a stipend for 10 weeks, allowing you and your team to focus full-time on your project's development and communications, as well as credits for OpenAI and Microsoft Azure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to the project's history: How did the project start?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflw748ka1cf20lpc3ncx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflw748ka1cf20lpc3ncx.png" alt="@vmesel and @avelinorun" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project started to help us (&lt;a href="https://github.com/avelino"&gt;Thiago Avelino&lt;/a&gt; and I) create chat experiences that resembled human behavior in answering frequently asked questions inside our contexts (Avelino inside Buser and my context of wanting to learn more about LLM deployments and maintenance).&lt;/p&gt;

&lt;p&gt;Still, as the project grew, our contexts changed a lot, and so did the need for different techniques, retrievers, optimizations, and plugins.&lt;/p&gt;

&lt;p&gt;Nowadays, the project allows you to deploy any model that follows Langchain's LCEL or our library's chain model; you can choose which one to use and set it up from there.&lt;/p&gt;

&lt;p&gt;The process of getting to where we are right now involved lots of people, but I would like to give special thanks to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/lgabs/"&gt;Luan Fernandes&lt;/a&gt; - our Langchain specialist and long-time contributor&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/walison17/"&gt;Walison Filipe&lt;/a&gt; - our FastAPI master and testing guru&lt;/li&gt;
&lt;li&gt;Gregg and Kevin from the GitHub Accelerator - for helping us improve our software through communications, mentorship, invaluable resources, and connections&lt;/li&gt;
&lt;li&gt;Andreas, Alicia, Namee, and Jurgen - for their amazing projects and invaluable lessons on pitching.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What can I do with talkd.ai/dialog?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/talkdai/dialog"&gt;talkd.ai/dialog&lt;/a&gt; lets you deploy any LLM that you want (with the code already adapted to the LCEL or AbstractLLM models) in 5 minutes.&lt;/p&gt;

&lt;p&gt;With a simple &lt;code&gt;docker-compose up&lt;/code&gt; you have sample data and a sample prompt up and running.&lt;/p&gt;

&lt;p&gt;If you want to customize, you can use our library: &lt;a href="https://github.com/talkdai/dialog-lib"&gt;dialog-lib&lt;/a&gt;, to implement custom RAGs and Retrievers. We are fully integrated with SQLAlchemy, PGVector, Anthropic, and OpenAI.&lt;/p&gt;

&lt;p&gt;Here is our quick product demo to show you how simple it is to deploy our software:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/XUm5iqPyAYo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Here is the &lt;a href="https://github.blog/2024-05-23-2024-github-accelerator-meet-the-11-projects-shaping-open-source-ai/"&gt;official release link from GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>gpt4o</category>
      <category>talkdai</category>
    </item>
    <item>
      <title>GPT-4o: Learn how to Implement a RAG on the new model, step-by-step!</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Tue, 14 May 2024 02:48:00 +0000</pubDate>
      <link>https://dev.to/vmesel/gpt-4o-learn-how-to-implement-a-rag-on-the-new-model-step-by-step-377d</link>
      <guid>https://dev.to/vmesel/gpt-4o-learn-how-to-implement-a-rag-on-the-new-model-step-by-step-377d</guid>
      <description>&lt;p&gt;OpenAI just released their latest GPT model, GPT-4o, which is half of the price of the current GPT-4 model and way faster.&lt;/p&gt;

&lt;p&gt;In this article, we will show you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is a RAG&lt;/li&gt;
&lt;li&gt;Getting started: Setting up your OpenAI Account and API Key&lt;/li&gt;
&lt;li&gt;Using Langchain's Approach on talkdai/dialog&lt;/li&gt;
&lt;li&gt;Using GPT-4o in your content&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;PS: Before following this tutorial, we expect you to have basic knowledge of file editing and Docker.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a RAG?
&lt;/h2&gt;

&lt;p&gt;RAG (Retrieval-Augmented Generation) is a technique that combines retrieval approaches with generative Artificial Intelligence, pairing the power of a large language model (LLM) with a specific custom knowledge base.&lt;/p&gt;
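
&lt;p&gt;To make the idea concrete, here is a minimal, self-contained sketch of the retrieval step. The "embedding" below is a toy character-frequency vector and the knowledge base is hypothetical; a real pipeline would use a proper embedding model and a vector store, but the flow is the same: embed the question, retrieve the most similar content, and inject it into the prompt.&lt;/p&gt;

```python
import math

def embed(text):
    # Toy "embedding": a bag-of-letters frequency vector (illustrative only;
    # a real RAG would call an embedding model here).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical knowledge base; in Dialog this comes from your CSV file.
knowledge_base = [
    "My favorite soccer team is Palmeiras, from Brazil.",
    "Dialog is deployed locally with docker-compose.",
]

def build_prompt(question):
    # Retrieve the most similar document and inject it as context.
    context = max(knowledge_base, key=lambda doc: cosine(embed(question), embed(doc)))
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("How do I deploy Dialog?")
```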

&lt;h2&gt;
  
  
  Getting started: Setting up your OpenAI Account and API Key
&lt;/h2&gt;

&lt;p&gt;To get started, you will need to access the OpenAI Platform website and start a new account, if you still don't have one. The link to get into the form directly is: &lt;a href="https://platform.openai.com/signup" rel="noopener noreferrer"&gt;https://platform.openai.com/signup&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xqima9lisy569x8khg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xqima9lisy569x8khg3.png" alt="Sign up screen from OpenAI Platform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you submit the form with your e-mail and password, you will receive an e-mail (just like the one below) from OpenAI to activate and verify your account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yjx9onf6v4d6fney8g0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yjx9onf6v4d6fney8g0.png" alt="Verify your account e-mail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you, for some reason, don't see this e-mail in your inbox, try checking your spam. If it's still missing, just hit the resend button on the page to verify your e-mail address.&lt;/p&gt;

&lt;p&gt;And... Congrats! You are now logged in to the OpenAI Platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdy0zqja1to75ni2bjn2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdy0zqja1to75ni2bjn2i.png" alt="OpenAI Platform after login"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's generate your API token to be able to use the Chat Completion endpoint for GPT-4o, allowing you to use it with &lt;a href="https://github.com/talkdai/dialog" rel="noopener noreferrer"&gt;talkdai/dialog&lt;/a&gt;, a wrapper for Langchain with a simple-to-use API.&lt;/p&gt;

&lt;p&gt;On the left menu, you will find a sub-menu item called API Keys; click on it and you will be redirected to the API Keys page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqcnmycwm2e9fjaucof9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqcnmycwm2e9fjaucof9.png" alt="API Keys menu item"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If it's your first time setting up your account, you will be required to verify your phone number.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhds45528ft07umx7mmyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhds45528ft07umx7mmyi.png" alt="Verify your phone number pop up at the API Keys page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you verify your phone number, the &lt;code&gt;Create new secret key&lt;/code&gt; button will be enabled, as the following image shows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xgj0l45r2r5loweu704.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xgj0l45r2r5loweu704.png" alt="Create your secret button now is enabled"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenAI keys are now project-based, which means a project on your account will be assigned as the "host" of the generated key. This gives you more budget control and better observability of costs and token usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9hct1agxx9eqylty95c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9hct1agxx9eqylty95c.png" alt="Create a new secret key prompt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make the process easier, we will just name our API key and use the "All" permission type. For production scenarios this is not best practice, and we recommend you set up permissions wisely.&lt;/p&gt;

&lt;p&gt;After you hit Create, it will generate a unique secret key that you must store in a safe place. This key is only visible once, so if you ever need it again, you will have to retrieve it from wherever you saved it or generate a new one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazxvxsgksu2rn3653qx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazxvxsgksu2rn3653qx1.png" alt="OpenAI API Key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Langchain's approach on &lt;a href="https://github.com/talkdai/dialog" rel="noopener noreferrer"&gt;talkdai/dialog&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Langchain?
&lt;/h3&gt;

&lt;p&gt;Langchain is a framework that allows users to work with LLM models using chains (a chain is, conceptually, the combination of a prompt, an LLM model, and other features that are extensible depending on the use case).&lt;/p&gt;

&lt;p&gt;This framework has native support for OpenAI and other LLM models, enabling developers around the globe to create awesome applications using a Generative AI approach.&lt;/p&gt;
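
&lt;p&gt;Conceptually, a chain is just function composition. The sketch below is not the actual Langchain API; it only illustrates the idea, with a hypothetical stand-in for the model call:&lt;/p&gt;

```python
def prompt_template(question):
    # The prompt step: turn user input into instructions for the model.
    return f"You are a helpful bot. Answer briefly.\nQuestion: {question}"

def fake_llm(prompt):
    # Hypothetical stand-in for the LLM call; a real chain would call
    # OpenAI (or another provider) here.
    last_line = prompt.splitlines()[-1]
    return f"[model answer to: {last_line}]"

def chain(question):
    # A chain composes the steps: input, then prompt, then model, then output.
    return fake_llm(prompt_template(question))

result = chain("What is Dialog?")
```

&lt;p&gt;Langchain's LCEL expresses this same composition declaratively, which is what Dialog builds on.&lt;/p&gt;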

&lt;h3&gt;
  
  
  What is talkdai/dialog?
&lt;/h3&gt;

&lt;p&gt;talkdai/dialog, or simply Dialog, is an application that we've built to help users easily deploy any LLM agent they would like to use (an agent, in Langchain's case, is simply an instance of a chain - the joint usage of prompts and models).&lt;/p&gt;

&lt;p&gt;The key objective of talkdai/dialog is to enable developers to deploy LLMs in less than a day, without having any DevOps knowledge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up Dialog
&lt;/h3&gt;

&lt;p&gt;On your terminal, in the folder of your choice, clone our repository with the following command; it contains the basic structure to easily get GPT-4o up and running.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone https://github.com/talkdai/dialog


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the repository, you will need to add 3 files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your .env file, where the OpenAI API key will be stored (please don't commit this file). It should be based on the &lt;code&gt;.env.sample&lt;/code&gt; file at the root of the repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The file where you will define your prompt (we call it prompt.toml in the docs, but you can call it whatever you want)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And, last but not least, a CSV file with your content.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  .env file
&lt;/h3&gt;

&lt;p&gt;Copy and paste the &lt;code&gt;.env.sample&lt;/code&gt; file to the root directory of the repository and modify it using your data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

PORT=8000 # We recommend you use this as the default port locally
OPENAI_API_KEY= # That API Key we just fetched should be put here
DIALOG_DATA_PATH=./know.csv # The relative path for the csv file inside the root directory
PROJECT_CONFIG=./prompt.toml # The relative path for your prompt setup
DATABASE_URL=postgresql://talkdai:talkdai@db:5432/talkdai # Replace the existing value with this line
STATIC_FILE_LOCATION=static # This should be left as static
LLM_CLASS=dialog.llm.agents.lcel.runnable # This setting defines the Dialog Model Instance we are running; in this case, the latest LCEL version


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Prompt.toml - your prompt and model's settings
&lt;/h3&gt;

&lt;p&gt;This file lets you define your model settings and your initial prompt:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;[model]&lt;/span&gt;
&lt;span class="py"&gt;model_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"gpt-4o"&lt;/span&gt;
&lt;span class="py"&gt;temperature&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;

&lt;span class="nn"&gt;[prompt]&lt;/span&gt;
&lt;span class="py"&gt;prompt&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""
You are a nice bot, say something nice to the user and try to help him with his question, but also say to the user that you don't know totally about the content he asked for.
"""&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This file is quite straightforward: it has two sections. The first defines model details, allowing us to tweak the temperature and the model name.&lt;/p&gt;

&lt;p&gt;The second section is the most interesting: it's where you will define the initial prompt of your agent. This initial prompt will guide the operation of your agent during the instance's life.&lt;/p&gt;

&lt;p&gt;Most of the tweaks will be done by changing the prompt's text and the model's temperature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Knowledge base
&lt;/h3&gt;

&lt;p&gt;If you want your LLM to use knowledge that is specific to your context, this CSV file is where you add it.&lt;/p&gt;

&lt;p&gt;Right now, the CSV must have the following format:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;category - the category of that knowledge;&lt;/li&gt;
&lt;li&gt;subcategory - the subcategory of that knowledge;&lt;/li&gt;
&lt;li&gt;question - the title or the question that the content from that line answers, and&lt;/li&gt;
&lt;li&gt;content - the content in itself. This will be injected in the prompt whenever there is a similarity in the user's input with the embeddings generated by this content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a sample:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

category,subcategory,question,content
faq,football,"Whats your favorite soccer team","My favorite soccer team is Palmeiras, from Brazil. It loses some games, but its a nice soccer team"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
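&lt;p&gt;Before mounting your own CSV, you can sanity-check that it follows this format with a small snippet. This validation is my own suggestion, not part of Dialog:&lt;/p&gt;

```python
import csv
import io

# Inline sample with the same columns as the file above
SAMPLE = (
    "category,subcategory,question,content\n"
    '"faq","football","Whats your favorite soccer team",'
    '"My favorite soccer team is Palmeiras, from Brazil."\n'
)

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
required = {"category", "subcategory", "question", "content"}
# Columns that are required but absent from the file header
missing = required - set(rows[0].keys())
```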
&lt;h3&gt;
  
  
  LangChain behind the scenes
&lt;/h3&gt;

&lt;p&gt;In this tutorial, we will be using an instance of a Dialog Agent based on Langchain's LCEL. The code below is available in the repository you just cloned.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# For the sake of simplicity, I've removed some imports and comments
&lt;/span&gt;
&lt;span class="n"&gt;chat_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;openai_api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;Settings&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;configurable_fields&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ConfigurableField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GPT Model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The GPT model to use&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ConfigurableField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The temperature to use&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_messages&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nc"&gt;Settings&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;PROJECT_CONFIG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What can I help you with today?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nc"&gt;MessagesPlaceholder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;variable_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Here is some context for the user request: {context}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;human&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{input}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_memory_instance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;generate_memory_instance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;dbsession&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;get_session&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
        &lt;span class="n"&gt;database_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;Settings&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;DATABASE_URL&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DialogRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;get_session&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
    &lt;span class="n"&gt;embedding_llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;EMBEDDINGS_LLM&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;format_docs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;itemgetter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;format_docs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RunnablePassthrough&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;itemgetter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;chat_model&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;runnable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RunnableWithMessageHistory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;get_memory_instance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;input_messages_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;history_messages_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first lines are where we define our model name and temperature. By default, the model is &lt;code&gt;gpt-3.5-turbo&lt;/code&gt; and the temperature 0, but since we defined them in the prompt configuration file, they will be changed to &lt;code&gt;gpt-4o&lt;/code&gt; and 0.1.&lt;/p&gt;

&lt;p&gt;To send a prompt in LangChain, you need to use its template, which is what we do next with the &lt;code&gt;ChatPromptTemplate.from_messages&lt;/code&gt; call.&lt;/p&gt;

&lt;p&gt;In the next lines, we define the memory, how it should be consumed, and the retriever instance (where we are going to grab our custom data from).&lt;/p&gt;
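&lt;p&gt;To make the chain's input mapping more concrete, here is a tiny, dependency-free sketch of what the first dictionary in the chain produces before it reaches the prompt. &lt;code&gt;FakeDoc&lt;/code&gt; is a hypothetical stand-in for a LangChain Document:&lt;/p&gt;

```python
from operator import itemgetter

class FakeDoc:
    """Hypothetical stand-in mimicking a LangChain Document."""
    def __init__(self, page_content):
        self.page_content = page_content

def format_docs(docs):
    # Same helper as in the chain above: join retrieved contents
    return "\n\n".join(d.page_content for d in docs)

payload = {"input": "Whats your favorite soccer team", "chat_history": []}
retrieved = [FakeDoc("My favorite soccer team is Palmeiras, from Brazil.")]

# The dict fed into the prompt template by the chain's first step
mapped = {
    "context": format_docs(retrieved),
    "input": itemgetter("input")(payload),
    "chat_history": itemgetter("chat_history")(payload),
}
```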

&lt;h2&gt;
  
  
  Using GPT-4o in your content
&lt;/h2&gt;

&lt;p&gt;After this long setup, it's time to run our application and test it out. To do it, just run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker-compose up --build


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this command, your containers should be up and running after the image builds.&lt;/p&gt;

&lt;p&gt;After the logs state &lt;code&gt;Application startup complete.&lt;/code&gt;, go to your browser and open the address where you've hosted your API; in the default case, it is &lt;code&gt;http://localhost:8000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Go to the &lt;code&gt;/ask&lt;/code&gt; endpoint docs, fill in the JSON with the message you want to ask GPT, and get the answer, as in the following screenshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgkzjj0ri8i2jb8vz7qb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgkzjj0ri8i2jb8vz7qb.png" alt="It works!"&gt;&lt;/a&gt;&lt;/p&gt;
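&lt;p&gt;If you prefer to call the endpoint from code rather than the Swagger UI, here is a hedged sketch of building the request. The payload field name (&lt;code&gt;message&lt;/code&gt;) is an assumption; check the &lt;code&gt;/ask&lt;/code&gt; docs in your own instance for the exact schema:&lt;/p&gt;

```python
import json

def build_ask_request(base_url, message):
    # The payload field name ("message") is an assumption; verify it
    # against the /ask endpoint's Swagger docs in your instance.
    url = base_url.rstrip("/") + "/ask"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"message": message})
    return url, headers, body

url, headers, body = build_ask_request(
    "http://localhost:8000", "Whats your favorite soccer team"
)
```

&lt;p&gt;You can then send &lt;code&gt;url&lt;/code&gt;, &lt;code&gt;headers&lt;/code&gt; and &lt;code&gt;body&lt;/code&gt; with any HTTP client.&lt;/p&gt;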

&lt;p&gt;That's all! I hope you enjoyed this content and that talkdai/dialog becomes your go-to app for deploying LangChain RAGs and Agents.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>gpt4o</category>
      <category>talkdai</category>
    </item>
    <item>
      <title>Deploy your own ChatGPT in 5 minutes</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Thu, 25 Apr 2024 02:21:00 +0000</pubDate>
      <link>https://dev.to/vmesel/deploy-your-own-chatgpt-in-5-minutes-5d41</link>
      <guid>https://dev.to/vmesel/deploy-your-own-chatgpt-in-5-minutes-5d41</guid>
      <description>&lt;p&gt;Disclaimer: The files for this tutorial are available at: &lt;a href="https://github.com/talkdai/dialog-server-tutorial" rel="noopener noreferrer"&gt;our Github Tutorial Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you ever wondered about how you could deploy your own version of ChatGPT with your/your company's content, this is your time to shine: I have a wonderful solution for you.&lt;/p&gt;

&lt;p&gt;For those who don't know me or my project, I'm Vinnie, a software engineer and co-founder of the Talkd.AI project. We maintain a simple, easy-to-deploy app that allows any developer with some knowledge of Docker and Python to deploy their own RAG (Retrieval-Augmented Generation) application: the kind of application that uses a (vector) database with your custom content to return relevant data from and to your LLM.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do you need in order to use our Dialog server?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A computer/server with a docker instance up and running&lt;/li&gt;
&lt;li&gt;A CSV file with some contents from your company (we will specify the format later on)&lt;/li&gt;
&lt;li&gt;A TOML file for setting up your model and prompts&lt;/li&gt;
&lt;li&gt;An OpenAI API Key (&lt;a href="https://platform.openai.com/docs/quickstart" rel="noopener noreferrer"&gt;here is a tutorial to generate one&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;In order to start your own deployment of our Dialog server, you will need to use the following docker-compose file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pgvector/pgvector:0.6.2-pg15&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;5432:5432'&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;talkdai&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;talkdai&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;talkdai&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./etc/db-extensions.sql:/docker-entrypoint-initdb.d/db-extensions.sql&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pg_isready"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-d"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;talkdai"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-U"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;talkdai"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;dialog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/talkdai/dialog:latest&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./data/:/app/data/&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8000:8000'&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=8000&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DATABASE_URL=postgresql://talkdai:talkdai@db:5432/talkdai&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;OPENAI_API_KEY=sk-your-openai-api-key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;STATIC_FILE_LOCATION=/app/static&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DIALOG_DATA_PATH=../data/your.csv&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PROJECT_CONFIG=../data/your.toml&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  PGVector - Our first service
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5alzcfkb1rra5j9zaxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5alzcfkb1rra5j9zaxg.png" alt="PGVector + PSQL + OpenAI = &amp;lt;3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first service of this docker-compose file is a PostgreSQL instance equipped with pgvector, our vector database implementation of choice, since it ships inside a major open-source database.&lt;/p&gt;

&lt;p&gt;On this service, we've configured a volume mapping to a file whose content is the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EXTENSION&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This file simply enables the vector extension if it isn't already enabled in your database. If you are using another PGVector Docker image that already runs this command, you can skip this volume mount.&lt;/p&gt;

&lt;h3&gt;
  
  
  Second Service - Dialog Server
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoy7bf4wi11jd6wn1cl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoy7bf4wi11jd6wn1cl8.png" alt="talkd.ai = &amp;lt;3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the second service is our Dialog Server: our implementation of the RAG approach, built on top of FastAPI, LangChain and the pgvector Python extension.&lt;/p&gt;

&lt;p&gt;We also have a volume mapping inside this service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./data/:/app/data/&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This volume mapping allows us to provide our prompt information and your custom content in CSV format.&lt;/p&gt;

&lt;p&gt;The CSV file for loading your own ChatGPT must have the following columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;category&lt;/em&gt; - The category of that content&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;subcategory&lt;/em&gt; - The subcategory of that content&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;question&lt;/em&gt; - The question/title of that content&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;content&lt;/em&gt; - The content in itself, this content will generate the embeddings for our database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our example, we will add some questions regarding sports:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

category,subcategory,question,content
faq,football,"Whats your favorite soccer team","My favorite soccer team is Palmeiras, from Brazil."
faq,football,"Whats your favorite soccer player","My favorite soccer player is Neymar, from Brazil."


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Inside our prompt file (the .toml one mentioned before), we configure the prompt and how the bot speaks, which model and temperature it uses, and a fallback message so the bot can still answer when no content matches a certain user message.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;[model]&lt;/span&gt;
&lt;span class="py"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"gpt-3.5-turbo-1106"&lt;/span&gt;
&lt;span class="py"&gt;temperature&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;

&lt;span class="nn"&gt;[prompt]&lt;/span&gt;
&lt;span class="py"&gt;header&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""You are Robert, an AI bot that answers any questions that a user may have."""&lt;/span&gt;
&lt;span class="py"&gt;question_signalizer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Answer the following question:"&lt;/span&gt;

&lt;span class="py"&gt;suggested&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""Just answer the following question, don't change context and don't reply using more than 50 words."""&lt;/span&gt;

&lt;span class="py"&gt;fallback&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""
You're a nice bot, say something nice to the user and try to help him with his question, but also tell the user that you don't fully know about the content he asked.
"""&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Finishing the setup
&lt;/h4&gt;

&lt;p&gt;With all of these files saved inside the &lt;code&gt;./data/&lt;/code&gt; directory,  we are able to do the last part of the setup: setting our environment variables in order to deploy our bot!&lt;/p&gt;

&lt;p&gt;The environment variables that must be filled are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PORT - the port where the web server will be started (we recommend using 8000, but you can set your own)&lt;/li&gt;
&lt;li&gt;DATABASE_URL - the database URL (like &lt;code&gt;postgresql://talkdai:talkdai@db:5432/talkdai&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;OPENAI_API_KEY - The generated OpenAI API Key&lt;/li&gt;
&lt;li&gt;DIALOG_DATA_PATH - The relative path, from &lt;code&gt;/app/&lt;/code&gt;, to the data CSV file you mapped inside your docker-compose volume (our standard is to use the folder &lt;code&gt;/app/data&lt;/code&gt; to host this file, so the value of this variable must be &lt;code&gt;../data/your.csv&lt;/code&gt;)
&lt;/li&gt;
&lt;li&gt;PROJECT_CONFIG - The Prompt file location. The explanation is the same as above, but for the toml file. In this tutorial, we will set this as &lt;code&gt;../data/your.toml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
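&lt;p&gt;As a quick sanity check that nothing is missing, you could verify these variables from Python before starting the server. This helper is my own suggestion, not part of Dialog; the example values simply mirror the docker-compose file above:&lt;/p&gt;

```python
import os

# Example values mirroring the docker-compose file above; replace with your own
EXAMPLES = {
    "PORT": "8000",
    "DATABASE_URL": "postgresql://talkdai:talkdai@db:5432/talkdai",
    "OPENAI_API_KEY": "sk-your-openai-api-key",
    "DIALOG_DATA_PATH": "../data/your.csv",
    "PROJECT_CONFIG": "../data/your.toml",
}

# Only fills in variables that aren't already set in the environment
for name, example in EXAMPLES.items():
    os.environ.setdefault(name, example)

# Any required variable that is still unset or empty
missing = [name for name in EXAMPLES if not os.environ.get(name)]
```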
&lt;h2&gt;
  
  
  Final step: Deploying and running
&lt;/h2&gt;

&lt;p&gt;Now that we finally have our setup done, we can start our Dialog Server instance by running:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker-compose up &lt;span class="c"&gt;# -d # if you want to run it on background&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will need to wait a while until it builds everything, gets our pgvector instance up and running, and creates the embeddings for your custom content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oey68jzv8v5qre7decp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oey68jzv8v5qre7decp.png" alt="Dialog Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When it finishes, your docker-compose log will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqglj0ra7ceojnzxbl9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqglj0ra7ceojnzxbl9b.png" alt="Dialog Setup Done"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To access the API's Swagger UI, open the following URL in your browser (or the equivalent for your setup): &lt;code&gt;http://localhost:8000/docs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If everything is fine, the following page will load, and you can start asking questions to your own ChatGPT instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xo2d8lxcfo1w95ewobs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xo2d8lxcfo1w95ewobs.png" alt="Dialog Server Docs page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the &lt;code&gt;/ask&lt;/code&gt; endpoint, you will be able to send any message you want the ChatGPT to answer, such as &lt;code&gt;Whats your NBA team of heart?&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5bwznx6quijnhznufj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5bwznx6quijnhznufj2.png" alt="Asking dialog server what is it's favorite NBA team indirectly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And after hitting send, the answer comes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84f3m3b3ev7dkycb983b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84f3m3b3ev7dkycb983b.png" alt="Golden State Warrior is this Dialog's Instance Team of heart"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you followed this tutorial through, you are now able to deploy your own ChatGPT with custom content for your company or your website.&lt;/p&gt;

&lt;p&gt;Leave a like on this post if you enjoyed it, and share it with your friends.&lt;/p&gt;

&lt;p&gt;Also, if you want more information on dialog and talkd.ai, access our GitHub page: &lt;a href="https://github.com/talkdai/" rel="noopener noreferrer"&gt;https://github.com/talkdai/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>python</category>
      <category>dialog</category>
      <category>talkdai</category>
    </item>
    <item>
      <title>pRESTd release 1.1.4 - Query performance improvement</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Tue, 08 Nov 2022 18:04:13 +0000</pubDate>
      <link>https://dev.to/prestd/prestd-release-114-query-performance-improvement-25fn</link>
      <guid>https://dev.to/prestd/prestd-release-114-query-performance-improvement-25fn</guid>
      <description>&lt;p&gt;If you want to use your database as a data source for an API you are creating, but you are not happy with building it from scratch: pRESTd is for you!&lt;/p&gt;

&lt;p&gt;Basically pRESTd is a software that enables you to perform queries through API calls on your whole database system, improving delivery time for your development team.&lt;/p&gt;

&lt;p&gt;In this new release (1.1.4), we were able to work on a performance improvement on our querying system. Our base query used to use the following query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;json_agg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;your_awesome_table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which, &lt;a href="https://github.com/prest/prest/issues/730#issuecomment-1303886877"&gt;in our analysis&lt;/a&gt;, was about 5x slower than using the JSONB aggregation function.&lt;/p&gt;
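
pREST itself is written in Go, but the shape of the generated SQL is easy to sketch. Below is a hypothetical Python helper (the name `build_query` is made up for illustration; this is not pREST code) that builds the same paginated aggregation query that appears in the debug logs later in this post:

```python
def build_query(table, page=1, page_size=100, agg="jsonb_agg"):
    """Build a paginated aggregation query in the shape pREST logs.

    Hypothetical sketch -- pREST generates this SQL internally in Go.
    """
    offset = (page - 1) * page_size
    return (
        f"SELECT {agg}(s) FROM "
        f"(SELECT * FROM {table} LIMIT {page_size} OFFSET {offset}) s"
    )

# Same query shape as the jsonb_agg debug log line, page 1 with 100 rows:
print(build_query('"database"."public"."table"'))
```

Swapping the `agg` argument between `json_agg` and `jsonb_agg` is the whole difference between the two benchmark runs below.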

&lt;h2&gt;
  
  
  Benchmark
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---TVZBRRB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u642rrrtzhy0iqbn1md.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---TVZBRRB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u642rrrtzhy0iqbn1md.png" alt="Comparison between Serialization methods on pREST codebase" width="880" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see above, implementing the serialization in Golang performs better on smaller payloads, but as the requested data grows, performance no longer scales linearly.&lt;/p&gt;

&lt;p&gt;That's why we chose JSONB_AGG as our serializer on the database side, which also saves us time on development and test writing.&lt;/p&gt;

&lt;p&gt;You can see the request data available in our PR, in the sections below.&lt;/p&gt;

&lt;h3&gt;
  
  
  With JSON_AGG()
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2022/11/04 16:54:35 [warning] adapter is not set. Using the default (postgres)
2022/11/04 16:54:35 [warning] command.go:920 You are running prestd in debug mode.
[prestd] listening on 0.0.0.0:80 and serving on /
2022/11/04 16:54:41 [debug] server.go:2084 generated SQL:SELECT json_agg(s) FROM (SELECT * FROM "database"."public"."table" LIMIT 1000 OFFSET(1 - 1) * 1000) s parameters: []
[negroni] 2022-11-04T16:54:41Z | 200 |   5.11392325s | 54.186.223.54 | GET /database/public/table
2022/11/04 16:55:18 [debug] server.go:2084 generated SQL:SELECT json_agg(s) FROM (SELECT * FROM "database"."public"."table" LIMIT 100 OFFSET(1 - 1) * 100) s parameters: []
[negroni] 2022-11-04T16:55:18Z | 200 |   504.78063ms | 54.186.223.54 | GET /database/public/table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  With JSONB_AGG()
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[prestd] listening on 0.0.0.0:80 and serving on /
2022/11/04 16:59:26 [debug] server.go:2084 generated SQL:SELECT jsonb_agg(s) FROM (SELECT * FROM "database"."public"."table" LIMIT 100 OFFSET(1 - 1) * 100) s parameters: []
[negroni] 2022-11-04T16:59:26Z | 200 |   479.260256ms | 54.186.223.54 | GET /database/public/table
2022/11/04 17:00:05 [debug] server.go:2084 generated SQL:SELECT jsonb_agg(s) FROM (SELECT * FROM "database"."public"."table" LIMIT 1000 OFFSET(1 - 1) * 1000) s parameters: []
[negroni] 2022-11-04T17:00:05Z | 200 |   1.912713761s | 54.186.223.54 | GET /database/public/table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  About the benchmark
&lt;/h3&gt;

&lt;p&gt;We used a table with 56 million rows and two indexes (on the primary date and id fields), and both API calls (and their queries) were paginated.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>prestd</category>
      <category>api</category>
    </item>
    <item>
      <title>What the heck are Python Generator Expressions and List Comprehensions?</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Sun, 10 Feb 2019 12:29:58 +0000</pubDate>
      <link>https://dev.to/vmesel/what-the-heck-are-python-generator-expressions-and-list-comprehensions-55hh</link>
      <guid>https://dev.to/vmesel/what-the-heck-are-python-generator-expressions-and-list-comprehensions-55hh</guid>
      <description>

&lt;p&gt;Have you ever thought about initializing a list with its values in a one-liner expression? That's totally possible in Python using list comprehensions. But sometimes you won't use the stored values for a while and just want to set them up for future use... That's a simple use case for generator expressions.&lt;/p&gt;

&lt;p&gt;PS: If you have read Fluent Python by &lt;a class="comment-mentioned-user" href="https://dev.to/ramalhoorg"&gt;@ramalhoorg&lt;/a&gt;, there is nothing new here, but you can share this text with a friend so they can learn more about Python.&lt;/p&gt;

&lt;p&gt;PS 2: Thanks &lt;a class="comment-mentioned-user" href="https://dev.to/ramalhoorg"&gt;@ramalhoorg&lt;/a&gt; for the examples in the book; they were very useful and some of them are used right here!&lt;/p&gt;

&lt;h2&gt;
  
  
  What are List Comprehensions?
&lt;/h2&gt;

&lt;p&gt;A list comprehension is an expression that creates a list with its values already populated. Take a look at the example below:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;my_incredible_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;my_incredible_list&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This list comprehension is equivalent to a for loop that appends values to a list.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;my_incredible_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt;     &lt;span class="n"&gt;my_incredible_list&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;my_incredible_list&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With list comprehensions, you can also generate lists of tuples by iterating over two other lists.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;foods&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"apple banana sausages"&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# We declare a list here with foods
&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;beverages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"coca-cola pepsi guarana"&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# We declare a list of beverages
&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="n"&gt;food&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;beverage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;food&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;foods&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;beverage&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;beverages&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="s"&gt;'apple'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'coca-cola'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'apple'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'pepsi'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'apple'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'guarana'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'banana'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'coca-cola'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'banana'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'pepsi'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'banana'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'guarana'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'sausages'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'coca-cola'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'sausages'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'pepsi'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'sausages'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'guarana'&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
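
As a side note (my own addition, not from the book examples): the nested-for comprehension above is equivalent to `itertools.product` from the standard library, which can read more clearly when the combinations grow:

```python
from itertools import product

foods = "apple banana sausages".split()
beverages = "coca-cola pepsi guarana".split()

# product() yields the same (food, beverage) pairs, in the same order,
# as the nested list comprehension above
pairs = list(product(foods, beverages))
print(len(pairs))  # 9
```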



&lt;h2&gt;
  
  
  What are Generator Expressions?
&lt;/h2&gt;

&lt;p&gt;A generator expression looks just like a list comprehension, but it doesn't store its values in memory: it produces each value only when the generator is iterated. Take a look at the example below:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;my_incredible_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;my_incredible_list&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;generator&lt;/span&gt; &lt;span class="nb"&gt;object&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;genexpr&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;at&lt;/span&gt; &lt;span class="mh"&gt;0x7f14149a3db0&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
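
To make the "doesn't store its values in memory" point concrete, here is a small sketch using `sys.getsizeof` (exact byte counts vary across Python versions and platforms, so take the comments as rough orders of magnitude):

```python
import sys

big_list = [x for x in range(1_000_000)]  # one million ints held in memory
big_gen = (x for x in range(1_000_000))   # only the iteration state is held

print(sys.getsizeof(big_list))  # several megabytes
print(sys.getsizeof(big_gen))   # a couple hundred bytes, independent of the range size
```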



&lt;p&gt;To consume a generator's values, you can either iterate over it with a for loop or fetch them one at a time with the built-in &lt;code&gt;next()&lt;/code&gt; function.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;gt;&amp;gt;&amp;gt; # Using the next() function
&amp;gt;&amp;gt;&amp;gt; next(my_incredible_list)
0
&amp;gt;&amp;gt;&amp;gt; next(my_incredible_list)
1
&amp;gt;&amp;gt;&amp;gt; next(my_incredible_list)
2 
&amp;gt;&amp;gt;&amp;gt; next(my_incredible_list)
3
&amp;gt;&amp;gt;&amp;gt; next(my_incredible_list)
4
&amp;gt;&amp;gt;&amp;gt; next(my_incredible_list)
Traceback (most recent call last):
  File "&amp;lt;stdin&amp;gt;", line 1, in &amp;lt;module&amp;gt;
StopIteration
&amp;gt;&amp;gt;&amp;gt;
&amp;gt;&amp;gt;&amp;gt; # Using a for loop (recreating the generator first, since it is exhausted)
&amp;gt;&amp;gt;&amp;gt; my_incredible_list = (x for x in range(5))
&amp;gt;&amp;gt;&amp;gt; for item in my_incredible_list:
...     print(item)
... 
0
1
2
3
4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
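
Since a generator produces each value exactly once, it pairs naturally with aggregating functions like `sum()`; just remember that once consumed, it is exhausted for good:

```python
squares = (x * x for x in range(5))

print(sum(squares))   # 30 (0 + 1 + 4 + 9 + 16)
print(list(squares))  # [] -- the generator was exhausted by sum()
```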



&lt;h2&gt;
  
  
  Checking out the bytecode of both of them to see the differences
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Bytecode instructions from the Python disassembler
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; import dis
&amp;gt;&amp;gt;&amp;gt; my_incredible_list = [x for x in range(5)]
&amp;gt;&amp;gt;&amp;gt; dis.dis(my_incredible_list)
Traceback (most recent call last):
  File "&amp;lt;stdin&amp;gt;", line 1, in &amp;lt;module&amp;gt;
  File "/usr/lib/python3.6/dis.py", line 67, in dis
    type(x).__name__)
TypeError: don't know how to disassemble list objects
&amp;gt;&amp;gt;&amp;gt; my_incredible_list = (x for x in range(5))
&amp;gt;&amp;gt;&amp;gt; dis.dis(my_incredible_list)
  1           0 LOAD_FAST                0 (.0)
        &amp;gt;&amp;gt;    2 FOR_ITER                10 (to 14)
              4 STORE_FAST               1 (x)
              6 LOAD_FAST                1 (x)
              8 YIELD_VALUE
             10 POP_TOP
             12 JUMP_ABSOLUTE            2
        &amp;gt;&amp;gt;   14 LOAD_CONST               0 (None)
             16 RETURN_VALUE
&amp;gt;&amp;gt;&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, as we can see, disassembling the list comprehension's result is not possible: &lt;code&gt;dis&lt;/code&gt; cannot disassemble a plain data structure (here, the list object), but it can still disassemble the generator object, which carries its code with it.&lt;/p&gt;

&lt;p&gt;When we run &lt;code&gt;dis.dis()&lt;/code&gt; on the generator expression, we can see that each value is yielded (&lt;code&gt;YIELD_VALUE&lt;/code&gt;) and then popped off the stack (&lt;code&gt;POP_TOP&lt;/code&gt;), so it is never kept around.&lt;/p&gt;

&lt;p&gt;If we run &lt;code&gt;dis.dis()&lt;/code&gt; on the list comprehension's source instead, we can see the procedure that creates the list:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; dis.dis("[x for x in range(5)]")
  1           0 LOAD_CONST               0 (&amp;lt;code object &amp;lt;listcomp&amp;gt; at 0x7f14149c84b0, file "&amp;lt;dis&amp;gt;", line 1&amp;gt;)
              2 LOAD_CONST               1 ('&amp;lt;listcomp&amp;gt;')
              4 MAKE_FUNCTION            0
              6 LOAD_NAME                0 (range)
              8 LOAD_CONST               2 (5)
             10 CALL_FUNCTION            1
             12 GET_ITER
             14 CALL_FUNCTION            1
             16 RETURN_VALUE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, as we can see, &lt;code&gt;MAKE_FUNCTION&lt;/code&gt; turns the comprehension's code object into a function, &lt;code&gt;range&lt;/code&gt; is loaded with its argument (5) and called, &lt;code&gt;GET_ITER&lt;/code&gt; obtains an iterator, and then the comprehension function is called to build and save the whole list at once, instead of yielding one value each time a generator is called.&lt;/p&gt;


</description>
      <category>python</category>
      <category>listcomp</category>
      <category>genexpr</category>
      <category>listcomprehension</category>
    </item>
    <item>
      <title>Nomad Couch: Find a tech nomad to share a place with you </title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Sun, 04 Nov 2018 21:20:25 +0000</pubDate>
      <link>https://dev.to/vmesel/nomad-couch-find-a-tech-nomad-to-share-a-place-with-you--1o49</link>
      <guid>https://dev.to/vmesel/nomad-couch-find-a-tech-nomad-to-share-a-place-with-you--1o49</guid>
      <description>

&lt;p&gt;Hey guys, my friend Guilherme is going to live in Thailand for a year and is looking for an indie hacker/nomad interested in sharing a place with him for a week.&lt;/p&gt;

&lt;p&gt;To help digital nomads find their couches (like a couch-surfing app), we've created Nomad Couch, an online spreadsheet for finding your next nomad family to stay with and enjoy new moments together.&lt;/p&gt;

&lt;p&gt;Help us with your feedback - &lt;a href="http://nomadcouch.xyz"&gt;http://nomadcouch.xyz&lt;/a&gt;&lt;/p&gt;


</description>
      <category>nomad</category>
      <category>digitalnomad</category>
    </item>
    <item>
      <title>Why Should You Care About Software Maintainability?</title>
      <dc:creator>Vinicius Mesel</dc:creator>
      <pubDate>Mon, 22 Oct 2018 02:08:44 +0000</pubDate>
      <link>https://dev.to/vmesel/why-should-you-care-about-software-maintainability-9e</link>
      <guid>https://dev.to/vmesel/why-should-you-care-about-software-maintainability-9e</guid>
      <description>

&lt;p&gt;Real quick: this essay is mainly written for those who are not software developers yet, or who don't code as their principal source of income.&lt;/p&gt;

&lt;p&gt;Hey guys, as you all may know (for those who don't, I'll explain real quick here), I changed jobs and now work for a bank in Brazil (I won't share its name here, as I'm not paid to advertise them). Coding is a side thing in my career there; I mainly work with investment analysis in Excel (VBA) and other Microsoft applications. &lt;del&gt;Investment things are very nice, but Excel sucks.&lt;/del&gt;&lt;/p&gt;

&lt;p&gt;As I've seen there, lots of people in investment banking are using programming languages to get their work done faster and better than before, which is great, but the point is: &lt;strong&gt;Is the code produced by people who aren't software engineers maintainable in a production scenario?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Software engineers who care about code quality write it with error handling, data IO, test-driven development, code readability, code complexity and many other aspects in mind that beginners and non-experts often overlook.&lt;/p&gt;

&lt;h2&gt;Why should I f*****g care about software maintainability?&lt;/h2&gt;

&lt;p&gt;If your code will only ever be used by you for a single task, you don't need to care much about these things: your code mainly needs to solve your problem. But when you work with multiple people in the same project/field/area/department/whatever, you won't be the only one using what you are coding.&lt;/p&gt;

&lt;p&gt;For maintainability purposes, the Python community created two PEPs that should be followed as good software-engineering practice. Those PEPs are:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;&lt;a href="https://www.python.org/dev/peps/pep-0008/"&gt;PEP 8 - Which states how a user should write code (mainly style guide)&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href="https://www.python.org/dev/peps/pep-0020/"&gt;PEP 20 - import this; or mainly known as Zen of Python (this little text that is below here)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As shown above, simplicity and readability count when developing new software that will actually be used.&lt;/p&gt;

&lt;p&gt;Imagine joining a company where there is no documentation for the code: no deploy instructions, no development setup, just the code. Would you be able to pick it up and continue where the other people left off? Probably not. We all need best practices for better code maintenance and for onboarding developers onto new systems! Maintaining a big code base without good practices is like this meme:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TxYrYi9a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.geek.com/wp-content/uploads/2016/08/this-is-fine-meme-625x350.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TxYrYi9a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.geek.com/wp-content/uploads/2016/08/this-is-fine-meme-625x350.jpg" alt="Image result for everything is fine dog meme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The best coding practices are those that all developers on your team know and apply, and that can be passed on to new developers. Here are some examples of best practices that may be useful:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;If well-written comments (or docstrings) are helpful in your code base and guide new developers through maintenance, keep them and update the ones that go stale. Don't keep dull comments, such as:&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Example of a dull comment

a = 1 + 2 # this line adds 1 with 2 and saves to

# Example of a needed comment

def really_cool_function_that_sums_to_numbers(x, y):
    # This function sums X and Y and returns the result

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you need multiple lines of comments to make your code understandable, it's a good time to think about how to improve the code itself. Code should be readable and easily understandable (a PEP 20 lesson).&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Functions should do just one thing (sounds redundant, right?), like pure functions (a concept from functional programming: functions that cause no side effects on global variables or other functions).&lt;/li&gt;
&lt;/ul&gt;
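
To illustrate the bullet above, here is a minimal contrast between an impure and a pure version of the same sum (my own example, not from the original post):

```python
# Impure: writes to a global variable, so calling it has effects at a distance
totals = []

def add_and_record(x, y):
    result = x + y
    totals.append(result)  # side effect on global state
    return result

# Pure: the result depends only on the inputs; nothing else is touched
def add(x, y):
    return x + y

print(add(2, 3))  # always 5, regardless of any previous calls
```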

&lt;p&gt;There are plenty of best practices and code-style guides to follow for your language of choice. The main point of this text is: remember that other developers will maintain your code in the near future, and you will probably maintain it too without remembering lots of things. Make your code simple so people can understand it and continue your work.&lt;/p&gt;

&lt;p&gt;Originally published on: &lt;a href="http://www.vmesel.com/2018/10/22/care-with-software-maintainability/"&gt;Vmesel's blog&lt;/a&gt;&lt;/p&gt;


</description>
      <category>python</category>
      <category>maintainability</category>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
