<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Raphael Araújo</title>
    <description>The latest articles on DEV Community by Raphael Araújo (@raphox).</description>
    <link>https://dev.to/raphox</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F140127%2F28cfcdfa-f368-4992-bedc-1cfe9d4f85fc.jpeg</url>
      <title>DEV Community: Raphael Araújo</title>
      <link>https://dev.to/raphox</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/raphox"/>
    <language>en</language>
    <item>
      <title>Integrating Google Firebase Firestore with ChatGPT API. Saving money</title>
      <dc:creator>Raphael Araújo</dc:creator>
      <pubDate>Fri, 14 Apr 2023 11:55:46 +0000</pubDate>
      <link>https://dev.to/raphox/integrating-google-firebase-firestore-with-chatgpt-api-saving-money-291d</link>
      <guid>https://dev.to/raphox/integrating-google-firebase-firestore-with-chatgpt-api-saving-money-291d</guid>
      <description>&lt;p&gt;As commented in &lt;a href="https://medium.com/@raphox/trying-to-maintain-a-workable-budget-creating-a-chatbot-using-gpt-and-vector-database-a04931040698"&gt;my previous post&lt;/a&gt;, I developed a services architecture to save a little when consuming the OpenAI API and the gpt-3.5-turbo model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hg4LG3OL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2Asnj5w5lab75v34pK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hg4LG3OL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2Asnj5w5lab75v34pK.png" alt="The final version of the architecture from my previous post." width="634" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I’m going to show you some of the code I added to &lt;a href="https://firebase.google.com/products/functions"&gt;Firebase Functions&lt;/a&gt;, which runs every time a new question is inserted into Firestore.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
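The gist embed above does not render in the feed. As a rough, stdlib-only Python sketch of what that function does (the real code is part of the Node.js project; the endpoint URL and field names here are illustrative, not the actual gist):

```python
import json
import urllib.request

# Hypothetical Render.com endpoint; the real URL is not shown in the post.
RENDER_ENDPOINT = "https://example.onrender.com/questions"

def build_payload(question_id, data):
    """Shape the new Firestore document into the JSON body sent to Render.com."""
    return {"id": question_id, "question": data.get("question", ""), "status": "pending"}

def forward_question(question_id, data, endpoint=RENDER_ENDPOINT):
    """POST the new question to the external service and return the HTTP status."""
    body = json.dumps(build_payload(question_id, data)).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # network call; fails offline
        return resp.status
```

The real Cloud Function is bound to a Firestore document-creation trigger; only the forwarding logic is sketched here.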
&lt;br&gt;
This function sends the question data registered by the user to a service on &lt;a href="https://render.com/"&gt;Render.com&lt;/a&gt;, where the ChatGPT API is consumed.

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rcdRYIce--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2AUj5R60YIeWOK02Xr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rcdRYIce--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2AUj5R60YIeWOK02Xr.png" alt="Structure of question on Firestore" width="700" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s worth remembering why I didn’t do everything on the Firebase Cloud Function side:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; charges based on how long your function runs, as well as the number of invocations and provisioned resources. As the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API can be slow to respond depending on the complexity of your query, you could end up paying a lot for the time your function is waiting for the API response.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At the end of the process, the answer to the question will be updated in Firestore based on the data received from the ChatGPT API.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;We can highlight some important snippets of the previous code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lines 13 and 14:&lt;/strong&gt; These are custom methods that communicate with Pinecone and the OpenAI API. I suggest looking for more information at &lt;a href="https://python.langchain.com/en/latest/use_cases/question_answering.html"&gt;https://python.langchain.com/en/latest/use_cases/question_answering.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Line 60:&lt;/strong&gt; In the preceding lines, the code searches the database of questions already asked by users and finds the most similar one. Based on that most similar question, line 60 checks whether the similarity is so close (95%) that the answer from the previous question can be reused to answer the new one. As I noted in my previous post, this comparison does not cope well with distinct questions such as ‘How much does 1 kg of your product cost?’ and ‘How much does 1 g of your product cost?’.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Line 71:&lt;/strong&gt; This part of the code solved my problem with the OpenAI API delay. Some may wonder why I haven’t used background processing queues. But as I mentioned in the previous post, my goal for now is to look for cheaper alternatives. Paying for a Redis instance and a full-time worker is not in my current plans, but changing that is definitely one of my future plans.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
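The decision described in the ‘Line 60’ bullet can be sketched as follows (a hedged illustration with hypothetical names, not the actual gist code):

```python
SIMILARITY_THRESHOLD = 0.95  # the 95% cutoff described above

def resolve_answer(best_match, ask_gpt, question):
    """Reuse the most similar previous answer when the similarity score is
    at least 0.95; otherwise fall back to the (paid) ChatGPT API."""
    if best_match and best_match["score"] >= SIMILARITY_THRESHOLD:
        return best_match["answer"]
    return ask_gpt(question)
```

Here `best_match` stands for the top result of the vector search, and `ask_gpt` for the function that actually calls the OpenAI API.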

&lt;h3&gt;
  Documentation that can help you during development:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://firebase.google.com/docs/firestore/quickstart"&gt;Getting started with Cloud Firestore&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://firebase.google.com/docs/functions/get-started"&gt;Get started now: write, test, and deploy the first functions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  References:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://flask-httpauth.readthedocs.io/en/latest/"&gt;https://flask-httpauth.readthedocs.io/en/latest/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://zoltan-varadi.medium.com/flask-api-how-to-return-response-but-continue-execution-828da40881e7"&gt;https://zoltan-varadi.medium.com/flask-api-how-to-return-response-but-continue-execution-828da40881e7&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Integrating the ChatGPT API with Google Firebase Firestore</title>
      <dc:creator>Raphael Araújo</dc:creator>
      <pubDate>Thu, 06 Apr 2023 16:05:00 +0000</pubDate>
      <link>https://dev.to/raphox/integrando-a-api-do-chatgpt-com-o-google-firebase-firestore-34pk</link>
      <guid>https://dev.to/raphox/integrando-a-api-do-chatgpt-com-o-google-firebase-firestore-34pk</guid>
      <description>&lt;p&gt;As mentioned in my &lt;a href="https://dev.to/raphox/tentando-nao-ficar-pobre-antes-de-ficar-rico-criando-uma-startup-de-servicos-de-inteligencia-artificial-1mag"&gt;previous post&lt;/a&gt;, I developed a service architecture to try to save a little when consuming the OpenAI API and the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;gpt-3.5-turbo&lt;/a&gt; model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fw9WMYml--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2AaCp-zoKjt2LKnXka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fw9WMYml--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2AaCp-zoKjt2LKnXka.png" alt="Arquitetura final descrita no [post anterior](https://dev.to/raphox/tentando-nao-ficar-pobre-antes-de-ficar-rico-criando-uma-startup-de-servicos-de-inteligencia-artificial-1mag)." width="634" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I’m going to show some of the code I added to &lt;a href="https://firebase.google.com/products/functions"&gt;Firebase Functions&lt;/a&gt;, which runs every time a new question is inserted into Firestore.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
This function sends the question data registered by the user to a service on &lt;a href="https://render.com/"&gt;Render.com&lt;/a&gt;, where the ChatGPT API is consumed.

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C8IdWaYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2Ah2p7UmotFpeZjiBOnUspeQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C8IdWaYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2Ah2p7UmotFpeZjiBOnUspeQ.png" alt="Estrutura de perguntas no Firestore." width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s worth recalling why I didn’t do everything on the Firebase Cloud Function side:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Functions&lt;/a&gt; bills based on how long your function runs, as well as the number of invocations and provisioned resources. As the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API can be slow to respond depending on the complexity of your query, you could end up paying a lot for the time your function spends waiting for the API response.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At the end of the process, the question’s answer is updated in Firestore based on the data received from the ChatGPT API.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
We can highlight some important snippets of the previous code:

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lines 13 and 14:&lt;/strong&gt; These are custom methods that handle communication with Pinecone and the OpenAI API. I suggest looking for more information at &lt;a href="https://python.langchain.com/en/latest/use_cases/question_answering.html"&gt;https://python.langchain.com/en/latest/use_cases/question_answering.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Line 60:&lt;/strong&gt; In the preceding lines, the code searches the database of questions already asked by users and finds the most similar one. Based on that most similar question, line 60 checks whether the similarity is so close (95%) that the previous question’s answer can be reused to answer the new one. As I noted in my previous post, this comparison does not cope well with distinct questions such as ‘How much does 1 kg of your product cost?’ and ‘How much does 1 g of your product cost?’.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Line 71:&lt;/strong&gt; This part of the code solved my problem with the OpenAI API delay. Some may wonder why I didn’t use background processing queues. But as I mentioned in the &lt;a href="https://dev.to/raphox/tentando-nao-ficar-pobre-antes-de-ficar-rico-criando-uma-startup-de-servicos-de-inteligencia-artificial-1mag"&gt;previous post&lt;/a&gt;, my goal for now is to look for cheaper alternatives. Paying for a Redis instance and a full-time worker is not in my current plans, but changing that is definitely one of my future plans.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  Documentation that can help you during development:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://firebase.google.com/docs/firestore/quickstart"&gt;Para começar com o Cloud Firestore&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://firebase.google.com/docs/functions/get-started"&gt;Comece agora: escrever, testar e implantar as primeiras funções&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  References:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://flask-httpauth.readthedocs.io/en/latest/"&gt;https://flask-httpauth.readthedocs.io/en/latest/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://zoltan-varadi.medium.com/flask-api-how-to-return-response-but-continue-execution-828da40881e7"&gt;https://zoltan-varadi.medium.com/flask-api-how-to-return-response-but-continue-execution-828da40881e7&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Trying to Maintain a Workable Budget Creating a Chatbot Using GPT and Vector Database</title>
      <dc:creator>Raphael Araújo</dc:creator>
      <pubDate>Wed, 05 Apr 2023 10:47:03 +0000</pubDate>
      <link>https://dev.to/raphox/trying-to-maintain-a-workable-budget-creating-a-chatbot-using-gpt-and-vector-database-5hg5</link>
      <guid>https://dev.to/raphox/trying-to-maintain-a-workable-budget-creating-a-chatbot-using-gpt-and-vector-database-5hg5</guid>
      <description>&lt;p&gt;In this post, I’ll show you how to use a vector database to lower GPT &lt;a href="https://openai.com/pricing"&gt;token costs&lt;/a&gt; in a Q&amp;amp;A application. The vector database I chose was &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, which allows you to store and query high-dimensional vectors in an efficient and scalable way. The idea is to turn the questions and answers into vectors using a pre-trained natural language model, such as the &lt;a href="https://platform.openai.com/docs/models/embeddings"&gt;text-embedding-ada-002&lt;/a&gt; model, and then use &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; to find the most similar answers to users’ questions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Vector databases can provide better text query results than SQL databases because they use a mathematical representation of the data called a vector, which allows you to measure the similarity between documents and queries using operations such as distance or angle. SQL databases, on the other hand, use a structured query language (SQL) that can be limited to searching for exact or fuzzy matches of words or phrases using the CONTAINS command. Also, SQL databases can require more resources and time to index and search large amounts of text data than vector databases.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To manage the questions, I used &lt;a href="https://firebase.google.com/docs/firestore?hl=pt-br"&gt;Firebase Firestore&lt;/a&gt;, a cloud NoSQL database that offers real-time sync and offline support. Each question is stored in a document with a unique identifier and a field for the answer. Firestore also lets you create cloud functions that are triggered by events in the database, such as creating, updating, or deleting documents. I used a &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; that fires whenever a new question is registered in Firestore. This function is responsible for sending the question to the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API, a service that uses the &lt;a href="https://openai.com/pricing"&gt;GPT-3.5-turbo&lt;/a&gt; model to generate conversational responses. The &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API returns the answer in text format, which is then converted to a vector using the same natural language model that was used for the questions. This vector is sent to &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, which stores it for future queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y5B7-ky1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AkIVdrY2jIFEu2cYB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y5B7-ky1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AkIVdrY2jIFEu2cYB.png" alt="Infrastructure in version 1." width="619" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One advantage of using &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; is that it allows similarity queries using the question and answer vectors. So when a user asks a question, I don’t need to send it to &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; and spend GPT tokens. I can simply convert the question into a vector and send it to &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, which returns the identifiers of the most similar vectors. Based on the similarity score between the new question and questions asked before, I can adopt an existing answer as the new question’s answer. This significantly reduces GPT token costs, as I only need to use &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; to generate answers to questions that are new or significantly different from existing ones.&lt;/p&gt;
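Here ‘most similar’ boils down to vector similarity, typically cosine similarity, which Pinecone computes at scale over its index. A minimal illustration of the underlying measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
```

Real embeddings (such as text-embedding-ada-002 output) have hundreds of dimensions; the math is the same.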

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It can be quite a challenge to find the exact answer to the user’s question based on previous questions. Example: ‘How much does 1 kg of your product cost?’ versus ‘How much does 1 g of your product cost?’. The vectorized texts of the two questions will likely have a similarity score of almost 1.0, which can be a problem depending on your use case. I haven’t come up with anything concrete to solve this, but I believe there are ways around it by defining distinctions between words such as ‘kilogram’ and ‘gram’.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To compare the cost of &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; queries with the &lt;a href="https://openai.com/pricing"&gt;GPT-3.5-turbo&lt;/a&gt; model, I used the following values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The price of &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; is $0.096 / hour.&lt;/li&gt;
&lt;li&gt;The price of &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; is $0.002 / 1K gpt-3.5-turbo tokens.&lt;/li&gt;
&lt;/ul&gt;
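To make the comparison concrete, a back-of-the-envelope sketch (the per-question token count is illustrative):

```python
GPT_PRICE_PER_1K_TOKENS = 0.002  # USD, gpt-3.5-turbo
PINECONE_PRICE_PER_HOUR = 0.096  # USD

def gpt_cost(tokens):
    """USD cost of a gpt-3.5-turbo call consuming the given number of tokens."""
    return tokens / 1000 * GPT_PRICE_PER_1K_TOKENS

# At roughly 500 tokens per question, one hour of Pinecone costs about the
# same as 96 ChatGPT calls, so reuse pays off once traffic is high enough.
```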

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Tokens are common strings of characters found in text. GPT processes text using tokens and understands the statistical relationships between them. Tokens can include spaces and even subwords. The maximum number of tokens that GPT can take as input depends on the model and tokenizer used. For example, the text-embedding-ada-002 model uses the cl100k_base tokenizer and can receive up to 8191 tokens.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I even considered using &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Functions&lt;/a&gt; as the hosting platform for my &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; query code, but it would have been expensive. That’s because &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Functions&lt;/a&gt; bills based on your function’s runtime as well as the number of invocations and provisioned resources. As the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API can be slow to respond depending on the complexity of your query, you could end up paying a lot for the time your function spends waiting for the API response.&lt;/p&gt;

&lt;p&gt;One way I found to reduce &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; costs is to use a web service on &lt;a href="https://render.com/"&gt;Render.com&lt;/a&gt;, where I created a &lt;a href="https://flask.palletsprojects.com/"&gt;Flask&lt;/a&gt; application that uses threads so it does not wait for the GPT API to respond before replying to the &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt;. &lt;a href="https://render.com/"&gt;Render.com&lt;/a&gt; is a platform that lets you host web applications simply and inexpensively, with plans starting at $7 per month. The idea is to create an intermediate layer between the &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; and &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt;: it receives the question from the &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; and sends an immediate response saying the answer is being generated. The Flask application then creates a thread to send the question to the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API and, once it gets the answer, updates the question in &lt;a href="https://firebase.google.com/docs/firestore?hl=pt-br"&gt;Firestore&lt;/a&gt;.&lt;/p&gt;
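The respond-then-process pattern described above can be sketched with nothing but the standard library (the handler and callback names are illustrative, not the actual service code):

```python
import threading

def handle_question(question, ask_gpt, update_firestore):
    """Acknowledge immediately; fetch the ChatGPT answer and write it back
    to Firestore from a background thread so the caller never waits."""
    def work():
        update_firestore(question, ask_gpt(question))
    worker = threading.Thread(target=work)
    worker.start()
    return {"status": "generating"}, worker  # worker returned so callers can join
```

In the real Flask app the acknowledgement becomes the HTTP response, `ask_gpt` wraps the OpenAI API call, and `update_firestore` writes through the Firestore client.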

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wXB1BjnB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A1hy3pcbeYAbS2EzO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wXB1BjnB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A1hy3pcbeYAbS2EzO.png" alt="Infrastructure in version 2." width="634" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had thought about describing the project’s source code in this post, but for now I think it is worth reflecting on the infrastructure applied. Later I will create new posts detailing the Node.js project deployed to Google Cloud Functions and the Python (Flask) project hosted on Render.com.&lt;/p&gt;

&lt;p&gt;The web application for inserting and querying questions in Firestore is still being developed. I intend to publish it soon as well.&lt;/p&gt;

</description>
      <category>gpt</category>
      <category>firestore</category>
      <category>googlecloudfunction</category>
    </item>
    <item>
      <title>Trying not to go broke before getting rich while creating an AI services startup</title>
      <dc:creator>Raphael Araújo</dc:creator>
      <pubDate>Tue, 04 Apr 2023 20:17:18 +0000</pubDate>
      <link>https://dev.to/raphox/tentando-nao-ficar-pobre-antes-de-ficar-rico-criando-uma-startup-de-servicos-de-inteligencia-artificial-1mag</link>
      <guid>https://dev.to/raphox/tentando-nao-ficar-pobre-antes-de-ficar-rico-criando-uma-startup-de-servicos-de-inteligencia-artificial-1mag</guid>
      <description>&lt;p&gt;In this post, I’ll show how to use a vector database to lower GPT &lt;a href="https://openai.com/pricing"&gt;token costs&lt;/a&gt; in a question-and-answer application. The vector database I chose was &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, which lets you store and query high-dimensional vectors efficiently and at scale. The idea is to turn the questions and answers into vectors using a pre-trained natural language model, such as the &lt;a href="https://platform.openai.com/docs/models/embeddings"&gt;text-embedding-ada-002&lt;/a&gt; model, and then use &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; to find the answers most similar to users’ questions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Vector databases can provide better text query results than SQL databases because they use a mathematical representation of the data called a vector, which lets you measure the similarity between documents and queries using operations such as distance or angle. SQL databases, on the other hand, use a structured query language (SQL) that can be limited to searching for exact or fuzzy matches of words or phrases using the CONTAINS command. Also, SQL databases can require more resources and time than vector databases to index and search large amounts of text data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To manage the questions, I used &lt;a href="https://firebase.google.com/docs/firestore?hl=pt-br"&gt;Firebase Firestore&lt;/a&gt;, a cloud NoSQL database that offers real-time sync and offline support. Each question is stored in a document with a unique identifier and a field for the answer. &lt;a href="https://firebase.google.com/docs/firestore?hl=pt-br"&gt;Firestore&lt;/a&gt; also lets you create cloud functions triggered by database events, such as creating, updating, or deleting documents. I used a &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; that fires whenever a new question is registered in Firestore. This function is responsible for sending the question to the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API, a service that uses the &lt;a href="https://openai.com/pricing"&gt;GPT-3.5-turbo&lt;/a&gt; model to generate conversational responses. &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; returns the answer in text format, which is then converted into a vector using the same natural language model used for the questions. This vector is sent to &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, which stores it for future queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5zemHUtV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A6BRpmbl720xouv8qlbBj9Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5zemHUtV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A6BRpmbl720xouv8qlbBj9Q.png" alt="Infra estrutura na versão 1." width="619" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One advantage of using &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; is that it allows similarity queries using the question and answer vectors. So when a user asks a question, I don’t need to send it to &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; and spend GPT tokens. I can simply convert the question into a vector and send it to &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, which returns the identifiers of the most similar vectors. Based on the similarity score between the new question and questions asked before, I can adopt an existing answer as the new question’s answer. This significantly reduces GPT token costs, since I only need to use &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; to generate answers to questions that are new or very different from existing ones.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It can be quite a challenge to find the exact answer to the user’s question based on previously asked questions. Example: ‘How much does 1 kg of your product cost?’ versus ‘How much does 1 g of your product cost?’. The vectorized texts of the two questions will likely have a similarity score of almost 1.0, which can be a problem depending on your use case. I haven’t come up with anything concrete to solve this, but I believe there are ways around it by defining distinctions between words such as ‘kilogram’ and ‘gram’.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To compare the cost of &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; queries with the &lt;a href="https://openai.com/pricing"&gt;GPT-3.5-turbo&lt;/a&gt; model, I used the following values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The price of &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; is $0.096 / hour.&lt;/li&gt;
&lt;li&gt;The price of &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; is $0.002 / 1K gpt-3.5-turbo tokens.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Tokens&lt;/em&gt; are common strings of characters found in text. GPT processes text using tokens and understands the statistical relationships between them. Tokens can include spaces and even subwords. The maximum number of tokens GPT can receive as input depends on the model and tokenizer used. For example, the text-embedding-ada-002 model uses the cl100k_base tokenizer and can receive up to 8191 tokens.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I even considered using &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Functions&lt;/a&gt; as the hosting platform for my &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; query code, but it would have been expensive. That’s because &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Functions&lt;/a&gt; bills based on how long your function runs, as well as the number of invocations and provisioned resources. As the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API can be slow to respond depending on the complexity of your query, you could end up paying a lot for the time your function spends waiting for the API response.&lt;/p&gt;

&lt;p&gt;One way I found to reduce &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; costs is to use a web service on &lt;a href="https://render.com/"&gt;Render.com&lt;/a&gt;, where I created a &lt;a href="https://flask.palletsprojects.com/"&gt;Flask&lt;/a&gt; application that uses threads so it does not wait for the GPT API to respond before replying to the &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt;. &lt;a href="https://render.com/"&gt;Render.com&lt;/a&gt; is a platform that lets you host web applications simply and inexpensively, with plans starting at $7 per month. The idea is to create an intermediate layer between the &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; and &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt;: it receives the question from the &lt;a href="https://cloud.google.com/functions/docs/concepts/functions-and-firebase?hl=pt-br"&gt;Cloud Function&lt;/a&gt; and sends an immediate response saying the answer is being generated. The Flask application then creates a thread to send the question to the &lt;a href="https://platform.openai.com/docs/guides/chat"&gt;ChatGPT&lt;/a&gt; API and, once it gets the answer, updates the question in &lt;a href="https://firebase.google.com/docs/firestore?hl=pt-br"&gt;Firestore&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l_YQE0UA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ACYRWTNaKwipU3b2KEEo7fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l_YQE0UA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ACYRWTNaKwipU3b2KEEo7fw.png" alt="Infra estrutura na versão 2." width="634" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had thought about describing the project's source code in this post, but for now I think it is worth reflecting on the applied infrastructure. Later on I will create new posts detailing the NodeJS project that runs inside the Google Cloud Function and the Python (Flask) project hosted on Render.com.&lt;/p&gt;

&lt;p&gt;The web application responsible for inserting and querying questions in Firestore is still under development. I intend to publish it soon as well.&lt;/p&gt;

</description>
      <category>gpt</category>
      <category>firestore</category>
      <category>googlcloudfunction</category>
    </item>
    <item>
      <title>Consuming a Rails API with a NextJs client</title>
      <dc:creator>Raphael Araújo</dc:creator>
      <pubDate>Mon, 28 Nov 2022 13:08:20 +0000</pubDate>
      <link>https://dev.to/raphox/consuming-a-rails-api-with-a-nextjs-client-581e</link>
      <guid>https://dev.to/raphox/consuming-a-rails-api-with-a-nextjs-client-581e</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;tl;dr;&lt;/strong&gt; Previously, &lt;a href="https://dev.to/raphox/rails-7-hotwire-turbo-stimulus-modern-web-applications-4o7a"&gt;here&lt;/a&gt;, I wrote about how to build a modern web application using Rails as a full-stack framework, just writing HTML and Vanilla Javascript. Now we will go further and create a React client app using NextJs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The existing API has the following endpoints:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /api/kit/products(.:format)
POST /api/kit/products(.:format)
GET /api/kit/products/:id(.:format)
PATCH /api/kit/products/:id(.:format)
PUT /api/kit/products/:id(.:format)
DELETE /api/kit/products/:id(.:format)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In the new project, we have the same screen as before:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futt8wo7gjnvm9gp26juc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futt8wo7gjnvm9gp26juc.png" alt="In addition, the screen lists all products, a form filter, and a form to create and manage the products." width="700" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have a lot of customized components:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/components/
├── Form
│   ├── Actions.tsx
│   ├── Input.tsx
│   └── index.tsx
├── Layout
│   ├── Page.tsx
│   ├── Sidebar.tsx
│   └── index.tsx
├── Loading.tsx
├── LoadingOverlay.tsx
├── Notification.tsx
├── Products
│   ├── Form.tsx
│   └── Sidebar.tsx
└── SearchList
    ├── Form.tsx
    ├── ListItem.tsx
    └── index.tsx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;But our pages are simple and short. For example, take a look at the products page code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;h2&gt;
  
  
  Highlights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;NextJs&lt;/a&gt; is just a choice, not a requirement
&lt;/h3&gt;

&lt;p&gt;The API built with Ruby on Rails is completely independent of the client application developed with NextJs. You could use any RESTful client application to consume the existing API.&lt;br&gt;
In my &lt;a href="https://github.com/raphox/rails-7-fullstack/tree/nextjs/frontend" rel="noopener noreferrer"&gt;project&lt;/a&gt;, I am using the NextJs project as a subfolder of my Rails repository, but you could put it anywhere.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why &lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;NextJs&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;I have already worked with &lt;a href="https://reactrouter.com/" rel="noopener noreferrer"&gt;React Router&lt;/a&gt; and &lt;a href="https://reactnavigation.org/" rel="noopener noreferrer"&gt;React Navigation&lt;/a&gt;, but when I got to know &lt;strong&gt;&lt;a href="https://nextjs.org/docs/api-reference/next/router" rel="noopener noreferrer"&gt;Next/Router&lt;/a&gt;&lt;/strong&gt; and all its related features, such as &lt;strong&gt;&lt;a href="https://nextjs.org/docs/api-reference/next/link" rel="noopener noreferrer"&gt;Next/Link&lt;/a&gt;&lt;/strong&gt;, I loved it. We can use partial loading and caching. Get more info &lt;a href="https://nextjs.org/docs/api-routes/introduction" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Conventions
&lt;/h3&gt;

&lt;p&gt;You can create your own convention for your own projects. But, in my opinion, it is beneficial to use a convention that is popular and validated in production by many other developers. Like Ruby on Rails, NextJs gives you a directory structure, core resources (link, routes, image, etc.) and rich documentation.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://nextjs.org/docs/basic-features/data-fetching/get-server-side-props" rel="noopener noreferrer"&gt;SSR&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;After creating a few projects as SPAs, that approach no longer seems to me a good choice for big projects. So, for now, I am using &lt;a href="https://nextjs.org/docs/basic-features/data-fetching/get-server-side-props" rel="noopener noreferrer"&gt;SSR&lt;/a&gt; with NextJs. The main use of SSR is to improve SEO, but I also like this approach because it offers a better UX.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://tanstack.com/query/" rel="noopener noreferrer"&gt;React Query&lt;/a&gt; is the link between the Rails and NextJs
&lt;/h3&gt;

&lt;p&gt;Working together with &lt;a href="https://axios-http.com/" rel="noopener noreferrer"&gt;Axios&lt;/a&gt; (&lt;a href="https://github.com/raphox/rails-7-fullstack/blob/nextjs/frontend/pages/kit/products/services.ts#L9-L41" rel="noopener noreferrer"&gt;my code&lt;/a&gt;), it is a great option for consuming the REST API (and &lt;a href="https://tanstack.com/query/v4/docs/graphql" rel="noopener noreferrer"&gt;GraphQL&lt;/a&gt;). You get access to isLoading, isError, data, error, and others. It is a very easy way to load data and handle errors.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
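A rough sketch of the service layer that such a setup wraps. The names are illustrative, and plain fetch stands in for Axios so the sketch stays dependency-free; the real service functions live in the services.ts file linked above.

```typescript
// Hypothetical service-layer sketch; the real project wires this through Axios.
interface ProductFilters {
  name?: string;
  page?: number;
}

// Pure helper: turns filters into the URL a query would be keyed on.
export function buildProductsUrl(base: string, filters: ProductFilters): string {
  const params = new URLSearchParams();
  if (filters.name) {
    params.set("name", filters.name);
  }
  if (filters.page) {
    params.set("page", String(filters.page));
  }
  const query = params.toString();
  return query ? base + "?" + query : base;
}

// Fetcher that a useQuery(["products", filters], ...) call would wrap.
export async function fetchProducts(filters: ProductFilters) {
  const url = buildProductsUrl("/api/kit/products", filters);
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error("Request failed: " + response.status);
  }
  return response.json();
}
```

React Query then caches the result under the query key and exposes the isLoading/isError/data/error flags mentioned above.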


&lt;h3&gt;
  
  
  &lt;a href="https://reactjs.org/docs/context.html" rel="noopener noreferrer"&gt;React Context&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;I don’t like &lt;a href="https://react-redux.js.org/" rel="noopener noreferrer"&gt;https://react-redux.js.org/&lt;/a&gt;; it brings a complexity to the project that I don’t think is a good thing. But we can use React Context and a React reducer to offer a store and events to manage the application's state. You can see it in the project here &lt;a href="https://github.com/raphox/rails-7-fullstack/tree/nextjs/frontend/src/contexts/products" rel="noopener noreferrer"&gt;https://github.com/raphox/rails-7-fullstack/tree/nextjs/frontend/src/contexts/products&lt;/a&gt;.&lt;br&gt;
In my code, I am using a context to share state between different components on the same screen. The Form component is able to update the product item in the sidebar list.&lt;br&gt;
&lt;strong&gt;React Query&lt;/strong&gt; also uses a context, and we can exchange state between them. After updating a product, we can trigger changes to the product list.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
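As an illustration of the context + reducer pairing, here is a hypothetical reducer for such a store; the action names and shapes are made up for the sketch, and the real store lives in src/contexts/products of the project.

```typescript
// Hypothetical sketch of the products store; the real one lives in
// src/contexts/products of the project linked above.
interface Product {
  id: number;
  name: string;
}

interface State {
  products: Product[];
  selectedId: number | null;
}

type Action =
  | { type: "select"; id: number }
  | { type: "upsert"; product: Product };

// Pure reducer: passed to React's useReducer and shared via a context
// provider, so the Form component can update the item in the sidebar list.
export function productsReducer(state: State, action: Action): State {
  switch (action.type) {
    case "select":
      return { ...state, selectedId: action.id };
    case "upsert": {
      const exists = state.products.some((p) => p.id === action.product.id);
      const products = exists
        ? state.products.map((p) =>
            p.id === action.product.id ? action.product : p
          )
        : state.products.concat(action.product);
      return { ...state, products };
    }
    default:
      return state;
  }
}
```

Because the reducer is a pure function, both the Form and the sidebar list can dispatch actions through the shared context without knowing about each other.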


&lt;h3&gt;
  
  
  React components with namespace
&lt;/h3&gt;

&lt;p&gt;This is something I learned recently. I used it in my project to offer a way to override the children of some components and avoid passing many properties through the parent. Like in the following code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
In the previous code, we have the namespace &lt;em&gt;SidebarPrimitive&lt;/em&gt; with nested &lt;em&gt;Root&lt;/em&gt;, &lt;em&gt;Header&lt;/em&gt;, and &lt;em&gt;List&lt;/em&gt; components. I am using the &lt;em&gt;Root&lt;/em&gt; component to wrap the content and passing the &lt;em&gt;props&lt;/em&gt; to the respective child.
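The shape of the pattern can be shown framework-free. This sketch uses illustrative names and plain functions instead of React components, only to make the namespace idea concrete.

```typescript
// Framework-free sketch of the namespace idea (illustrative names, no React):
// each piece is a small component, then all are attached under one name so
// consumers write SidebarPrimitive.Root, SidebarPrimitive.Header, and so on.
function Root(children: string[]): string {
  return "[sidebar " + children.join(" ") + "]";
}

function Header(title: string): string {
  return "[header " + title + "]";
}

function List(items: string[]): string {
  return "[list " + items.join(",") + "]";
}

export const SidebarPrimitive = { Root, Header, List };

// Usage: the Root wraps the content, and each nested piece stays overridable
// by the caller without threading every prop through the parent.
export const rendered = SidebarPrimitive.Root([
  SidebarPrimitive.Header("Products"),
  SidebarPrimitive.List(["Kit A", "Kit B"]),
]);
```

In React the same layout is achieved by exporting an object of components and letting callers compose Root, Header, and List themselves.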

&lt;h3&gt;
  
  
  &lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;Tailwind&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;There is some controversy around it but, trust me, create a project using it and see for yourself whether you hate it or love it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependencies:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;https://tailwindcss.com/&lt;/a&gt; “Tailwind CSS works by scanning all of your HTML files, JavaScript components, and any other templates for class names, generating the corresponding styles and then writing them to a static CSS file.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.radix-ui.com/docs/primitives/utilities/slot" rel="noopener noreferrer"&gt;https://www.radix-ui.com/docs/primitives/utilities/slot&lt;/a&gt; “Merges its props onto its immediate child.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://tanstack.com/query/" rel="noopener noreferrer"&gt;https://tanstack.com/query/&lt;/a&gt; “Powerful asynchronous state management for TS/JS, React, Solid, Vue and Svelte”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://axios-http.com/docs/intro" rel="noopener noreferrer"&gt;https://axios-http.com&lt;/a&gt;/ “Axios is a simple promise based HTTP client for the browser and node.js. Axios provides a simple to use library in a small package with a very extensible interface.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/lukeed/clsx" rel="noopener noreferrer"&gt;https://github.com/lukeed/clsx&lt;/a&gt; “A tiny (228B) utility for constructing &lt;code&gt;className&lt;/code&gt; strings conditionally.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://lodash.com/" rel="noopener noreferrer"&gt;https://lodash.com/&lt;/a&gt; “A modern JavaScript utility library delivering modularity, performance &amp;amp; extras.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://react-hook-form.com/" rel="noopener noreferrer"&gt;https://react-hook-form.com/&lt;/a&gt; “Performant, flexible and extensible forms with easy-to-use validation”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/jquense/yup" rel="noopener noreferrer"&gt;https://github.com/jquense/yup&lt;/a&gt; “Yup is a schema builder for runtime value parsing and validation. Define a schema, transform a value to match, assert the shape of an existing value, or both.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/raphox/rails-7-fullstack/tree/nextjs/frontend" rel="noopener noreferrer"&gt;rails-7-fullstack/frontend at nextjs · raphox/rails-7-fullstack&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  External references:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.typescriptlang.org/" rel="noopener noreferrer"&gt;https://www.typescriptlang.org/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.radix-ui.com/docs/primitives/utilities/slot" rel="noopener noreferrer"&gt;https://www.radix-ui.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://reactjs.org/docs/context.html" rel="noopener noreferrer"&gt;https://reactjs.org/docs/context.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://medium.com/@kunukn_95852/react-components-with-namespace-f3d169feaf91" rel="noopener noreferrer"&gt;https://medium.com/@kunukn_95852/react-components-with-namespace-f3d169feaf91&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>testing</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Rails 7 + Hotwire (Turbo + Stimulus) = Modern web applications</title>
      <dc:creator>Raphael Araújo</dc:creator>
      <pubDate>Thu, 24 Nov 2022 12:20:29 +0000</pubDate>
      <link>https://dev.to/raphox/rails-7-hotwire-turbo-stimulus-modern-web-applications-4o7a</link>
      <guid>https://dev.to/raphox/rails-7-hotwire-turbo-stimulus-modern-web-applications-4o7a</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2Am40NSs4t3k3-8mZ7KwYr3Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2Am40NSs4t3k3-8mZ7KwYr3Q.png" alt="The turbo frames with their respective references."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;tl;dr;&lt;/strong&gt; Let’s understand how we can migrate a scaffold generated by the command ‘bin/rails g scaffold Kit::Product name:string’ to an improved and performant screen, adding partial loading and DOM manipulation while writing very little Vanilla Javascript (“almost” without writing Javascript).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Scaffold (Rails 7)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Getting Started with Rails (&lt;a href="https://guides.rubyonrails.org/getting_started.html#creating-a-new-rails-project-installing-rails" rel="noopener noreferrer"&gt;https://guides.rubyonrails.org/getting_started.html#creating-a-new-rails-project-installing-rails&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generating the scaffold (&lt;a href="https://guides.rubyonrails.org/command_line.html#bin-rails-generate" rel="noopener noreferrer"&gt;https://guides.rubyonrails.org/command_line.html#bin-rails-generate&lt;/a&gt;):&lt;br&gt;
Using just one command you will generate all of these files: Model, Migration, Controllers, Routes, Tests, Helpers, and JSON.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    $ **bin/rails generate scaffold HighScore game:string score:integer**
        invoke  active_record
        create    db/migrate/20190416145729_create_high_scores.rb
        create    app/models/high_score.rb
        invoke    test_unit
        create      test/models/high_score_test.rb
        create      test/fixtures/high_scores.yml
        invoke  resource_route
         route    resources :high_scores
        invoke  scaffold_controller
        create    app/controllers/high_scores_controller.rb
        invoke    erb
        create      app/views/high_scores
        create      app/views/high_scores/index.html.erb
        create      app/views/high_scores/edit.html.erb
        create      app/views/high_scores/show.html.erb
        create      app/views/high_scores/new.html.erb
        create      app/views/high_scores/_form.html.erb
        invoke    test_unit
        create      test/controllers/high_scores_controller_test.rb
        create      test/system/high_scores_test.rb
        invoke    helper
        create      app/helpers/high_scores_helper.rb
        invoke      test_unit
        invoke    jbuilder
        create      app/views/high_scores/index.json.jbuilder
        create      app/views/high_scores/show.json.jbuilder
        create      app/views/high_scores/_high_score.json.jbuilder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After installing Rails, create your project, and generate your scaffold. Let's try to improve the default interface.&lt;/p&gt;

&lt;p&gt;I am using the &lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;Tailwind&lt;/a&gt; to insert styles on the interfaces and the &lt;a href="https://viewcomponent.org/" rel="noopener noreferrer"&gt;ViewComponent&lt;/a&gt; for “creating reusable, testable &amp;amp; encapsulated view components, built to integrate seamlessly with Ruby on Rails”.&lt;/p&gt;

&lt;p&gt;You can see the resulting code at &lt;a href="https://github.com/raphox/rails-7-fullstack" rel="noopener noreferrer"&gt;https://github.com/raphox/rails-7-fullstack&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Turbo (heart of Hotwire)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Hotwire is an alternative approach to building modern web applications without using much JavaScript by sending HTML instead of JSON over the wire.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can see the application’s single screen in the following image. Yes, this project has just one screen. Let’s keep it simple and focus on the most important things.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AAd3KVV2Q7Nuorl9o3NCq9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AAd3KVV2Q7Nuorl9o3NCq9w.png" alt="The screen has a list of all products, a form filter, and a form to create and manage the products."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this screen, we have two dynamic areas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;List of products&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Form to create and edit the selected product&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2Am40NSs4t3k3-8mZ7KwYr3Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2Am40NSs4t3k3-8mZ7KwYr3Q.png" alt="The turbo frames with their respective references."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created a PR so you can see the exact part of the code that was changed or inserted to use &lt;strong&gt;Hotwire&lt;/strong&gt; in your project. You can see this PR on &lt;a href="https://github.com/raphox/rails-7-fullstack/pull/1" rel="noopener noreferrer"&gt;https://github.com/raphox/rails-7-fullstack/pull/1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The system’s single screen has a few triggers that call actions or make requests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Plus button to access the ‘kit/products/new’ page&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search input field&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Products links to access the product’s edit page&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Buttons to update, remove, or create products based on the form’s field data&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2A6rm4NVZ3vYM7NxbFPlvdVg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2A6rm4NVZ3vYM7NxbFPlvdVg.png" alt="The elements that are having their default behavior changed to interact with the turbo frames."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the &lt;strong&gt;Turbo&lt;/strong&gt;, we can override the &lt;strong&gt;click&lt;/strong&gt; event of the plus button to request the HTML of a new product form from the backend and put it in the ‘product_form’ area.&lt;/p&gt;

&lt;p&gt;You don’t need to write Javascript to override the events on your elements; just add a ‘turbo_frame’ attribute to your HTML tag, like here:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
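For reference, such a link looks roughly like this in ERB; the route, frame id, and partial name here are illustrative, not copied from the project:

```erb
&lt;%# The link's response is rendered into the frame below instead of replacing the whole page. %&gt;
&lt;%= link_to "New product", new_kit_product_path, data: { turbo_frame: "kit_product_form" } %&gt;

&lt;%= turbo_frame_tag "kit_product_form" do %&gt;
  &lt;%= render "form", product: @product %&gt;
&lt;% end %&gt;
```

Turbo matches the ‘data-turbo-frame’ attribute on the link to the frame’s id and swaps only that fragment of the DOM.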

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AJGjvmgpp_TehMDU0LpUMxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AJGjvmgpp_TehMDU0LpUMxg.png" alt="The link to call route ‘new_kit_product’ is requesting the URL but its result is being put on the ‘turbo_frame#kit_product_form’."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can use the same approach as the plus button on the product links and prevent reloading the entire screen’s HTML and assets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2Aeab3jmwE8ARV6ZP7WHnzng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2Aeab3jmwE8ARV6ZP7WHnzng.png" alt="The links to call route ‘edit_kit_product’ is requesting the URL but its result is being put on the ‘turbo_frame#kit_product_form’."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need a little Javascript to add a ‘change’ event that requests a filtered list of products from the backend.&lt;/p&gt;

&lt;p&gt;In my code, I insert the ‘turbo_frame’ attribute into the form so that submitting it makes the request through Turbo. In the next topic, we will see how to use Stimulus and Javascript controllers to handle the input’s ‘change’ event.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AIOLs08X2xm8JtryG_xHULQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AIOLs08X2xm8JtryG_xHULQ.png" alt="The form to call route ‘kit_products’ is requesting the URL but its result is being put on the ‘turbo_frame#list_kit_products’."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to submit the form and refresh the list of products. But we can do better: we know the only item that changed in the list is the created/updated product. So we can update the list with just this product, instead of refreshing it and querying the whole list of products from the database. To do this we would need a lot of Javascript code, or just a few actions from Stimulus.&lt;/p&gt;

&lt;p&gt;You can read about the Javascript controller in the next topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AJJEGWr0-gPj-_hOJPujh8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2260%2F1%2AJJEGWr0-gPj-_hOJPujh8w.png" alt="The form to call route ‘kit_product’ is requesting the URL but its result is being put on the ‘turbo_frame#kit_product_form’."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stimulus (modest JavaScript framework)
&lt;/h2&gt;

&lt;p&gt;You won’t see any visual difference. But the code savings and ease of integration will help you a lot when adding new features.&lt;/p&gt;

&lt;p&gt;The objective here is to add an event to the search input so that a request is made when the user changes the input’s value. When that happens, the system requests all products filtered by the ‘name’ column, and the result is inserted into the list below it.&lt;/p&gt;

&lt;p&gt;To implement &lt;strong&gt;Stimulus&lt;/strong&gt; in our current project, we need to follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Add the attribute ‘&lt;strong&gt;data-controller=”search-list”&lt;/strong&gt;’ to the wrapper with the form and the list of products (app/components/search_list_component.html.erb line 1).&lt;br&gt;
 — This attribute is responsible for relating the controller code to the DIV element. Get more info here &lt;a href="https://stimulus.hotwired.dev/" rel="noopener noreferrer"&gt;https://stimulus.hotwired.dev/&lt;/a&gt;.&lt;br&gt;
 — You can use the command ‘./bin/rails generate stimulus controllerName’ to generate the Javascript file with the class definition and import it into ‘app/javascript/controllers/index.js’ automatically.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
&lt;strong&gt;2.&lt;/strong&gt; Insert the attribute ‘&lt;strong&gt;data-search_list_target=”inputSearch”&lt;/strong&gt;’ to the form search field (app/components/search_list_form_component.html.erb line 2).

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Add the action (&lt;strong&gt;click-&amp;gt;search-list#selectItem&lt;/strong&gt;) to activate the currently selected product (app/components/search_list_item_component.rb line 20).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; Add the attribute ‘&lt;strong&gt;search_list_target=”item”&lt;/strong&gt;’ to each item of the product list (app/components/search_list_item_component.rb line 21).&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
I am using &lt;a href="https://viewcomponent.org/" rel="noopener noreferrer"&gt;https://viewcomponent.org/&lt;/a&gt; to define the view components instead of using a view or partial. Pay attention to this part: &lt;a href="https://viewcomponent.org/#performance" rel="noopener noreferrer"&gt;https://viewcomponent.org/#performance&lt;/a&gt;.

&lt;p&gt;I really liked having this structure in my project:&lt;br&gt;
&lt;a href="https://github.com/raphox/rails-7-fullstack/blob/11a479dafa5c84a1e769721d57792f39914a1204/app/views/kit/products/_list.html.erb#L23-L26" rel="noopener noreferrer"&gt;&lt;strong&gt;rails-7-fullstack/_list.html.erb at 11a479dafa5c84a1e769721d57792f39914a1204 ·…&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach is very similar to React components and those of other Javascript frameworks.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
I created a PR where you can see the exact part of the code I changed or inserted to use &lt;strong&gt;Stimulus&lt;/strong&gt; in my project. You can see this PR on &lt;a href="https://github.com/raphox/rails-7-fullstack/pull/2" rel="noopener noreferrer"&gt;https://github.com/raphox/rails-7-fullstack/pull/2&lt;/a&gt;.

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>hotwire</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
