<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oyemade Oyemaja</title>
    <description>The latest articles on DEV Community by Oyemade Oyemaja (@oyemade).</description>
    <link>https://dev.to/oyemade</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F631156%2F7c7fdb68-3bd7-4979-bdd4-4607117c8a65.jpg</url>
      <title>DEV Community: Oyemade Oyemaja</title>
      <link>https://dev.to/oyemade</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oyemade"/>
    <language>en</language>
    <item>
      <title>Getting started with Google's Multi-modal "Gemini Pro Vision" LLM with Javascript for Beginners</title>
      <dc:creator>Oyemade Oyemaja</dc:creator>
      <pubDate>Fri, 29 Dec 2023 16:02:17 +0000</pubDate>
      <link>https://dev.to/oyemade/getting-started-with-googles-multi-modal-gemini-pro-vision-llm-with-javascript-for-beginners-4144</link>
      <guid>https://dev.to/oyemade/getting-started-with-googles-multi-modal-gemini-pro-vision-llm-with-javascript-for-beginners-4144</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/oyemade/building-ai-powered-applications-w-javascript-using-langchain-js-for-beginners-388b"&gt;Building AI-powered Applications w/ Javascript using Langchain JS for Beginners&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/oyemade/getting-started-w-googles-gemini-pro-llm-using-langchain-js-4o1"&gt;Getting started w/ Google's Gemini Pro LLM using Langchain JS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the previous blog posts ⬆️⬆️⬆️, we discussed building AI-powered apps with different LLMs like OpenAI's GPT and Google's Gemini Pro. One thing they all have in common is that they are text-based models, which means they are designed to understand, interpret, and generate text.&lt;/p&gt;

&lt;p&gt;How, then, do we create AI-powered apps that need to interact with images? This is where multi-modal LLMs come in: they are LLMs that can understand, interpret, and generate not just text, but also other modes of data such as images, audio, and video.&lt;/p&gt;

&lt;p&gt;Google also released a multimodal version of their new Gemini Pro model called &lt;a href="https://ai.google.dev/models/gemini#model_variations"&gt;Gemini Pro Vision&lt;/a&gt; that enables the LLM to interact with images.&lt;/p&gt;

&lt;p&gt;In this blog post, we will use the newly released Gemini Pro Vision LLM to create a &lt;code&gt;nutrition-fact explainer&lt;/code&gt; app that takes an image of the nutrition facts label on an edible product and returns some insights about it.&lt;/p&gt;

&lt;p&gt;A few requirements before we begin: you need Node.js installed on your machine; if you do not have it installed, you can download it &lt;a href="https://nodejs.org/en/download/current"&gt;here&lt;/a&gt;. We also need an API key for the &lt;a href="https://js.langchain.com/docs/integrations/platforms/google"&gt;Gemini LLM&lt;/a&gt;; &lt;a href="https://ai.google.dev/tutorials/setup"&gt;here&lt;/a&gt; is a guide for creating one.&lt;/p&gt;

&lt;p&gt;To get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to your preferred folder, open it in your code editor, and run &lt;code&gt;npm init -y&lt;/code&gt; in the terminal. A &lt;code&gt;package.json&lt;/code&gt; file will be generated in that folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit the &lt;code&gt;package.json&lt;/code&gt; file by adding a &lt;code&gt;type&lt;/code&gt; property with the value &lt;code&gt;module&lt;/code&gt; to the JSON object: &lt;code&gt;"type": "module",&lt;/code&gt;.&lt;br&gt;
This indicates that the files in the project should be treated as ECMAScript modules (ESM) by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;npm i @google/generative-ai dotenv&lt;/code&gt; in the terminal to install the required dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create two (2) new files in the current folder: one called &lt;code&gt;index.js&lt;/code&gt;, which will house all our code, and another called &lt;code&gt;.env&lt;/code&gt; to store our API key. We can also copy the image files to be analyzed into the folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In our &lt;code&gt;.env&lt;/code&gt; file, we need to add our API key. You can do this by typing &lt;code&gt;GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY_GOES_HERE&lt;/code&gt; exactly as shown, replacing &lt;code&gt;YOUR_GOOGLE_API_KEY_GOES_HERE&lt;/code&gt; with your actual Gemini API key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In our &lt;code&gt;index.js&lt;/code&gt; file we will import the following dependencies:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This imports the new Gemini LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;GoogleGenerativeAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@google/generative-ai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This imports fs from node.js, which we &lt;/span&gt;
&lt;span class="c1"&gt;// will use to read the image file&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This helps connect to our .env file&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;dotenv&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;dotenv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
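&lt;p&gt;As an aside, what &lt;code&gt;dotenv.config()&lt;/code&gt; does is conceptually simple: it reads the &lt;code&gt;.env&lt;/code&gt; file, splits it into &lt;code&gt;KEY=VALUE&lt;/code&gt; lines, and copies each pair onto &lt;code&gt;process.env&lt;/code&gt;. The sketch below illustrates the idea with a hypothetical &lt;code&gt;parseEnv&lt;/code&gt; helper; the real library also handles quoting, comments, and other edge cases.&lt;/p&gt;

```javascript
// Illustrative sketch only: parseEnv is a hypothetical helper, not
// part of the dotenv package. It shows the core idea behind
// dotenv.config(): turn KEY=VALUE lines into key/value pairs.
function parseEnv(contents) {
  const result = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    // Skip blank lines and comments.
    if (trimmed === "" || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return result;
}

const parsed = parseEnv("# our key\nGOOGLE_API_KEY=abc123\n");
console.log(parsed.GOOGLE_API_KEY); // abc123
```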





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Task: Analyze images of nutritional facts from various products, which feature diverse designs and include detailed nutritional information typically found on food packaging.

Step 1: Image Recognition and Verification

Confirm if each image is a nutritional facts label.
If the image is not a nutritional facts label, return: "Image not recognized as a nutritional facts label."
Step 2: Data Extraction and Nutritional Analysis

Extract and list the key nutritional contents from the label, such as calories, fats, carbohydrates, proteins, vitamins, minerals and other nutritional details.
Step 3: Reporting in a Strict Two-Sentence Format

First Sentence (Nutritional Content Listing): Clearly list the main nutritional contents. Example: "This product contains 200 calories, 10g of fat, 5g of sugar, and 10g of protein per serving."
Second Sentence (Diet-Related Insight): Provide a brief, diet-related insight based on the nutritional content. Example: "Its high protein and low sugar content make it a suitable option for muscle-building diets and those managing blood sugar levels."
Note: The response must strictly adhere to the two-sentence format, focusing first on listing the nutritional contents, followed by a concise dietary insight.
`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Above we created a template variable that &lt;/span&gt;
&lt;span class="c1"&gt;// contains our detailed instructions for the LLM&lt;/span&gt;

&lt;span class="c1"&gt;// Below we create a new instance of the GoogleGenerativeAI&lt;/span&gt;
&lt;span class="c1"&gt;// class and pass in our API key as an argument&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;genAI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoogleGenerativeAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GOOGLE_API_KEY&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This function takes a path to a file and a mime type&lt;/span&gt;
&lt;span class="c1"&gt;// and returns a generative part (specific object structure) that &lt;/span&gt;
&lt;span class="c1"&gt;// can be passed to the generative model&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;fileToGenerativePart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mimeType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;inlineData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;base64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nx"&gt;mimeType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
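&lt;p&gt;The heavy lifting in &lt;code&gt;fileToGenerativePart&lt;/code&gt; is just base64-encoding the file's bytes into the &lt;code&gt;inlineData&lt;/code&gt; shape the model expects. That round trip can be sanity-checked with an in-memory &lt;code&gt;Buffer&lt;/code&gt; standing in for a real image file, before any API call is made:&lt;/p&gt;

```javascript
// Sketch: an in-memory Buffer stands in for the bytes that
// fs.readFileSync would return for a real image file.
// Buffer is a Node.js global, so no import is needed.
const fakeImageBytes = Buffer.from("not really a JPEG");

// This mirrors the object shape fileToGenerativePart returns.
const part = {
  inlineData: {
    data: fakeImageBytes.toString("base64"),
    mimeType: "image/jpeg",
  },
};

// Decoding the payload should reproduce the original bytes exactly.
const decoded = Buffer.from(part.inlineData.data, "base64").toString();
console.log(decoded); // not really a JPEG
```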





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// We create the model variable by calling the getGenerativeModel method&lt;/span&gt;
&lt;span class="c1"&gt;// on our genAI instance and passing in an object with the model name&lt;/span&gt;
&lt;span class="c1"&gt;// as an argument. Since we want to interact with images, we use the&lt;/span&gt;
&lt;span class="c1"&gt;// gemini-pro-vision model.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;genAI&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getGenerativeModel&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-pro-vision&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// We create our image variable by calling the fileToGenerativePart function&lt;/span&gt;
&lt;span class="c1"&gt;// and passing in the path to our image file and the mime type of the image.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;fileToGenerativePart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./pepsi.jpeg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;image/jpeg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)];&lt;/span&gt;

&lt;span class="c1"&gt;// We call the generateContent method on our model and pass in our template&lt;/span&gt;
&lt;span class="c1"&gt;// and image variables. This returns a promise, which we await and assign&lt;/span&gt;
&lt;span class="c1"&gt;// to the result variable. We then await the response property of the result&lt;/span&gt;
&lt;span class="c1"&gt;// variable and assign it to the response variable. Finally, we await the text&lt;/span&gt;
&lt;span class="c1"&gt;// method of the response variable and assign it to the text variable.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateContent&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Log the text variable to the console.&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
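&lt;p&gt;One practical note: calls to a hosted LLM can fail transiently (rate limits, network blips), so it can help to re-attempt them before giving up. Below, &lt;code&gt;withRetries&lt;/code&gt; is a hypothetical helper, not part of the &lt;code&gt;@google/generative-ai&lt;/code&gt; SDK, and the stub stands in for the real &lt;code&gt;model.generateContent&lt;/code&gt; call:&lt;/p&gt;

```javascript
// withRetries is a hypothetical helper (not part of the SDK) that
// re-attempts an async call a few times before giving up.
async function withRetries(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i !== attempts; i += 1) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// A stub that fails twice before succeeding, standing in for
// model.generateContent([template, ...image]).
let calls = 0;
const flakyModelCall = async () => {
  calls += 1;
  if (calls !== 3) throw new Error("503 Service Unavailable");
  return { response: { text: () => "analysis result" } };
};

withRetries(flakyModelCall).then((result) => {
  console.log(result.response.text()); // "analysis result", after two retries
});
```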



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Now we will run &lt;code&gt;node index.js&lt;/code&gt; in our terminal. We should then see a result similar to this in our console:&lt;br&gt;
&lt;code&gt;This product contains 150 calories, 0g of fat, 42g of carbohydrates, 41g of sugar, and 0g of protein per 355ml serving.&lt;br&gt;
Given the high sugar content, moderate consumption is advisable for those watching their sugar intake.&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is the image that was analyzed &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqceuq50q8sfk7xdxbj9c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqceuq50q8sfk7xdxbj9c.jpg" alt="Pepsi Nutrition Fact" width="744" height="1029"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind that each time you run the model, the results might vary slightly. However, the overall meaning and context should remain consistent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Oyemade/ai-nutrition-fact-explainer-using-gemini-pro-vision-llm"&gt;Here&lt;/a&gt; is a link to the GitHub repo&lt;/p&gt;

&lt;p&gt;Don't hesitate to ask any questions you have in the comments. I'll make sure to respond as quickly as possible.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>ai</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Getting started w/ Google's Gemini Pro LLM using Langchain JS</title>
      <dc:creator>Oyemade Oyemaja</dc:creator>
      <pubDate>Sun, 17 Dec 2023 02:10:24 +0000</pubDate>
      <link>https://dev.to/oyemade/getting-started-w-googles-gemini-pro-llm-using-langchain-js-4o1</link>
      <guid>https://dev.to/oyemade/getting-started-w-googles-gemini-pro-llm-using-langchain-js-4o1</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/oyemade/building-ai-powered-applications-w-javascript-using-langchain-js-for-beginners-388b"&gt;Building AI-powered Applications w/ Javascript using Langchain JS for Beginners&lt;/a&gt;&lt;br&gt;
In the previous blog post ⬆️⬆️⬆️, we talked about getting started with AI-powered apps with Langchain JS, we then went on to build an AI-powered &lt;code&gt;emoji explainer&lt;/code&gt; app that uses the  OpenAI LLM to power it.&lt;/p&gt;

&lt;p&gt;In this blog post, we will use the newly released &lt;code&gt;Gemini Pro&lt;/code&gt; LLM to create the &lt;code&gt;emoji explainer&lt;/code&gt; app. More details about the Gemini AI model can be found &lt;a href="https://blog.google/technology/ai/google-gemini-ai/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A few requirements before we begin: you need Node.js installed on your machine; if you do not have it installed, you can download it &lt;a href="https://nodejs.org/en/download/current"&gt;here&lt;/a&gt;. We also need an API key for the &lt;a href="https://js.langchain.com/docs/integrations/platforms/google"&gt;Gemini LLM&lt;/a&gt;; &lt;a href="https://ai.google.dev/tutorials/setup"&gt;here&lt;/a&gt; is a guide for creating one.&lt;/p&gt;

&lt;p&gt;To get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to your preferred folder, open it in your code editor, and run &lt;code&gt;npm init -y&lt;/code&gt; in the terminal. A &lt;code&gt;package.json&lt;/code&gt; file will be generated in that folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit the &lt;code&gt;package.json&lt;/code&gt; file by adding a &lt;code&gt;type&lt;/code&gt; property with the value &lt;code&gt;module&lt;/code&gt; to the JSON object: &lt;code&gt;"type": "module",&lt;/code&gt;.&lt;br&gt;
This indicates that the files in the project should be treated as ECMAScript modules (ESM) by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;npm i langchain @langchain/google-genai dotenv&lt;/code&gt; in the terminal to install the required dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create two (2) new files in the current folder: one called &lt;code&gt;index.js&lt;/code&gt;, which will house all our code, and another called &lt;code&gt;.env&lt;/code&gt; to store our API key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In our &lt;code&gt;.env&lt;/code&gt; file, we need to add our API key. You can do this by typing &lt;code&gt;GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY_GOES_HERE&lt;/code&gt; exactly as shown, replacing &lt;code&gt;YOUR_GOOGLE_API_KEY_GOES_HERE&lt;/code&gt; with your actual Gemini API key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In our &lt;code&gt;index.js&lt;/code&gt; file we will import the following dependencies:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This imports the new Gemini LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ChatGoogleGenerativeAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@langchain/google-genai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This imports the mechanism that helps create the messages&lt;/span&gt;
&lt;span class="c1"&gt;// called `prompts` we send to the LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PromptTemplate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;langchain/prompts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This imports the tool called `chains` that helps combine &lt;/span&gt;
&lt;span class="c1"&gt;// the model and prompts so we can communicate with the LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LLMChain&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;langchain/chains&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This helps connect to our .env file&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;dotenv&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;dotenv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Interpret the emotions conveyed by the following emoji(s): {emojis}. Consider each emoji individually and in combination with others in the series. Apply your knowledge of common emoji usage, cultural contexts, and emotional expressions to provide an insightful interpretation. When analyzing multiple emojis, consider the sequence and combination of the emojis to understand any nuanced emotional narrative or sentiment they may collectively express. Additionally, take into account the following considerations:

Emoji Specificity: Identify each emoji and any specific emotions or ideas it typically represents.
Cultural Context: Acknowledge if certain emojis have unique meanings in specific cultural contexts.
Emotional Range: Recognize if emojis express a range of emotions, from positive to negative or neutral.
Sequential Interpretation: When multiple emojis are used, analyze if the sequence changes the overall emotional message.
Complementary and Contrasting Emotions: Note if emojis complement or contrast each other emotionally.
Common Usage Patterns: Reflect on how these emojis are commonly used in digital communication to infer the underlying emotions.
Sarcasm or Irony Detection: Be aware of the possibility of sarcasm or irony in the use of certain emojis.
Emoji Evolution: Consider any recent changes in how these emojis are used or perceived in digital communication.

Based on these guidelines, in one very short sentence, provide a short interpretation of the emotions conveyed by the inputted emoji(s). `&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promptTemplate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PromptTemplate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;inputVariables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emojis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Above we created a template variable that contains our&lt;/span&gt;
&lt;span class="c1"&gt;// detailed instructions for the LLM, we also added a &lt;/span&gt;
&lt;span class="c1"&gt;// variable {emojis} which would be replaced with the emojis&lt;/span&gt;
&lt;span class="c1"&gt;// passed in at runtime.&lt;/span&gt;
&lt;span class="c1"&gt;// We then create a prompt template from the template and&lt;/span&gt;
&lt;span class="c1"&gt;// input variable.&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
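&lt;p&gt;The core job of &lt;code&gt;PromptTemplate&lt;/code&gt; here, swapping &lt;code&gt;{emojis}&lt;/code&gt; for a runtime value, can be illustrated with a tiny stand-in. To be clear, this is not LangChain's implementation, just a sketch of the idea:&lt;/p&gt;

```javascript
// A hypothetical, minimal stand-in for what a prompt template does:
// replace every {name} placeholder with the matching runtime value,
// leaving unknown placeholders untouched.
function formatTemplate(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in values ? String(values[name]) : match
  );
}

const prompt = formatTemplate(
  "Interpret the emotions conveyed by the following emoji(s): {emojis}.",
  { emojis: "😂🤣" }
);
console.log(prompt);
// Interpret the emotions conveyed by the following emoji(s): 😂🤣.
```

The real &lt;code&gt;PromptTemplate&lt;/code&gt; adds validation against &lt;code&gt;inputVariables&lt;/code&gt; and integrates with chains, but the substitution step is the same idea.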



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// We create our model and pass it our model name &lt;/span&gt;
&lt;span class="c1"&gt;// which is `gemini-pro`. Another option is to pass&lt;/span&gt;
&lt;span class="c1"&gt;// `gemini-pro-vision` if we were also sending an image&lt;/span&gt;
&lt;span class="c1"&gt;// in our prompt&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;geminiModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ChatGoogleGenerativeAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;modelName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-pro&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// We then use a chain to combine our LLM with our &lt;/span&gt;
&lt;span class="c1"&gt;// prompt template&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llmChain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;geminiModel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;promptTemplate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// We then call the chain to communicate with the LLM&lt;/span&gt;
&lt;span class="c1"&gt;// and pass in the emojis we want to be explained.&lt;/span&gt;
&lt;span class="c1"&gt;// Note that the property name `emojis` below must match the&lt;/span&gt;
&lt;span class="c1"&gt;// variable name in the template earlier created.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llmChain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;emojis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;😂🤣&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Log result to the console&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Now we will run &lt;code&gt;node index.js&lt;/code&gt; in our terminal. We should then see a result similar to this in our console:
&lt;code&gt;😂🤣: Laughing out loud with tears of joy, expressing intense amusement or humor.&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind that each time you run the model, the results might vary slightly. However, the overall meaning and context should remain consistent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackblitz.com/edit/stackblitz-starters-ylnbgz?file=index.js"&gt;Here&lt;/a&gt; is a stackblitz with a demo:&lt;br&gt;
&lt;iframe src="https://stackblitz.com/edit/stackblitz-starters-ylnbgz?embed=1&amp;amp;file=index.js" width="100%" height="500"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Don't hesitate to ask any questions you have in the comments. I'll make sure to respond as quickly as possible.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Building AI-powered Applications w/ Javascript using Langchain JS for Beginners</title>
      <dc:creator>Oyemade Oyemaja</dc:creator>
      <pubDate>Sat, 16 Dec 2023 17:56:05 +0000</pubDate>
      <link>https://dev.to/oyemade/building-ai-powered-applications-w-javascript-using-langchain-js-for-beginners-388b</link>
      <guid>https://dev.to/oyemade/building-ai-powered-applications-w-javascript-using-langchain-js-for-beginners-388b</guid>
      <description>&lt;p&gt;AI is here to stay. As developers, it's our responsibility not only to build applications that improve people's quality of life, but also to do so as quickly as possible by taking advantage of the best technologies available, so we can do more with less.&lt;/p&gt;

&lt;p&gt;In this tutorial, I will introduce &lt;a href="https://js.langchain.com/docs/get_started/introduction"&gt;Langchain JS&lt;/a&gt;, a framework that simplifies building AI-powered applications by providing all the components needed to build them in one place.&lt;/p&gt;

&lt;p&gt;Langchain works with a myriad of LLMs including OpenAI's GPT, Google's Gemini, HuggingFace, and many more. A complete list can be found &lt;a href="https://js.langchain.com/docs/integrations/llms"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Quick start: we will build an &lt;code&gt;emoji explainer&lt;/code&gt; app that describes the emotions behind an emoji or series of emojis.&lt;br&gt;
For example, if the emojis &lt;code&gt;😂🤣&lt;/code&gt; are inputted, we should return something similar to &lt;code&gt;The emojis 😂🤣 convey a sense of uncontrollable laughter and joy, often used to express extreme amusement or hilarity&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A few requirements before we begin: you need Node.js installed on your machine; if you do not have it installed, you can download it &lt;a href="https://nodejs.org/en/download/current"&gt;here&lt;/a&gt;. We also need an API key from the LLM of our choice; in this case, we will use &lt;a href="https://js.langchain.com/docs/integrations/llms/openai"&gt;OpenAI&lt;/a&gt;. &lt;a href="https://platform.openai.com/docs/quickstart/account-setup"&gt;Here&lt;/a&gt; is a guide on creating an API key for OpenAI.&lt;/p&gt;

&lt;p&gt;To get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to your preferred folder, open it in your code editor, and run &lt;code&gt;npm init -y&lt;/code&gt; in the terminal. A &lt;code&gt;package.json&lt;/code&gt; file will be generated in that folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit the &lt;code&gt;package.json&lt;/code&gt; file by adding a &lt;code&gt;type&lt;/code&gt; property with the value &lt;code&gt;module&lt;/code&gt; to the JSON object: &lt;code&gt;"type": "module",&lt;/code&gt;.&lt;br&gt;
This indicates that the files in the project should be treated as ECMAScript modules (ESM) by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;npm i langchain dotenv&lt;/code&gt; in the terminal to install the required dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create two new files in the current folder: one called &lt;code&gt;index.js&lt;/code&gt;, which will hold all our code, and another called &lt;code&gt;.env&lt;/code&gt; to store our API key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In our &lt;code&gt;.env&lt;/code&gt; file, we need to add our API key. You can do this by typing &lt;code&gt;OPENAI_API_KEY=YOUR_OPEN_AI_KEY_GOES_HERE&lt;/code&gt; exactly as shown, replacing &lt;code&gt;YOUR_OPEN_AI_KEY_GOES_HERE&lt;/code&gt; with your actual OpenAI key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In our &lt;code&gt;index.js&lt;/code&gt; file, we will import the following dependencies:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This imports the OpenAI LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;langchain/llms/openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This imports the mechanism that helps create the messages&lt;/span&gt;
&lt;span class="c1"&gt;// called `prompts` we send to the LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PromptTemplate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;langchain/prompts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This imports the tool called `chains` that helps combine &lt;/span&gt;
&lt;span class="c1"&gt;// the model and prompts so we can communicate with the LLM&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LLMChain&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;langchain/chains&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// This helps connect to our .env file&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;dotenv&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;dotenv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Interpret the emotions conveyed by the following emoji(s): {emojis}. Consider each emoji individually and in combination with others in the series. Apply your knowledge of common emoji usage, cultural contexts, and emotional expressions to provide an insightful interpretation. When analyzing multiple emojis, consider the sequence and combination of the emojis to understand any nuanced emotional narrative or sentiment they may collectively express. Additionally, take into account the following considerations:

Emoji Specificity: Identify each emoji and any specific emotions or ideas it typically represents.
Cultural Context: Acknowledge if certain emojis have unique meanings in specific cultural contexts.
Emotional Range: Recognize if emojis express a range of emotions, from positive to negative or neutral.
Sequential Interpretation: When multiple emojis are used, analyze if the sequence changes the overall emotional message.
Complementary and Contrasting Emotions: Note if emojis complement or contrast each other emotionally.
Common Usage Patterns: Reflect on how these emojis are commonly used in digital communication to infer the underlying emotions.
Sarcasm or Irony Detection: Be aware of the possibility of sarcasm or irony in the use of certain emojis.
Emoji Evolution: Consider any recent changes in how these emojis are used or perceived in digital communication.

Based on these guidelines, in one very short sentence, provide a short interpretation of the emotions conveyed by the inputted emoji(s). `&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promptTemplate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PromptTemplate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;inputVariables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emojis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Above we created a template variable that contains our&lt;/span&gt;
&lt;span class="c1"&gt;// detailed instructions for the LLM, we also added a &lt;/span&gt;
&lt;span class="c1"&gt;// variable {emojis} which would be replaced with the emojis&lt;/span&gt;
&lt;span class="c1"&gt;// passed in at runtime.&lt;/span&gt;
&lt;span class="c1"&gt;// We then create a prompt template from the template and&lt;/span&gt;
&lt;span class="c1"&gt;// input variable.&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
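&lt;p&gt;Under the hood, a prompt template essentially substitutes the input variables into the template string. Here is a minimal plain-JavaScript sketch of that substitution (an illustration only, not LangChain's actual implementation):&lt;/p&gt;

```javascript
// Simplified stand-in for what a prompt template does:
// replace each {variable} placeholder with the supplied value.
function formatTemplate(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in values ? String(values[name]) : match
  );
}

const prompt = formatTemplate(
  "Interpret the emotions conveyed by the following emoji(s): {emojis}.",
  { emojis: "😂🤣" }
);

console.log(prompt);
// Interpret the emotions conveyed by the following emoji(s): 😂🤣.
```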



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="c1"&gt;// We create our model and pass it a `temperature` of 0.5&lt;/span&gt;
&lt;span class="c1"&gt;// The `temperature` represents how creative we want &lt;/span&gt;
&lt;span class="c1"&gt;// the model to be and it takes values 0.1 - 1.&lt;/span&gt;
&lt;span class="c1"&gt;// A lower temperature (closer to 0.1) makes the model's&lt;/span&gt;
&lt;span class="c1"&gt;// responses more predictable and conservative. On the &lt;/span&gt;
&lt;span class="c1"&gt;// other hand, a higher temperature (closer to 1) allows for&lt;/span&gt;
&lt;span class="c1"&gt;// more creativity and variation in responses.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;openAIModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// We then use a chain to combine our LLM with our &lt;/span&gt;
&lt;span class="c1"&gt;// prompt template&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llmChain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;openAIModel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;promptTemplate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// We then call the chain to communicate with the LLM&lt;/span&gt;
&lt;span class="c1"&gt;// and pass in the emojis we want to be explained.&lt;/span&gt;
&lt;span class="c1"&gt;// Note that the property name `emojis` below must match the&lt;/span&gt;
&lt;span class="c1"&gt;// variable name in the template earlier created.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llmChain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;emojis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;😂🤣&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Log result to the console&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Now run &lt;code&gt;node index.js&lt;/code&gt; in the terminal. You should then see a result similar to this in the console:
&lt;code&gt;The emojis 😂🤣 convey a sense of extreme amusement and laughter, possibly indicating that the sender found something very funny.&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind that each time you run the model, the results might vary slightly. However, the overall meaning and context should remain consistent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackblitz.com/edit/stackblitz-starters-yyt9t3?file=index.js"&gt;Here&lt;/a&gt; is a stackblitz with a demo:&lt;br&gt;
&lt;iframe src="https://stackblitz.com/edit/stackblitz-starters-yyt9t3?embed=1&amp;amp;file=index.js" width="100%" height="500"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Don't hesitate to ask any questions you have in the comments. I'll make sure to respond as quickly as possible.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Improve performance in an Angular app Using Brotli and Gzip Text Compression</title>
      <dc:creator>Oyemade Oyemaja</dc:creator>
      <pubDate>Mon, 25 Apr 2022 12:14:49 +0000</pubDate>
      <link>https://dev.to/oyemade/improve-performance-in-an-angular-app-using-brotli-and-gzip-text-compression-2p1e</link>
      <guid>https://dev.to/oyemade/improve-performance-in-an-angular-app-using-brotli-and-gzip-text-compression-2p1e</guid>
      <description>&lt;p&gt;Text compression essentially reduces the file size/payload of text-based resources, decreasing the amount of data transferred. A compressed text-based resource is delivered to your browser faster therefore text-based resources should be served with compression to minimise total network bytes.&lt;/p&gt;

&lt;p&gt;Brotli and Gzip are compression algorithms that can reduce the overall bundle size of an application by at least 50%, significantly improving performance. In this blog post, we will look at how to generate Brotli and Gzip versions of our bundle so that browsers that support them can request them, and fall back to the uncompressed version otherwise.&lt;/p&gt;

&lt;p&gt;To achieve this in Angular, we have to extend the builder used when building the application for production so that we can run additional webpack plugins during the build process. The plugins needed are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"&lt;code&gt;compression-webpack-plugin&lt;/code&gt;" for our Gzip files&lt;/li&gt;
&lt;li&gt;"&lt;code&gt;brotli-webpack-plugin&lt;/code&gt;" for Brotli files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To implement this technique:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;npm install -D compression-webpack-plugin brotli-webpack-plugin&lt;/code&gt; to install the packages as dev dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a custom webpack config file (&lt;code&gt;custom-webpack.config.js&lt;/code&gt;) in your &lt;code&gt;src&lt;/code&gt; folder that will run our plugins. Paste the following into the file.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const CompressionPlugin = require("compression-webpack-plugin");
const BrotliPlugin = require("brotli-webpack-plugin");

module.exports = {
  plugins: [
    new CompressionPlugin({
      algorithm: "gzip",
    }),
    new BrotliPlugin(),
  ],
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;npm i -D @angular-builders/custom-webpack&lt;/code&gt; to install the builder that allows us to run additional webpack configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;code&gt;angular.json&lt;/code&gt; file, under the build property, replace the value of the builder property and add a new property called options, as shown below:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...

"builder": "@angular-builders/custom-webpack:browser",
"options": {
    "customWebpackConfig": {
        "path": "src/custom-webpack.config.js",
        "replaceDuplicatePlugins": true
    }
}

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the above has been completed, run &lt;code&gt;ng build&lt;/code&gt;. If it runs successfully, look in the &lt;code&gt;dist/projectName&lt;/code&gt; folder: you should find &lt;code&gt;.br&lt;/code&gt; and &lt;code&gt;.gz&lt;/code&gt; versions of all &lt;code&gt;.js&lt;/code&gt; files, which should be significantly smaller in size.&lt;/p&gt;

&lt;p&gt;Also note that, depending on the server the application is deployed to, some configuration might be needed to serve &lt;code&gt;.br&lt;/code&gt; files, but most servers support them.&lt;/p&gt;
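&lt;p&gt;For example, on NGINX the pre-built files can be served directly with the built-in &lt;code&gt;gzip_static&lt;/code&gt; directive and, if the ngx_brotli module is installed, &lt;code&gt;brotli_static&lt;/code&gt;. A sketch; the exact setup depends on your server build:&lt;/p&gt;

```nginx
# Serve pre-built .gz files to clients that accept gzip
gzip_static on;

# Serve pre-built .br files (requires the ngx_brotli module)
brotli_static on;
```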

&lt;p&gt;&lt;strong&gt;BONUS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This technique can be applied to all text-based files for the web, especially CSS: with the help of Brotli, CSS files can be compressed by up to 70% because they contain a large number of repeated strings, which helps compression. Please note that after compressing these files, the "Content-Encoding: br" and "Content-Type" headers need to be set on them when deploying them to the server so that the browser understands them.&lt;/p&gt;

&lt;p&gt;I created a JavaScript project on &lt;a href="https://github.com/Oyemade/brotli-compression"&gt;GitHub&lt;/a&gt; that compresses files with the Brotli algorithm using webpack plugins. Feel free to use it to compress your text-based web files.&lt;/p&gt;

&lt;p&gt;Here are some links on how to set up compression on NGINX and Apache servers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.nginx.com/nginx/admin-guide/web-server/compression/"&gt;Gzip on NGINX&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.nginx.com/nginx/admin-guide/dynamic-modules/brotli/"&gt;Brotli on NGINX&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://httpd.apache.org/docs/2.4/mod/mod_brotli.html"&gt;Brotli on APACHE&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reviewed By: &lt;a href="https://twitter.com/esosanderelias"&gt;Sander Elias&lt;/a&gt;🙏🏽&lt;/p&gt;

</description>
      <category>angular</category>
      <category>performance</category>
      <category>architecture</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Managing table filters between components</title>
      <dc:creator>Oyemade Oyemaja</dc:creator>
      <pubDate>Sun, 16 May 2021 23:45:43 +0000</pubDate>
      <link>https://dev.to/oyemade/managing-table-filters-between-components-3fjh</link>
      <guid>https://dev.to/oyemade/managing-table-filters-between-components-3fjh</guid>
      <description>&lt;p&gt;&lt;strong&gt;TLDR&lt;/strong&gt;; The logic behind this is to pass the table filters as query params when exiting the table route and when trying to re-activate the table route add the {queryParamsHandling="preserve”} directive after the routerLink so that the query params would be preserved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Tables are very important features when it comes to building applications. Depending on the type of application, tables can range from small rows and columns of data to really huge dataset representations in tabular form. Regardless, being able to filter the data on a table is a very important part of the overall user experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq56v5uzcbzrryo7car2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq56v5uzcbzrryo7car2n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After implementing filters, another issue arises: how to restore previously applied filters after a component containing a table has been destroyed. Here is a use case: we have a component showing a list of customers in a table with filters, and each row ends with an open button. Let's assume a user of the application filters the customers by name, is left with a particular customer, and clicks open. The open button takes the user to a new component that gives more insight into the customer's history than can be represented in a table. All filters applied are lost once the user navigates away from the customers table component and are reset to their defaults when it is reinitialized, which can be a very bad user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible Solutions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using a state management tool to manage the filter state between the components, e.g. NgRx, Akita, etc.&lt;/li&gt;
&lt;li&gt;Using a BehaviorSubject to store the filter state.&lt;/li&gt;
&lt;li&gt;Passing the filter state as query params.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Implementation of the solution&lt;/strong&gt;&lt;br&gt;
We will apply the third solution because it is the easiest to implement.&lt;br&gt;
Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;When routing to another page from our table component we want to pass our filter values as query params. This can be done from the template by using this syntax: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6zxop32cey3dhun0hyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6zxop32cey3dhun0hyo.png" alt="image"&gt;&lt;/a&gt; &lt;br&gt;
or from the component by: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cbxrw53gj1bkmv77bx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cbxrw53gj1bkmv77bx1.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When routing back to the table component, the {queryParamsHandling="preserve"} directive must be set along with the routerLink directive so that the query params on the route are preserved.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniztqe9g9za6kj1gh2bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniztqe9g9za6kj1gh2bu.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the table component, when initializing the filters, they should be set to the value of the query params, or an empty string if no query param is present, so that previous filters are repopulated. This can be done using this syntax: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlv48hbj49trabgtkck1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlv48hbj49trabgtkck1.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
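&lt;p&gt;Since the steps above are shown as screenshots, here is a rough plain-JavaScript sketch of the round trip the router performs. The filter name and value are hypothetical; in a real app, the Router and ActivatedRoute handle this (de)serialization for you:&lt;/p&gt;

```javascript
// Hypothetical filter state applied on the table.
const filters = { name: "Ada" };

// Leaving the table route: the filters are serialized into
// query params (what passing queryParams to router.navigate does).
const params = new URLSearchParams(filters);
const url = "/customers?" + params.toString();
console.log(url); // /customers?name=Ada

// Returning with queryParamsHandling="preserve": the params survive,
// so the table can re-initialize its filters from them.
const restored = new URLSearchParams(url.split("?")[1]);
const nameFilter = restored.get("name") || "";
console.log(nameFilter); // Ada
```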

&lt;p&gt;With this setup, you can manage your filter state without any extra setup or state management solution.&lt;/p&gt;

&lt;p&gt;Thanks for reading.&lt;/p&gt;

&lt;p&gt;Reviewed by&lt;br&gt;
&lt;a href="https://twitter.com/bonnster75" rel="noopener noreferrer"&gt;Bonnie Brennan #ngMom &lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/esosanderelias" rel="noopener noreferrer"&gt;Sander Elias&lt;/a&gt;&lt;/p&gt;

</description>
      <category>angular</category>
      <category>javascript</category>
      <category>typescript</category>
      <category>routing</category>
    </item>
  </channel>
</rss>
