<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roshan Sanjeewa Wijesena</title>
    <description>The latest articles on DEV Community by Roshan Sanjeewa Wijesena (@rswijesena).</description>
    <link>https://dev.to/rswijesena</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1365290%2Fa7ef7a2f-c58b-45a9-9c16-91a8b01ef3ac.jpeg</url>
      <title>DEV Community: Roshan Sanjeewa Wijesena</title>
      <link>https://dev.to/rswijesena</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rswijesena"/>
    <language>en</language>
    <item>
      <title>How to Customize Stubborn Chart Colors in Boomi Flow Using CSS Filters</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Tue, 10 Mar 2026 00:48:07 +0000</pubDate>
      <link>https://dev.to/rswijesena/how-to-customize-stubborn-chart-colors-in-boomi-flow-using-css-filters-18id</link>
      <guid>https://dev.to/rswijesena/how-to-customize-stubborn-chart-colors-in-boomi-flow-using-css-filters-18id</guid>
      <description>&lt;p&gt;If you build dashboards in Boomi Flow, you know how powerful its charting components can be. Powered by Chart.js under the hood, it’s usually straightforward to get a good-looking dashboard up and running. But what happens when you need two charts on the same page—like a Donut Chart and a Bar Chart—to use completely different color palettes, and the platform refuses to cooperate?&lt;/p&gt;

&lt;p&gt;If you’ve tried overriding the --color-chart CSS variables only to watch both charts stubbornly share the exact same global colors, you aren't alone. Here is a breakdown of why this happens and the CSS "hack" you need to fix it.&lt;/p&gt;

&lt;p&gt;The Problem: The Global Variable Trap&lt;br&gt;
In Boomi Flow, chart colors are typically controlled by a set of CSS variables (e.g., --color-chart-1, --color-chart-2) attached to a high-level .flow class.&lt;/p&gt;

&lt;p&gt;When you place a Bar Chart and a Donut Chart on the same page, they both act as children of this .flow parent. Even if you wrap each chart in a custom container (like .mizuho-barchart and .mizuho-donut) and try to assign unique variables to those classes, Boomi's engine often bypasses your local CSS. Instead, the JavaScript engine jumps straight to the root .flow class, grabs the global color palette once, and paints both canvas elements with the exact same brush.&lt;/p&gt;

&lt;p&gt;The Solution: The Canvas Filter Hack&lt;br&gt;
Because the charts are rendered onto an HTML canvas element, they are essentially flattened images once drawn. Standard CSS variables can't easily change pixels that are already painted.&lt;/p&gt;

&lt;p&gt;Instead of fighting the JavaScript engine for control of the CSS variables, we can let Boomi paint both charts with the same global palette, and then use CSS filter properties to visually shift the colors of the second chart post-render.&lt;/p&gt;

&lt;p&gt;Here is how you do it:&lt;/p&gt;

&lt;p&gt;Step 1: Set Your Base Palette&lt;br&gt;
First, define the global CSS variables for your primary chart (in this case, the Donut Chart). We will use a clean, professional palette of Dark Navy, Coral, Teal, and Medium Blue.&lt;/p&gt;

&lt;p&gt;CSS&lt;br&gt;
/* 1. BASE PALETTE (This colors the Donut Chart) */&lt;br&gt;
.flow {&lt;br&gt;
    --color-chart-1: #033d58 !important; /* Dark Navy */&lt;br&gt;
    --color-chart-2: #ff7c66 !important; /* Coral */&lt;br&gt;
    --color-chart-3: #22c5be !important; /* Teal */&lt;br&gt;
    --color-chart-4: #4a8bd6 !important; /* Medium Blue */&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Step 2: Override the Second Chart with hue-rotate&lt;br&gt;
Next, we target the specific container for the second chart (the Bar Chart) and apply a CSS filter directly to its canvas.&lt;/p&gt;

&lt;p&gt;The hue-rotate() filter pushes the original colors across the color wheel. We can combine this with saturate and brightness to fine-tune the resulting color so it looks entirely distinct from the Donut Chart.&lt;/p&gt;

&lt;p&gt;CSS&lt;br&gt;
/* 2. BAR CHART OVERRIDE (Dramatically shifts the colors) */&lt;br&gt;
.mizuho-barchart canvas {&lt;br&gt;
    /* hue-rotate(120deg) shifts the Navy base to a rich Emerald Green.&lt;br&gt;
       saturate(1.2) keeps it punchy.&lt;br&gt;
       brightness(1.1) lightens it slightly.&lt;br&gt;
    */&lt;br&gt;
    filter: hue-rotate(120deg) saturate(1.2) brightness(1.1);&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Adjusting the Formula&lt;br&gt;
Because your Bar Chart is now mathematically linked to the base palette, you can easily change its color profile just by tweaking the degrees:&lt;/p&gt;

&lt;p&gt;0deg: Stays Dark Navy (matches the base).&lt;/p&gt;

&lt;p&gt;120deg: Emerald Green.&lt;/p&gt;

&lt;p&gt;180deg: Warm Bronze.&lt;/p&gt;

&lt;p&gt;240deg: Deep Purple.&lt;/p&gt;
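&lt;p&gt;To build an intuition for what those degree values do, here is a small Python sketch (a hypothetical helper, standard library only) that rotates a hex color's hue. Note that it rotates in HLS space, while the CSS hue-rotate() filter uses a linear RGB matrix, so the rendered colors will not match this exactly; treat it as a rough preview.&lt;/p&gt;

```python
import colorsys

def rotate_hue(hex_color, degrees):
    """Rotate a hex color's hue in HLS space (rough stand-in for CSS hue-rotate)."""
    # Parse "#rrggbb" into three 0..1 floats
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0  # shift the hue around the wheel
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return "#%02x%02x%02x" % tuple(round(c * 255) for c in (r2, g2, b2))

print(rotate_hue("#033d58", 0))    # prints #033d58 (0deg leaves the base Navy unchanged)
print(rotate_hue("#033d58", 120))  # the base color pushed a third of the way around the wheel
```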

&lt;p&gt;Conclusion&lt;br&gt;
When working with low-code platforms, you sometimes hit a wall where the underlying JavaScript framework overrides your custom CSS. By treating the rendered canvas as an image and using powerful CSS filters like hue-rotate, you can bypass these limitations and deliver exactly the UI your users (or clients) are asking for.&lt;/p&gt;

</description>
      <category>boomi</category>
    </item>
    <item>
      <title>How to Connect Boomi to ActiveMQ Without Crashing Atom Queues</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Tue, 24 Feb 2026 02:27:54 +0000</pubDate>
      <link>https://dev.to/rswijesena/how-to-connect-boomi-to-activemq-without-crashing-atom-queues-kkp</link>
      <guid>https://dev.to/rswijesena/how-to-connect-boomi-to-activemq-without-crashing-atom-queues-kkp</guid>
      <description>&lt;p&gt;If you are building an integration in Boomi that connects to an external ActiveMQ broker (like version 5.19.x) while also utilizing Boomi’s internal Atom Queues, you might run into a fatal issue: your Atom crashes and refuses to restart.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of why this happens and the exact steps to fix it.&lt;/p&gt;

&lt;p&gt;The Problem: Classpath Collisions&lt;br&gt;
To connect to an external ActiveMQ broker, Boomi requires the specific ActiveMQ client driver (e.g., activemq-client-5.19.1.jar).&lt;/p&gt;

&lt;p&gt;The standard Boomi documentation often suggests dropping custom JAR files directly into the Atom’s userlib or userlib/jms directories. However, if you are also using Atom Queues, this creates a massive problem.&lt;/p&gt;

&lt;p&gt;Boomi’s Atom Queues are powered by an embedded version of ActiveMQ. When you enable them, Boomi automatically loads its own internal ActiveMQ JARs into the JVM. When you manually place your new activemq-client JAR into the userlib folder, Boomi tries to load both versions at startup.&lt;/p&gt;

&lt;p&gt;The result? A fatal classpath collision (usually a java.lang.NoSuchMethodError or LinkageError). The Atom crashes, and the service will not restart until the conflict is resolved.&lt;/p&gt;

&lt;p&gt;The Solution: Connector-Scoped Custom Libraries&lt;br&gt;
To fix this, we need to stop loading the external ActiveMQ JAR globally and instead isolate it so it is only used when your specific JMS Connection is triggered. We do this using Boomi’s Custom Library feature.&lt;/p&gt;

&lt;p&gt;Step 1: Resuscitate the Atom&lt;br&gt;
First, you must remove the conflicting file to get your Atom back online.&lt;/p&gt;

&lt;p&gt;Navigate to your Atom's installation directory on your server.&lt;/p&gt;

&lt;p&gt;Go to the userlib or userlib/jms folder and delete the activemq-client-x.x.x.jar you manually placed there.&lt;/p&gt;

&lt;p&gt;Restart your Boomi Atom service. It should now boot up successfully.&lt;/p&gt;

&lt;p&gt;Step 2: Upload the JAR via the Platform&lt;br&gt;
Stop putting files directly on the server. Let Boomi manage them.&lt;/p&gt;

&lt;p&gt;Log into the Boomi platform.&lt;/p&gt;

&lt;p&gt;Navigate to Settings &amp;gt; Account Information and Setup &amp;gt; Account Libraries.&lt;/p&gt;

&lt;p&gt;Upload your activemq-client-x.x.x.jar file.&lt;/p&gt;

&lt;p&gt;Step 3: Create the Custom Library Component&lt;br&gt;
This is the crucial step where the magic happens.&lt;/p&gt;

&lt;p&gt;Go to the Build tab and create a new Custom Library component.&lt;/p&gt;

&lt;p&gt;Important: Set the Custom Library Type to Connector (Do not set it to General).&lt;/p&gt;

&lt;p&gt;Select your specific JMS Connection from the Connector Type dropdown.&lt;/p&gt;

&lt;p&gt;Select the JAR file you uploaded in Step 2 to include it in this library.&lt;/p&gt;

&lt;p&gt;Step 4: Deploy&lt;br&gt;
Save the Custom Library component.&lt;/p&gt;

&lt;p&gt;Click Create Packaged Component and deploy it to your Atom's environment.&lt;/p&gt;

&lt;p&gt;Why This Works&lt;br&gt;
By setting the Custom Library type specifically to Connector, you are telling the Boomi JVM to load that specific activemq-client JAR only when that exact JMS connection is executing.&lt;/p&gt;

&lt;p&gt;It keeps the JAR completely hidden from the rest of the Atom, preventing any conflicts with Boomi's internal Atom Queues. Your external connection works flawlessly, and your internal queues keep humming along!&lt;/p&gt;

</description>
      <category>backend</category>
      <category>distributedsystems</category>
      <category>java</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AI Chat-Bot with pinecone vector DB</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Tue, 21 May 2024 13:02:20 +0000</pubDate>
      <link>https://dev.to/rswijesena/ai-chat-bot-with-pinecone-vector-db-d7d</link>
      <guid>https://dev.to/rswijesena/ai-chat-bot-with-pinecone-vector-db-d7d</guid>
      <description>&lt;p&gt;Vector databases like Pinecone are a good candidate for storing the custom data you want to use in your next AI application.&lt;/p&gt;

&lt;p&gt;In this blog post I will be using the Pinecone vector database, which is easy to use and cloud native.&lt;/p&gt;

&lt;p&gt;I will also be using the OpenAI APIs as my LLM.&lt;/p&gt;

&lt;p&gt;First get your pinecone API Key - &lt;a href="https://app.pinecone.io/organizations/-/projects"&gt;https://app.pinecone.io/organizations/-/projects&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will need an OpenAI API key as well to call the OpenAI models&lt;/p&gt;

&lt;p&gt;Install the Python libraries below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install langchain
!pip install pinecone-client
!pip install openai
!pip install pypdf
!pip install tiktoken
!pip install langchain-community
%pip install --upgrade --quiet  langchain-pinecone langchain-openai langchain
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Pinecone
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain_community.document_loaders import TextLoader
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
import os
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a folder in the current workspace and upload any PDF file containing your data into it. This will be the custom data used to ground your chat agent&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!mkdir pdfs
loader = PyPDFDirectoryLoader("pdfs")
data = loader.load()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Split the loaded data into smaller chunks before inserting into the vector database&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;text_chunks = text_splitter.split_documents(data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
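&lt;p&gt;To see what chunking with overlap actually does, here is a dependency-free sketch. It is a deliberate simplification: the real RecursiveCharacterTextSplitter also tries to split on natural separators like paragraphs and sentences rather than cutting at fixed offsets.&lt;/p&gt;

```python
def chunk_text(text, chunk_size=20, overlap=5):
    """Split text into fixed-size character chunks, with overlap between neighbours."""
    step = chunk_size - overlap  # each chunk starts this many characters after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Neighbouring chunks share `overlap` characters, so no sentence is cut without context
for c in chunk_text("Attention is all you need, and vectors remember.", 20, 5):
    print(repr(c))
```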



&lt;p&gt;Set your keys&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;os.environ["OPENAI_API_KEY"] = "&amp;lt;Key&amp;gt;"

os.environ["PINECONE_API_KEY"] = "&amp;lt;Key&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use OpenAIEmbeddings to embed your texts&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;embeddings = OpenAIEmbeddings()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the LangChain PineconeVectorStore module to store data in Pinecone.&lt;br&gt;
Before that, make sure you have created a new index in your Pinecone database with a namespace, in my case "roshan"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_pinecone import PineconeVectorStore
index = "vectorone"
docsearch = PineconeVectorStore.from_documents(text_chunks, embeddings, index_name=index,namespace='roshan')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you check your Pinecone database, you should be able to see the data&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftie09mfxjwejma6uj9lp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftie09mfxjwejma6uj9lp.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can run a query to ask questions about your uploaded PDF data&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docsearch.as_retriever()
query = "what is Scaled Dot-Product Attention?"
docs = docsearch.similarity_search(query)
llm = OpenAI(temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever())
qa.run(query)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can build a small command-line chatbot to see how it works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys
while True:
  user_input = input(f"Input Prompt:" )
  if user_input == "exit":
    sys.exit()
  if user_input == '':
    continue
  result = qa.run({'query': user_input})
  print(result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhes5yh8b76r3ih6kqiig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhes5yh8b76r3ih6kqiig.png" alt="Image description" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vectordatabase</category>
      <category>ai</category>
      <category>genai</category>
      <category>python</category>
    </item>
    <item>
      <title>How to run an LLM model locally with Hugging-Face 🤗</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Fri, 17 May 2024 05:22:44 +0000</pubDate>
      <link>https://dev.to/rswijesena/how-run-llm-modal-locally-with-hugging-face-8bg</link>
      <guid>https://dev.to/rswijesena/how-run-llm-modal-locally-with-hugging-face-8bg</guid>
      <description>&lt;p&gt;Welcome back! In this post I would like to talk about how to download and run an LLM model on your local machine/environment.&lt;/p&gt;

&lt;p&gt;We again use Hugging Face here 🤗. You will need a Hugging Face API key first.&lt;/p&gt;

&lt;p&gt;Run the code below to download google/flan-t5-large to your local machine; it will take a while, and you can watch the progress in your Jupyter notebook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.llms import HuggingFacePipeline
import torch
from transformers import pipeline,AutoTokenizer,AutoModelForCausalLM,AutoModelForSeq2SeqLM
model_id= "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer,max_length=128)
local_llm = HuggingFacePipeline(pipeline=pipe)

prompt = PromptTemplate(
    input_variables=["name"],
    template="Can you tell me about footballer {name}",
)
chain = LLMChain(prompt=prompt, llm=local_llm)
chain.run("messi")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>huggingface</category>
      <category>python</category>
      <category>learning</category>
    </item>
    <item>
      <title>How to run any LLM model with Hugging-Face 🤗</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Fri, 17 May 2024 04:44:59 +0000</pubDate>
      <link>https://dev.to/rswijesena/how-to-run-any-llm-model-with-hugging-face-4hg3</link>
      <guid>https://dev.to/rswijesena/how-to-run-any-llm-model-with-hugging-face-4hg3</guid>
      <description>&lt;p&gt;Hugging Face 🤗 is a repository that hosts most of the LLM models available in the world. &lt;a href="https://huggingface.co/"&gt;https://huggingface.co/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you go to the Models section of the repo, you will see thousands of models available to download or use as-is.&lt;/p&gt;

&lt;p&gt;Let's walk through an example that uses google/flan-t5-large for text2text generation&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the Python libraries below
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install huggingface_hub
!pip install transformers
!pip install accelerate
!pip install bitsandbytes
!pip install langchain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Get a huggingface API Key - &lt;a href="https://huggingface.co/settings/tokens"&gt;https://huggingface.co/settings/tokens&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can run below python code now with your Key&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain import PromptTemplate, HuggingFaceHub, LLMChain
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "&amp;lt;HUGGINGFACEKEY&amp;gt;"
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is the good name for a company that makes {product}",
)

chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-large",model_kwargs={"temperature":0.1, "max_length":64}))

chain.run("fruits")

Results from Model = Fruits is a footballer from the United States.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>huggingface</category>
      <category>llm</category>
      <category>genai</category>
    </item>
    <item>
      <title>Google Search with Langchain</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Wed, 15 May 2024 07:14:52 +0000</pubDate>
      <link>https://dev.to/rswijesena/google-search-with-langchain-2i18</link>
      <guid>https://dev.to/rswijesena/google-search-with-langchain-2i18</guid>
      <description>&lt;p&gt;Have you ever wanted to hook the Google Search API into your GenAI application? You don't need to pay for wrapper services like SerpApi; we have the LangChain wrapper &lt;code&gt;langchain-google-community&lt;/code&gt; for the Google Search API, and it's free.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You need to get a Google API Key - &lt;a href="https://developers.google.com/custom-search/v1/introduction"&gt;https://developers.google.com/custom-search/v1/introduction&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Python packages below in your environment&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install langchain
!pip install -U langchain-google-community
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Set your Google API key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
os.environ["GOOGLE_API_KEY"] = "&amp;lt;YOUR_API_KEY&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. The code below will help you search Google&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_google_community import GoogleSearchAPIWrapper
from langchain_core.tools import Tool
import os

import inspect

# Get the constructor of GoogleSearchAPIWrapper
constructor = inspect.signature(GoogleSearchAPIWrapper)

# Print the number of parameters it expects
print(f"Number of parameters expected: {len(constructor.parameters)}")

api_key = os.environ.get("GOOGLE_API_KEY")

search = GoogleSearchAPIWrapper()

# Create a Tool object using the GoogleSearchAPIWrapper instance
tool = Tool(
    name="Google Search",
    description="Search Google for recent results.",
    func=search.run,
)

# Use the tool to perform a search
results = tool.run("What is the capital of France?")

print(results)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>genai</category>
      <category>python</category>
      <category>langchain</category>
    </item>
    <item>
      <title>Advance Function Call With Open AI</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Tue, 14 May 2024 10:27:25 +0000</pubDate>
      <link>https://dev.to/rswijesena/advance-function-call-with-open-ai-45na</link>
      <guid>https://dev.to/rswijesena/advance-function-call-with-open-ai-45na</guid>
      <description>&lt;p&gt;Sometimes you may wonder: &lt;em&gt;can I book a flight using ChatGPT&lt;/em&gt;, or can I get the latest flight details for my favourite holiday destination? The simple answer, as of today, is &lt;strong&gt;no&lt;/strong&gt;. ChatGPT 3.5 has training data up to September 2021, hence it does not know any real-time information as of today.&lt;/p&gt;

&lt;p&gt;But using the OpenAI APIs we can build a generative AI application that calls an extra API to get the latest information, like real-time flight details, and embeds it into our GenAI app.&lt;/p&gt;

&lt;p&gt;Let's build a GenAI application to get the latest flight information for any given destination&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You need to acquire openAI API key. - &lt;a href="https://platform.openai.com/api-keys"&gt;https://platform.openai.com/api-keys&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We need python development environment for this application you can use jupyter notebook or my favourite is Google Colab. - &lt;a href="https://colab.google/"&gt;https://colab.google/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need to install below python packages&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Set your OpenAI API key in the Python application and define the function description
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import openai
from openai import OpenAI

openAIKey="YOUR_OPENAI_APIKEY"

client = OpenAI(api_key=openAIKey)

function_description = [
    {
        "name": "get_flight_info",
        "description": "Get the next flight between two airports",
        "parameters": {
            "type": "object",
            "properties": {
                "from": {"type": "string", "description": "The departure airport e.g DEL"},
                "to": {"type": "string", "description": "The destination airport e.g SYD"},
                "date": {"type": "string"},
            },
            "required": ["from", "to"],
        },
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Now you can define your prompt; this is the real question that would be asked by the end user
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user_prompt = "When's the next flight from Colombo to Sydney? "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Now you need to call the OpenAI API to extract the origin and destination airports from the user prompt
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response2 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": user_prompt
        }
    ],  
    #Add function calling
    functions=function_description,
    function_call="auto" # specify the function call

)
origin = json.loads(response2.choices[0].message.function_call.arguments).get("from")
destination = json.loads(response2.choices[0].message.function_call.arguments).get("to")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;This is the fun part: you can build your own function to call a real-time flight information API, accepting any parameters. For this example, I will mimic a sample response from an API.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from datetime import datetime, timedelta
def get_flight_info(from_airport, to_airport):
  """Get Flight information between two airports"""
  #Example out put

  flight_info = {
      "from_airport": from_airport,
      "to_airport": to_airport,
      "date": str(datetime.now() + timedelta(hours=2)),
      "airline" : "Qantas",
      "flight_number": "QF466"
  }
  return json.dumps(flight_info)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
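&lt;p&gt;When the model decides to call a function, it returns the function name plus a JSON string of arguments; your code must parse that string and dispatch to the real implementation. Here is a minimal sketch of that dispatch step, using a hard-coded arguments payload in place of a live API response:&lt;/p&gt;

```python
import json

def get_flight_info(from_airport, to_airport):
    """Stand-in for the real flight lookup."""
    return json.dumps({"from": from_airport, "to": to_airport, "airline": "Qantas"})

# What the model returns in message.function_call (mocked here)
function_call = {"name": "get_flight_info", "arguments": '{"from": "CMB", "to": "SYD"}'}

# Map function names the model may request to actual Python callables
available = {"get_flight_info": get_flight_info}

args = json.loads(function_call["arguments"])
result = available[function_call["name"]](args["from"], args["to"])
print(result)  # prints {"from": "CMB", "to": "SYD", "airline": "Qantas"}
```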



&lt;ol&gt;
&lt;li&gt;Now pass this function's output back to the OpenAI API
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response3 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": user_prompt
        },
        {
            "role": "function",
            "name": "get_flight_info",
            "content": get_flight_info(origin, destination)
        }
    ],  
    #Add function calling
    functions=function_description,
    function_call="auto" # specify the function call

)

response3.choices[0].message.content
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The next flight from Colombo (CMB) to Sydney (SYD) is on May 14, 2024, at 11:30 AM. It is operated by Qantas with flight number QF466.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the code - enjoy &lt;br&gt;
&lt;a href="https://github.com/rswijesena/AI/blob/3bffb41787c1a5a0110993e305005b0ff990d362/Advance_OpenAI_Function.ipynb"&gt;https://github.com/rswijesena/AI/blob/3bffb41787c1a5a0110993e305005b0ff990d362/Advance_OpenAI_Function.ipynb&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openai</category>
      <category>ai</category>
      <category>python</category>
      <category>genai</category>
    </item>
    <item>
      <title>Google Colab With Open AI</title>
      <dc:creator>Roshan Sanjeewa Wijesena</dc:creator>
      <pubDate>Sun, 12 May 2024 02:47:48 +0000</pubDate>
      <link>https://dev.to/rswijesena/google-colab-with-open-ai-2l7j</link>
      <guid>https://dev.to/rswijesena/google-colab-with-open-ai-2l7j</guid>
      <description>&lt;p&gt;Google Colab is a cloud-based Python Jupyter notebook platform, enabling users to execute Python and AI/ML code on Google-hosted CPU/GPU servers. A free version of Google Colab provides access to GPUs for short durations, depending on usage. &lt;/p&gt;

&lt;p&gt;By default, Google Colab includes popular ML Python libraries such as PyTorch, but notably, the OpenAI Python library is not preinstalled. To use it, you must first install it in your Google Colab environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;!pip install openai&lt;/code&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
