<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sreejinsreenivasan</title>
    <description>The latest articles on DEV Community by sreejinsreenivasan (@sreejinsreenivasan).</description>
    <link>https://dev.to/sreejinsreenivasan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F372978%2Fc9a94495-2f3b-468b-b772-a6e7ec2d75ba.jpeg</url>
      <title>DEV Community: sreejinsreenivasan</title>
      <link>https://dev.to/sreejinsreenivasan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sreejinsreenivasan"/>
    <language>en</language>
    <item>
      <title>Supabase: A Guide to Setting Up Your Local Environment</title>
      <dc:creator>sreejinsreenivasan</dc:creator>
      <pubDate>Thu, 18 Apr 2024 07:24:42 +0000</pubDate>
      <link>https://dev.to/sreejinsreenivasan/supabase-a-guide-to-setting-up-your-local-environment-4cgf</link>
      <guid>https://dev.to/sreejinsreenivasan/supabase-a-guide-to-setting-up-your-local-environment-4cgf</guid>
      <description>&lt;p&gt;Supabase, an open-source alternative to Firebase, offers a comprehensive suite of tools to streamline your development process. In this guide, we'll walk through the steps to set up your local environment with Supabase, empowering you to build and test your applications with ease.&lt;/p&gt;

&lt;h2&gt;Why Develop Locally with Supabase?&lt;/h2&gt;

&lt;p&gt;Before we delve into the setup process, let's explore why developing locally with Supabase is advantageous:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Faster Development:&lt;/strong&gt; Eliminates network latency and internet disruptions, speeding up the development process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easier Collaboration:&lt;/strong&gt; Facilitates seamless collaboration with team members on the same project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective:&lt;/strong&gt; Allows for unlimited local projects, reducing costs compared to using multiple live projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration in Code:&lt;/strong&gt; Stores table schemas in code for better organization and version control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work Offline:&lt;/strong&gt; Enables developers to work from anywhere without requiring constant internet access.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By setting up a local development environment with Supabase, you can streamline your workflow, improve productivity, and ensure a more reliable and cost-effective development experience.&lt;/p&gt;

&lt;h3&gt;Sources&lt;/h3&gt;

&lt;p&gt;Supabase documentation and developer resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/docs/guides/cli/getting-started#installing-the-supabase-cli" rel="noopener noreferrer"&gt;Supabase CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/docs/guides/cli/local-development" rel="noopener noreferrer"&gt;Local Development&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/supabase-local-dev" rel="noopener noreferrer"&gt;Supabase Local Dev: migrations, branching, and observability&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Before proceeding with setting up the local development environment for Supabase, ensure that the following prerequisites are met:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Node.js and npm:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Your local development environment should have Node.js and npm (the Node.js package manager) installed.&lt;/li&gt;
&lt;li&gt;These dependencies are necessary for managing Supabase-related packages and running the Supabase CLI.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Installed and Running:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Docker is required to run the Supabase stack locally.&lt;/li&gt;
&lt;li&gt;Ensure Docker is installed on your local machine.&lt;/li&gt;
&lt;li&gt;Start Docker to ensure it's running and accessible.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If Docker is not installed, you can download and install it from the official Docker website: &lt;a href="https://www.docker.com/products/docker-desktop" rel="noopener noreferrer"&gt;https://www.docker.com/products/docker-desktop&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once Docker is installed, start the Docker application to ensure it's running in the background.&lt;/p&gt;
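&lt;p&gt;To confirm these prerequisites are in place, you can run a quick check from Python. The snippet below is a minimal sketch; the &lt;code&gt;tool_available&lt;/code&gt; helper is our own name, not part of any Supabase tooling:&lt;/p&gt;

```python
import shutil

def tool_available(name: str) -> bool:
    """Return True if an executable with the given name is on PATH."""
    return shutil.which(name) is not None

# Check the prerequisites described above.
for tool in ("node", "npm", "docker"):
    status = "found" if tool_available(tool) else "MISSING"
    print(f"{tool}: {status}")
```

If any tool reports MISSING, install it before continuing.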

&lt;h2&gt;Setting Up the Local Development Environment&lt;/h2&gt;

&lt;p&gt;Now that we've ensured our prerequisites are in place, let's proceed with setting up the local development environment for Supabase: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Install the Supabase CLI:&lt;/strong&gt; Open your terminal or command prompt and install the Supabase CLI as a dev dependency using npm:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install supabase --save-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Initialize Your Project:&lt;/strong&gt; Create a new directory for your project and navigate into it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir my-supabase-project
cd my-supabase-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Initialize your project with the Supabase CLI. This creates the necessary files and folders for your Supabase project, such as &lt;code&gt;config.toml&lt;/code&gt;, where you can customize your local development environment settings:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx supabase init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
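&lt;p&gt;As an illustration, the generated &lt;code&gt;supabase/config.toml&lt;/code&gt; contains sections like the following. The exact keys and defaults depend on your CLI version, so treat this excerpt as a sketch rather than a complete file:&lt;/p&gt;

```toml
# supabase/config.toml (excerpt -- defaults may differ by CLI version)
project_id = "my-supabase-project"

[api]
port = 54321

[db]
port = 54322

[studio]
port = 54323
```

The ports shown here correspond to the service URLs printed by `supabase start` later in this guide.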



&lt;p&gt;&lt;strong&gt;Start the Supabase Stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With your project initialized, you can now start the Supabase stack locally:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx supabase start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This command pulls all the necessary Docker images and starts the containers for the Supabase stack. Once it finishes, you can access the Supabase services locally, including the database, the API server, and Supabase Studio.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Started supabase local development setup.

  API URL: http://127.0.0.1:54321
  GraphQL URL: http://127.0.0.1:54321/graphql/v1
  S3 Storage URL: http://127.0.0.1:54321/storage/v1/s3
           DB URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres
        Studio URL: http://127.0.0.1:54323
     Inbucket URL: http://127.0.0.1:54324
        JWT secret: super-secret-jwt-token-with-at-least-32-characters-long
        anon key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.
  service_role key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.
     S3 Access Key: 625729a08b95bf1b7ff351a663f3a23c
     S3 Secret Key: 850181e4652dd023b7a98c58ae0d2d34bd4
        S3 Region: local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Access Your Local Development Environment:&lt;/strong&gt;
Once the Supabase stack is running, you can access its components via their respective URLs:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;API URL:&lt;/strong&gt; &lt;a href="http://localhost:54321/" rel="noopener noreferrer"&gt;http://localhost:54321&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Database URL:&lt;/strong&gt; &lt;code&gt;postgresql://postgres:postgres@localhost:54322/postgres&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supabase Studio:&lt;/strong&gt; &lt;a href="http://localhost:54323/" rel="noopener noreferrer"&gt;http://localhost:54323&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Inbucket:&lt;/strong&gt; &lt;a href="http://localhost:54324/" rel="noopener noreferrer"&gt;http://localhost:54324&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Replace Environment Variables (Optional):&lt;/strong&gt; If needed, replace environment variables in your project's configuration files with the URLs and keys provided by the Supabase CLI:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NEXT_PUBLIC_SUPABASE_URL:&lt;/strong&gt; Set this variable to &lt;code&gt;http://localhost:54321&lt;/code&gt;, the URL of the local Supabase instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NEXT_PUBLIC_SUPABASE_ANON_KEY:&lt;/strong&gt; Obtain this key from the terminal output of &lt;code&gt;supabase start&lt;/code&gt;. It is generated dynamically and is specific to your local Supabase instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
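&lt;p&gt;For a Next.js project, for example, these variables could live in a &lt;code&gt;.env.local&lt;/code&gt; file. The key below is a placeholder; paste the real value from the &lt;code&gt;supabase start&lt;/code&gt; output:&lt;/p&gt;

```shell
# .env.local -- points the app at the local Supabase stack
NEXT_PUBLIC_SUPABASE_URL=http://localhost:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=PASTE_ANON_KEY_FROM_SUPABASE_START_OUTPUT
```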

&lt;p&gt;With these steps completed, your local development environment with Supabase is up and running, ready for you to start building and testing your applications.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Setting up a local environment with Supabase is a straightforward process that empowers developers to build and test their applications with confidence. By following the steps outlined in this guide, you can harness the full potential of Supabase and accelerate your development workflow.&lt;/p&gt;

&lt;p&gt;Happy coding with Supabase!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>supabase</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Retrieval Augmented Generation on Notion with Langchain, Supabase and OpenAI</title>
      <dc:creator>sreejinsreenivasan</dc:creator>
      <pubDate>Wed, 28 Feb 2024 07:08:39 +0000</pubDate>
      <link>https://dev.to/sreejinsreenivasan/rag-on-notion-with-supabase-and-openai-4gie</link>
      <guid>https://dev.to/sreejinsreenivasan/rag-on-notion-with-supabase-and-openai-4gie</guid>
      <description>&lt;p&gt;RAG, or Retrieval Augmented Generation, is a prominent AI framework in the era of large language models (LLMs). It enhances the capabilities of these models by integrating external knowledge, ensuring more accurate and current responses. A standard RAG system includes an LLM, a vector database, and some prompts as code that can be send queries to the LLM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fg7vxx2s1kt2h72y1ax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fg7vxx2s1kt2h72y1ax.png" alt="source:zilliz" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose and Goals:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Learn How to Vectorise Notion Pages and Databases:&lt;/strong&gt; We'll delve into the process of vectorising Notion pages and databases, enabling efficient storage and retrieval of information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Introduction to Retrieval Augmented Generation (RAG) using LangChain:&lt;/strong&gt; We'll provide an overview of RAG and demonstrate its implementation using LangChain, a powerful tool for integrating external knowledge into AI models.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this guide, we'll use three important tools: Notion, Supabase, and OpenAI. Notion holds the external information we are going to ingest. Supabase stores this information as vector representations that serve as context for the LLM. First, we'll set up these tools step by step. Then, we'll learn how to take information from Notion and store it in a vector store. After that, we'll use LangChain to build a knowledge retrieval system that can find the right answers for us. Finally, we'll see how all these pieces work together in a real situation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you're interested in learning more about prompt engineering and its importance in shaping the responses of AI models, check out my previous post: &lt;a href="https://www.sreejin.blog/blog/prompt-engineering-for-developers" rel="noopener noreferrer"&gt;Prompt Engineering for OpenAI Chat Completions&lt;/a&gt;. In this beginner's guide, I delve into the significance of crafting well-designed prompts to enhance the accuracy and relevance of AI-generated responses. Understanding prompt engineering can greatly improve your experience with AI models like OpenAI's Chat Completions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before diving into the implementation, let's ensure we have all the necessary requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Notion Database&lt;/strong&gt;: A table database in Notion with appropriate records.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notion Integration&lt;/strong&gt;: Generate an integration token to access the database programmatically. Visit the &lt;a href="https://www.notion.com/my-integrations" rel="noopener noreferrer"&gt;Notion Developers&lt;/a&gt; page, log in with your Notion account, create an integration, and connect it to the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase Setup&lt;/strong&gt;: Set up a Supabase project and obtain the Supabase URL and service key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment Variables&lt;/strong&gt;: Set environment variables for the OpenAI API key, the Supabase URL and service key, the Notion integration token, and the Notion database ID.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-openai-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUPABASE_URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-supabase-url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUPABASE_SERVICE_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-supabase-service-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NOTION_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-notion-integration-token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DATABASE_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-notion-database-id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Set Up the Supabase Database&lt;/h3&gt;

&lt;p&gt;Use these steps to set up your Supabase database if you haven't already.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Head over to &lt;a href="https://database.new/" rel="noopener noreferrer"&gt;https://database.new&lt;/a&gt; to provision your Supabase database.&lt;/li&gt;
&lt;li&gt;In the studio, jump to the &lt;a href="https://supabase.com/dashboard/project/_/sql/new" rel="noopener noreferrer"&gt;SQL editor&lt;/a&gt; and run the following script to enable &lt;code&gt;pgvector&lt;/code&gt; and setup your database as a vector store:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Enable the pgvector extension to work with embedding vectors&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="n"&gt;extension&lt;/span&gt; &lt;span class="n"&gt;if&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;exists&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Create a table to store your documents&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt;
  &lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;-- corresponds to Document.pageContent&lt;/span&gt;
    &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;-- corresponds to Document.metadata&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;-- 1536 works for OpenAI embeddings, change as needed&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Create a function to search for documents&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;match_documents&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;query_embedding&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="n"&gt;filter&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="s1"&gt;'{}'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="n"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;similarity&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;language&lt;/span&gt; &lt;span class="n"&gt;plpgsql&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="err"&gt;$$&lt;/span&gt;
&lt;span class="o"&gt;#&lt;/span&gt;&lt;span class="n"&gt;variable_conflict&lt;/span&gt; &lt;span class="n"&gt;use_column&lt;/span&gt;
&lt;span class="k"&gt;begin&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;
  &lt;span class="k"&gt;select&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;query_embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;similarity&lt;/span&gt;
  &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;
  &lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;@&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;filter&lt;/span&gt;
  &lt;span class="k"&gt;order&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;query_embedding&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt;$$&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
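&lt;p&gt;The &lt;code&gt;similarity&lt;/code&gt; value returned by &lt;code&gt;match_documents&lt;/code&gt; is cosine similarity: pgvector's distance operator used above computes cosine distance, and subtracting it from 1 recovers similarity. The same arithmetic in a pure-Python sketch, using toy 3-dimensional vectors in place of 1536-dimensional embeddings:&lt;/p&gt;

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as computed by pgvector's cosine operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

query = [1.0, 0.0, 0.0]
same_direction = [2.0, 0.0, 0.0]   # parallel vector: similarity 1
orthogonal = [0.0, 1.0, 0.0]       # orthogonal vector: similarity 0

# match_documents returns 1 - distance as the similarity column
print(1.0 - cosine_distance(query, same_direction))  # 1.0
print(1.0 - cosine_distance(query, orthogonal))      # 0.0
```

This is why rows are ordered by ascending distance: the closest documents have the highest similarity.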



&lt;h3&gt;Loading Notion Documents with NotionDBLoader&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: The following code loads documents from a single Notion database only. You can also export your entire Notion workspace and load the documents with &lt;code&gt;NotionDirectoryLoader&lt;/code&gt;; see LangChain's "ingesting your own dataset" guide for more details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We'll start by loading documents from a Notion database using the &lt;code&gt;NotionDBLoader&lt;/code&gt; class. This class retrieves pages from the database, reads their content, and returns a list of Document objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.document_loaders&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;NotionDBLoader&lt;/span&gt;

&lt;span class="n"&gt;NOTION_TOKEN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NOTION_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;DATABASE_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DATABASE_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;loader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;NotionDBLoader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;integration_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;NOTION_TOKEN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;database_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DATABASE_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;request_timeout_sec&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;  &lt;span class="c1"&gt;# Optional, defaults to 10
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Storing Documents with SupabaseVectorStore&lt;/h3&gt;

&lt;p&gt;Next, we'll store the retrieved documents in Supabase using the &lt;code&gt;SupabaseVectorStore&lt;/code&gt;. This component enables efficient storage and retrieval of indexed documents in Supabase.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SupabaseVectorStore&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;supabase.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_client&lt;/span&gt;

&lt;span class="n"&gt;SUPABASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUPABASE_URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;SUPABASE_SERVICE_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUPABASE_SERVICE_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SUPABASE_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SUPABASE_SERVICE_KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;SupabaseVectorStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;documents&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;query_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;match_documents&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, we've created a &lt;code&gt;SupabaseVectorStore&lt;/code&gt; from the retrieved documents. The &lt;code&gt;from_documents&lt;/code&gt; method takes the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docs&lt;/code&gt;: A list of Document objects to be stored in the vector store.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;embeddings&lt;/code&gt;: An instance of the &lt;code&gt;OpenAIEmbeddings&lt;/code&gt; class, which provides methods for generating embeddings from text.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;client&lt;/code&gt;: A Supabase client instance for interacting with the database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;table_name&lt;/code&gt;: The name of the table in the Supabase database where the documents will be stored.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;query_name&lt;/code&gt;: The name of the function in the database that will be used for document retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;chunk_size&lt;/code&gt;: The number of documents to be stored in each batch.
&lt;/li&gt;
&lt;/ul&gt;
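&lt;p&gt;The &lt;code&gt;chunk_size&lt;/code&gt; parameter batches the inserts so that a large document set is not written in a single request. Conceptually, the batching behaves like the hypothetical helper below (an illustration, not the library's internal code):&lt;/p&gt;

```python
def batched(items, chunk_size):
    """Yield successive batches of at most chunk_size items."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

docs = list(range(1200))  # stand-in for 1200 Document objects
batch_sizes = [len(batch) for batch in batched(docs, 500)]
print(batch_sizes)  # [500, 500, 200]
```

With `chunk_size=500`, 1200 documents are inserted in three requests rather than one.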

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_relevant_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NotionFlow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Document&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'Project Overview:\n\nNotionFlow is a comprehensive automation tool designed to streamline workflows, enhance productivity, and optimize resource allocation using Notion\'&lt;/span&gt;s versatile database and collaboration features.&lt;span class="o"&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we've queried the vector store with the input "NotionFlow" and retrieved a relevant document containing information about the project overview.&lt;/p&gt;

&lt;h3&gt;Performing Retrieval with OpenAI&lt;/h3&gt;

&lt;p&gt;With the documents stored in Supabase, we can leverage OpenAI's powerful language models for advanced processing tasks. We'll use the &lt;code&gt;ChatOpenAI&lt;/code&gt; class to interact with the OpenAI model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.output_parsers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StrOutputParser&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.runnables&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RunnablePassthrough&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hub&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pull&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rlm/rag-prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;format_docs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;n&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;rag_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;format_docs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RunnablePassthrough&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nc"&gt;StrOutputParser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the RAG pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Retriever&lt;/strong&gt;: The &lt;code&gt;retriever&lt;/code&gt; component retrieves documents from the Supabase vector store based on the input query.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt&lt;/strong&gt;: The &lt;code&gt;prompt&lt;/code&gt; component provides a structured prompt for the OpenAI language model, guiding it to generate a response based on the retrieved documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM&lt;/strong&gt;: The &lt;code&gt;llm&lt;/code&gt; component represents the large language model (LLM) from OpenAI, which processes the prompt and generates a response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Parser&lt;/strong&gt;: The &lt;code&gt;StrOutputParser&lt;/code&gt; component parses the output from the LLM and formats it as a string for further processing.&lt;/li&gt;
&lt;/ol&gt;
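&lt;p&gt;Conceptually, the first step of the chain fans the input question out to both keys of the dictionary. In plain Python it behaves roughly like this (a simplified stand-in, not LangChain internals; &lt;code&gt;fake_retriever&lt;/code&gt; is a hypothetical placeholder for the Supabase retriever):&lt;br&gt;
&lt;/p&gt;

```python
def fake_retriever(question):
    # Hypothetical stand-in for the Supabase retriever:
    # returns documents relevant to the question.
    return ["NotionFlow is an automation tool.", "It integrates with Notion."]

def format_docs(docs):
    # Join retrieved documents into a single context string.
    return "\n\n".join(docs)

def first_step(question):
    # Mirrors {"context": retriever | format_docs, "question": RunnablePassthrough()}
    return {
        "context": format_docs(fake_retriever(question)),
        "question": question,
    }

inputs = first_step("What is NotionFlow?")
```

&lt;p&gt;The resulting dictionary is what the prompt template receives before it is passed on to the LLM.&lt;/p&gt;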

&lt;p&gt;Let's query the RAG pipeline with a sample input and retrieve the generated response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;rag_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are the main features of NotionFlow?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NotionFlow features include automation triggers &lt;span class="k"&gt;for &lt;/span&gt;project creation, task completion notifications, content idea generation, content calendar updates, goal progress tracking, cross-database relationships, bug report handling, and customer feedback integration. The tool is designed to streamline workflows, enhance productivity, and improve resource allocation through automation and integration with Notion&lt;span class="s1"&gt;'s collaboration features. Its benefits include streamlining project management processes, enhancing collaboration and communication, and enabling data-driven decision-making.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this tutorial, we've demonstrated how to build a RAG pipeline using LangChain, OpenAI, and Supabase. By combining the capabilities of these tools, we can query a custom knowledge base and generate responses grounded in the retrieved documents.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/get_started/introduction" rel="noopener noreferrer"&gt;LangChain Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/integrations/retrievers/self_query/supabase_self_query#creating-a-supabase-vector-store" rel="noopener noreferrer"&gt;Creating a Supabase vector store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/templates/self-query-supabase#environment-setup" rel="noopener noreferrer"&gt;self-query-supabase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/integrations/document_loaders/notion#instructions-for-ingesting-your-own-dataset" rel="noopener noreferrer"&gt;NotionDirectoryLoader&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/integrations/document_loaders/notiondb" rel="noopener noreferrer"&gt;NotionDBLoader&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Next Steps&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The RAG pipeline we've built is a basic example of how to ingest and retrieve documents from a knowledge base. You can further enhance the pipeline by ingesting more documents, fine-tuning the chunk size, and experimenting with different prompts to generate more accurate responses. Additionally, you can explore other components and integrations available in LangChain to build more advanced language processing pipelines.&lt;/p&gt;
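&lt;p&gt;For instance, chunk-size tuning can be prototyped without any framework at all. Below is a minimal, illustrative sketch of fixed-size chunking with overlap (plain Python, not LangChain's text splitter):&lt;br&gt;
&lt;/p&gt;

```python
def chunk_text(text, chunk_size=100, overlap=20):
    """Split text into fixed-size chunks that overlap to preserve context."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_text("a" * 250, chunk_size=100, overlap=20)
```

&lt;p&gt;Smaller chunks retrieve more precisely but carry less context; the overlap keeps sentences from being cut off at chunk boundaries.&lt;/p&gt;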

&lt;h3&gt;
  
  
  Stay Connected
&lt;/h3&gt;

&lt;p&gt;Join me on LinkedIn, where I share insights and updates on AI, Automation, Productivity, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/sreejin-sreenivasan/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, if you're interested in learning more about how I'm leveraging AI for simple automations and productivity hacks, subscribe to my newsletter "Growth Journal". Be the first to receive exclusive content and stay up-to-date with the latest trends in AI and automation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sreejin.substack.com/?utm_source=navbar&amp;amp;utm_medium=web&amp;amp;r=8ikhn" rel="noopener noreferrer"&gt;Subscribe to my newsletter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Until next time, happy prompting!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Prompt Engineering for OpenAI Chat Completions</title>
      <dc:creator>sreejinsreenivasan</dc:creator>
      <pubDate>Mon, 19 Feb 2024 11:47:57 +0000</pubDate>
      <link>https://dev.to/sreejinsreenivasan/prompt-engineering-for-openai-chat-completions-4j70</link>
      <guid>https://dev.to/sreejinsreenivasan/prompt-engineering-for-openai-chat-completions-4j70</guid>
      <description>&lt;p&gt;A prompt, essentially a piece of text, is what we offer to the model to steer its response. The quality and clarity of the prompt directly influence the accuracy and relevance of the generated responses. With well-crafted prompts, developers can shape the model's understanding, context, and expected output, leading to more precise and useful responses while saving time and effort in the interaction process.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Setting Up&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To get started with prompting, you'll need to set up the OpenAI Python library. Install the library using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the library is installed, the next step is to import the OpenAI library and set your API key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Quickstart&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's dive right in with a simple example. Suppose we want a joke about cats. Our prompt could be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Tell me a joke about cats&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the OpenAI library, we can obtain a response from the model based on this prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Why don't cats play poker in the wild? Too many cheetahs!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! We've successfully obtained a joke about cats using a simple prompt.&lt;/p&gt;

&lt;p&gt;Let's make the prompt dynamic by adding a placeholder for the topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;topic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dog&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Tell me a joke about {topic}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What is happening here is that we are using the &lt;code&gt;format&lt;/code&gt; method to replace the placeholder &lt;code&gt;{topic}&lt;/code&gt; with the value of the &lt;code&gt;topic&lt;/code&gt; variable. This results in the following prompt:&lt;/p&gt;

&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="s2"&gt;"Tell me a joke about dog"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
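&lt;p&gt;Equivalently, an f-string substitutes the value directly at the moment the string is created:&lt;br&gt;
&lt;/p&gt;

```python
topic = "dog"

# f-strings interpolate the variable when the string is defined,
# so no separate format() call is needed.
prompt = f"Tell me a joke about {topic}"
```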



&lt;p&gt;A helper function can be created to simplify the process of obtaining chat completions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_chat_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can obtain a response based on this dynamic prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;get_chat_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"Why did the dog sit in the shade? Because he didn't want to be a hot dog!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hilarious!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Fine-tuning Model Parameters&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenAI allows control over model parameters such as temperature and top-p, which influence the randomness and creativity of the generated responses. Lower temperature values produce more focused, deterministic responses, while higher values foster creativity and variety.&lt;/p&gt;
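&lt;p&gt;To build intuition for what temperature does, here is a small self-contained sketch of temperature-scaled softmax, the mechanism language models use to turn token scores into probabilities (the logit values are illustrative):&lt;br&gt;
&lt;/p&gt;

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw token scores to probabilities, rescaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Lower temperature concentrates probability on the top token;
# higher temperature flattens the distribution.
probs_low = softmax_with_temperature([2.0, 1.0, 0.5], temperature=0.2)
probs_high = softmax_with_temperature([2.0, 1.0, 0.5], temperature=2.0)
```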

&lt;p&gt;Let's modify our function to include a temperature parameter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_chat_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Increasing the temperature may produce different responses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;get_chat_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"Why did the cat sit on the computer? Because it wanted to keep an eye on the mouse!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many other parameters that can be used to fine-tune the model's behavior, such as &lt;code&gt;max_tokens&lt;/code&gt;, &lt;code&gt;top_p&lt;/code&gt;, and &lt;code&gt;presence_penalty&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Use &lt;code&gt;max_tokens&lt;/code&gt; to control the length of the response. For example, setting &lt;code&gt;max_tokens=20&lt;/code&gt; will limit the response to 20 tokens. This is useful for preventing the model from generating overly long responses and for saving resources.&lt;/p&gt;
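&lt;p&gt;One convenient pattern is to assemble the request arguments in a small helper so that optional parameters like &lt;code&gt;max_tokens&lt;/code&gt; are only sent when set (&lt;code&gt;build_chat_request&lt;/code&gt; is a hypothetical helper, not part of the OpenAI library):&lt;br&gt;
&lt;/p&gt;

```python
def build_chat_request(prompt, model="gpt-3.5-turbo", temperature=0, max_tokens=None):
    """Assemble keyword arguments for client.chat.completions.create()."""
    request = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    if max_tokens is not None:
        request["max_tokens"] = max_tokens  # cap the response length
    return request

params = build_chat_request("Tell me a joke about cats", max_tokens=20)
```

&lt;p&gt;The returned dictionary can then be unpacked into the API call with &lt;code&gt;client.chat.completions.create(**params)&lt;/code&gt;.&lt;/p&gt;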

&lt;h3&gt;
  
  
  &lt;strong&gt;Elements of a Good Prompt&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A good prompt is clear, concise, and specific, providing enough context for the model to understand the user's intent and generate a relevant response. Here are key elements to consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Instruction&lt;/strong&gt;: Clearly state the task or instruction you want the model to perform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context&lt;/strong&gt;: Provide external information or additional context to steer the model towards better responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input Data&lt;/strong&gt;: Offer the input or question for which you seek a response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Indicator&lt;/strong&gt;: Specify the type or format of the desired output.&lt;/li&gt;
&lt;/ol&gt;
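&lt;p&gt;Putting the four elements together, a prompt might be assembled like this (the contents are purely illustrative):&lt;br&gt;
&lt;/p&gt;

```python
# The four elements of a good prompt, combined into one string (illustrative):
instruction = "Summarize the customer review below in one sentence."
context = "The review is for a wireless keyboard sold on our online store."
input_data = 'Review: "The keys feel great, but the battery barely lasts a week."'
output_indicator = "Respond as a single bullet point starting with '-'."

prompt = "\n".join([instruction, context, input_data, output_indicator])
```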

&lt;h3&gt;
  
  
  &lt;strong&gt;Crafting Complex Prompts&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;These examples are very basic. To generate more complex and specific responses, we can use more advanced prompts and additional context to guide the model's understanding.&lt;/p&gt;

&lt;p&gt;Let's say I want to generate a weekly plan based on specific goals.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Efficiently plan the week to achieve all goals while maximizing productivity and allowing time for relaxation.&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s"&gt;
Goals of the week:&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s"&gt;
1. Complete development sprint for project X.&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s"&gt;2. Lead team meeting to discuss project updates and assign tasks.&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s"&gt;3. Research and prototype innovative features for product Y.&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s"&gt;4. Attend networking event to expand professional network.&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s"&gt;5. Review industry trends and advancements for potential business opportunities.&lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see what the model comes up with for this prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;get_chat_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;To efficiently plan the week and achieve all goals &lt;span class="k"&gt;while &lt;/span&gt;maximizing productivity and allowing &lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;relaxation, consider the following schedule:

Monday:
- Start the day by reviewing project X and outlining tasks &lt;span class="k"&gt;for &lt;/span&gt;the development sprint.
- Lead team meeting &lt;span class="k"&gt;in &lt;/span&gt;the afternoon to discuss project updates and assign tasks accordingly.
- Dedicate &lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="k"&gt;in &lt;/span&gt;the evening to research and prototype innovative features &lt;span class="k"&gt;for &lt;/span&gt;product Y.

Tuesday:
- Focus on completing tasks &lt;span class="k"&gt;for &lt;/span&gt;project X development sprint.
- Attend the networking event &lt;span class="k"&gt;in &lt;/span&gt;the evening to &lt;span class="nb"&gt;expand &lt;/span&gt;professional network.

Wednesday:
- Review industry trends and advancements &lt;span class="k"&gt;in &lt;/span&gt;the morning &lt;span class="k"&gt;for &lt;/span&gt;potential business opportunities.
- Continue working on project X development sprint &lt;span class="k"&gt;in &lt;/span&gt;the afternoon.
- Allocate &lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="k"&gt;in &lt;/span&gt;the evening &lt;span class="k"&gt;for &lt;/span&gt;relaxation and self-care.

Thursday:
- Finalize development sprint &lt;span class="k"&gt;for &lt;/span&gt;project X and prepare &lt;span class="k"&gt;for &lt;/span&gt;presentation or review.
- Work on prototyping innovative features &lt;span class="k"&gt;for &lt;/span&gt;product Y &lt;span class="k"&gt;in &lt;/span&gt;the afternoon.

Friday:
- Present completed development sprint &lt;span class="k"&gt;for &lt;/span&gt;project X to the team.
- Follow up with any additional tasks or updates from the team meeting.
- Reflect on the week&lt;span class="s1"&gt;'s accomplishments and plan for the following week.

Throughout the week, ensure to prioritize tasks, allocate time for breaks and relaxation, and maintain open communication with the team. By organizing tasks and allocating time effectively, you can achieve all goals while also maintaining a healthy work-life balance.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's nice! The model has generated a detailed plan for the week based on the provided goals. However, the output still has a conversational, ChatGPT-like style to it.&lt;/p&gt;

&lt;p&gt;The prompt provides a clear instruction, context, and input data. To get a more structured output, we can add an output indicator to the prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Few Shot Prompting&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Few-shot learning is a technique that allows models to learn from a small number of examples or instructions. This can be particularly useful when you want to guide the model's understanding with specific examples or context.&lt;/p&gt;
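&lt;p&gt;Besides embedding examples inside a single prompt, few-shot examples can also be supplied as prior user/assistant turns in the &lt;code&gt;messages&lt;/code&gt; list. Here is a sentiment-classification sketch (the examples and labels are illustrative):&lt;br&gt;
&lt;/p&gt;

```python
# Few-shot prompting via example exchanges in the messages list.
# The final user turn is the actual input we want classified.
few_shot_messages = [
    {"role": "system", "content": "You classify sentiment as positive or negative."},
    {"role": "user", "content": "I loved the movie!"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The food was awful."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "What a fantastic concert!"},
]
```

&lt;p&gt;Passing this list as &lt;code&gt;messages&lt;/code&gt; lets the model infer the pattern from the example turns before answering the final question.&lt;/p&gt;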

&lt;p&gt;Let's modify the prompt to include an output indicator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Efficiently plan the week to achieve all goals while maximizing productivity and allowing time for relaxation.

Goals of the week:
1. Generate an end-to-end plan for automating task creation in Notion using LLMs.
2. Conduct necessary research before diving into automation.
3. Consolidate the plan and generate sprints and tasks to complete the automation.
4. Learn Prompt Engineering for developers using OpenAI.
5. Complete development sprint for project X.
6. Lead team meeting to discuss project updates and assign tasks.
7. Research and prototype innovative features for product Y.
8. Attend networking event to expand professional network.
9. Review industry trends and advancements for potential business opportunities.

Output Format:

9:00 AM - 10:30 AM: Task 1
10:30 AM - 11:00 AM: Break
11:00 AM - 12:30 PM: Task 2
12:30 PM - 1:30 PM: Lunch
1:30 PM - 3:00 PM: Task 3
3:00 PM - 3:30 PM: Break
3:30 PM - 5:00 PM: Task 4

Example Outputs:
Monday, 12/02/2024

- 9:00 AM - 10:30 AM: Brainstorming session for innovative Notion automation ideas
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Research session on latest AI advancements for task automation
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Prototyping session for Notion automation features
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Review and refine Notion automation plan
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's see what the model comes up with this time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;**&lt;/span&gt;Tuesday, 12/03/2024&lt;span class="k"&gt;**&lt;/span&gt;

- 9:00 AM - 10:30 AM: Learn Prompt Engineering &lt;span class="k"&gt;for &lt;/span&gt;developers using OpenAI
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Continue learning Prompt Engineering &lt;span class="k"&gt;for &lt;/span&gt;developers
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Apply Prompt Engineering knowledge to project work
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Attend team meeting to discuss project updates and assign tasks

&lt;span class="k"&gt;**&lt;/span&gt;Wednesday, 12/04/2024&lt;span class="k"&gt;**&lt;/span&gt;

- 9:00 AM - 10:30 AM: Research and prototype innovative features &lt;span class="k"&gt;for &lt;/span&gt;product Y
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Continue research and prototyping &lt;span class="k"&gt;for &lt;/span&gt;product Y
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Networking event to &lt;span class="nb"&gt;expand &lt;/span&gt;professional network
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Review industry trends and advancements &lt;span class="k"&gt;for &lt;/span&gt;potential business opportunities

&lt;span class="k"&gt;**&lt;/span&gt;Thursday, 12/05/2024&lt;span class="k"&gt;**&lt;/span&gt;

- 9:00 AM - 10:30 AM: Generate an end-to-end plan &lt;span class="k"&gt;for &lt;/span&gt;automating task creation &lt;span class="k"&gt;in &lt;/span&gt;Notion using LLMs
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Conduct necessary research before diving into automation
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Consolidate the plan and generate sprints and tasks to &lt;span class="nb"&gt;complete &lt;/span&gt;the automation
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Continue working on automation plan and tasks

&lt;span class="k"&gt;**&lt;/span&gt;Friday, 12/06/2024&lt;span class="k"&gt;**&lt;/span&gt;

- 9:00 AM - 10:30 AM: Complete development sprint &lt;span class="k"&gt;for &lt;/span&gt;project X
- 10:30 AM - 11:00 AM: Break
- 11:00 AM - 12:30 PM: Continue development sprint &lt;span class="k"&gt;for &lt;/span&gt;project X
- 12:30 PM - 1:30 PM: Lunch
- 1:30 PM - 3:00 PM: Finish up any pending tasks &lt;span class="k"&gt;for &lt;/span&gt;project X
- 3:00 PM - 3:30 PM: Break
- 3:30 PM - 5:00 PM: Wrap up any remaining tasks and prepare &lt;span class="k"&gt;for &lt;/span&gt;next week&lt;span class="s1"&gt;'s goals
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that's some detailed planning! The model generated a full weekly schedule from the specific goals provided in the prompt.&lt;/p&gt;

&lt;p&gt;In other words, few-shot learning worked: the example schedules guided the model's understanding of the desired format, producing specific, relevant outputs.&lt;/p&gt;
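&lt;p&gt;As a rough illustration, here's a minimal Python sketch of how such a few-shot prompt could be assembled for the OpenAI Chat Completions API. The template text, example schedule, and goals below are placeholder assumptions for demonstration, not the exact prompt used above.&lt;/p&gt;

```python
# Minimal sketch of few-shot prompt assembly. The template, example
# schedule, and goals are illustrative placeholders, not the article's
# exact prompt.

def build_few_shot_messages(template, examples, goals):
    """Return a chat message list: system template, example outputs,
    then the user's goals for the week."""
    messages = [{"role": "system", "content": template}]
    for example in examples:
        # Each example output is shown to the model as a prior assistant turn.
        messages.append({"role": "assistant", "content": example})
    goal_lines = "\n".join(f"- {goal}" for goal in goals)
    messages.append({"role": "user", "content": "My goals this week:\n" + goal_lines})
    return messages

messages = build_few_shot_messages(
    template="You are a planner. Fill the 9-to-5 schedule format with concrete tasks.",
    examples=["Monday, 12/02/2024\n- 9:00 AM - 10:30 AM: Brainstorming session"],
    goals=["Learn Prompt Engineering for developers",
           "Complete development sprint for project X"],
)
# The resulting list can be passed as `messages` to the Chat Completions API,
# e.g. client.chat.completions.create(model=..., messages=messages).
```

&lt;p&gt;The key design choice is that the examples are supplied as prior assistant turns, so the model treats them as outputs it has already produced in the desired format.&lt;/p&gt;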

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion and What's Next&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are other advanced techniques, such as chain-of-thought (CoT) and Tree of Thoughts (ToT), that can further enhance the capabilities of these models across a wide range of applications, from simple conversations to complex problem-solving tasks.&lt;/p&gt;
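&lt;p&gt;To give a quick taste of chain-of-thought prompting (a hypothetical example, not drawn from the guide): the core idea is simply to ask the model to reason step by step before answering.&lt;/p&gt;

```python
# Hypothetical chain-of-thought (CoT) prompt: appending an instruction to
# reason step by step tends to elicit intermediate reasoning before the
# final answer.
question = (
    "I have three 1.5-hour tasks, a 1-hour lunch, and two 30-minute breaks. "
    "How many hours of my 8-hour workday remain unscheduled?"
)
cot_prompt = question + "\nLet's think step by step."
```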

&lt;p&gt;I encourage you to read the source material at &lt;a href="https://www.promptingguide.ai/techniques" rel="noopener noreferrer"&gt;PromptingGuide.ai&lt;/a&gt; to delve deeper into prompting techniques.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Stay Connected&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Join me on LinkedIn, where I share insights and updates on AI, Automation, Productivity, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/sreejin-sreenivasan/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, if you're interested in learning more about how I'm leveraging AI for simple automations and productivity hacks, subscribe to my newsletter, "Growth Journal". Be the first to receive exclusive content and stay up to date with the latest trends in AI and automation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sreejin.substack.com/?utm_source=navbar&amp;amp;utm_medium=web&amp;amp;r=8ikhn" rel="noopener noreferrer"&gt;Subscribe to my newsletter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Until next time, happy prompting!&lt;/p&gt;

</description>
      <category>llm</category>
      <category>openai</category>
      <category>promptengineering</category>
    </item>
  </channel>
</rss>
