<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tanya rai</title>
    <description>The latest articles on DEV Community by tanya rai (@tanyarai).</description>
    <link>https://dev.to/tanyarai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1193986%2F27966e2f-5025-414f-92b5-a01c0d81d951.jpeg</url>
      <title>DEV Community: tanya rai</title>
      <link>https://dev.to/tanyarai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tanyarai"/>
    <language>en</language>
    <item>
      <title>Prototype to Prod: Any model, any modality 🔥</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Sat, 20 Jan 2024 19:30:48 +0000</pubDate>
      <link>https://dev.to/tanyarai/prototype-to-prod-any-model-any-modality-43n0</link>
      <guid>https://dev.to/tanyarai/prototype-to-prod-any-model-any-modality-43n0</guid>
      <description>&lt;h1&gt;
  
  
  AIConfig
&lt;/h1&gt;

&lt;p&gt;AIConfig is an open-source platform to go from prototype to production with any model, any modality. &lt;/p&gt;

&lt;p&gt;Start with your prompts in AIConfig Editor, a locally hosted AI playground.&lt;/p&gt;

&lt;p&gt;Prompts are saved to an aiconfig.json file that you can use in code with &lt;code&gt;config.run('prompt_1')&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Open AIConfig Editor whenever you want to edit your prompts and visually inspect their outputs.&lt;/p&gt;
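
The prompt-to-code flow above can be sketched in plain Python. This is a toy stand-in for the real SDK (whose API may differ); the config structure and the echoed "model output" are assumptions for illustration only:

```python
import json

# A minimal aiconfig.json-style document (illustrative structure, not the official schema).
AICONFIG = json.loads("""
{
  "name": "demo",
  "prompts": [
    {"name": "prompt_1",
     "input": "Summarize: {{text}}",
     "metadata": {"model": "gpt-4"}}
  ]
}
""")

class MiniConfig:
    """Toy stand-in for the AIConfig runtime: look up a prompt by name and 'run' it."""

    def __init__(self, doc):
        self.prompts = {p["name"]: p for p in doc["prompts"]}

    def run(self, name, params=None):
        prompt = self.prompts[name]
        text = prompt["input"]
        for key, value in (params or {}).items():
            text = text.replace("{{" + key + "}}", value)
        # A real runtime would call the model named in prompt["metadata"]["model"];
        # here we just echo the resolved prompt.
        return f"[{prompt['metadata']['model']}] {text}"

config = MiniConfig(AICONFIG)
print(config.run("prompt_1", {"text": "AIConfig separates prompts from code."}))
# → [gpt-4] Summarize: AIConfig separates prompts from code.
```

The point of the pattern: the prompt text and model settings live in the JSON artifact, so the application code only ever references prompts by name.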

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editor for Prompts:&lt;/strong&gt; Prototype and quickly iterate on your prompts and model settings with AIConfig Editor, a local playground.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model-agnostic and multimodal SDK:&lt;/strong&gt; Python &amp;amp; Node SDKs to use aiconfig in your application code. AIConfig is designed to be model-agnostic and multi-modal, so you can extend it to work with any generative AI model, including text, image and audio.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompts as Configs:&lt;/strong&gt; A standardized JSON format to store prompts and model settings in source control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bring your own model:&lt;/strong&gt; Extend AIConfig to work with any model and your own endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborative Development:&lt;/strong&gt; AIConfig lets different people work on prompts and app development in parallel, collaborating by sharing the aiconfig artifact.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Get Started in Seconds!
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Star our repo on Github: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pip3 install python-aiconfig&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;export OPENAI_API_KEY='your-key-here'&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;aiconfig edit&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;This launches the AI playground for you!&lt;/strong&gt; Make sure you specify your API keys for model providers (e.g. OpenAI, HuggingFace, Anyscale) as environment variables. &lt;/p&gt;

&lt;p&gt;Bring your own self-hosted or fine-tuned model by writing a model parser so the editor can support it! &lt;/p&gt;

&lt;p&gt;Join our Discord to give us feedback and chat with us: &lt;a href="https://discord.com/invite/xBhNKTetGx" rel="noopener noreferrer"&gt;https://discord.com/invite/xBhNKTetGx&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
    <item>
      <title>AI Playground you can run from your laptop 🚀</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Sat, 20 Jan 2024 06:41:36 +0000</pubDate>
      <link>https://dev.to/tanyarai/ai-playground-you-can-run-from-your-laptop-2ee5</link>
      <guid>https://dev.to/tanyarai/ai-playground-you-can-run-from-your-laptop-2ee5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz3zcxuerbyh4mmamoy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz3zcxuerbyh4mmamoy1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every developer building with AI has felt like this. &lt;/p&gt;

&lt;p&gt;Wrangling APIs for different models and moving from one-off playgrounds to production code has become one of the biggest pains of generative AI development. &lt;/p&gt;

&lt;p&gt;But what if there was a way to streamline all this complexity?&lt;/p&gt;

&lt;p&gt;Introducing &lt;strong&gt;AIConfig Editor - an AI Playground that you can run directly from your laptop.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This playground is a one-stop solution to manage multiple AI models, effortlessly compare their results, and smoothly transition your hard work from the prototype stage to production.&lt;/p&gt;

&lt;p&gt;All your work is saved to a JSON config that can then be used in code with the AIConfig SDK.&lt;/p&gt;

&lt;p&gt;Open the editor whenever you want to edit the prompts in your config - no need to touch your code or hand-edit the JSON!&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it now!
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;pip3 install python-aiconfig&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;&lt;code&gt;export OPENAI_API_KEY='your-key'&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;aiconfig edit&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Please star our repo: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcpwcpz44l36f9blqj3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcpwcpz44l36f9blqj3o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use models from OpenAI, HuggingFace, Anyscale, and more&lt;/li&gt;
&lt;li&gt;Supports text, image, and audio models (local or remote)&lt;/li&gt;
&lt;li&gt;Run multiple models on the same prompt&lt;/li&gt;
&lt;li&gt;Create prompt templates&lt;/li&gt;
&lt;li&gt;Chain models in the UI&lt;/li&gt;
&lt;li&gt;All prompts and settings from the UI are saved to a config that can be used with the SDK in code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Github
&lt;/h2&gt;

&lt;p&gt;Please give us a star on Github to support our open-source project: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Happy coding!!&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>python</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Multimodal Chaining: GPT4 with ElevenLabs 🔊</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Wed, 03 Jan 2024 15:01:18 +0000</pubDate>
      <link>https://dev.to/tanyarai/multimodal-chaining-gpt4-with-elevenlabs-1k8o</link>
      <guid>https://dev.to/tanyarai/multimodal-chaining-gpt4-with-elevenlabs-1k8o</guid>
      <description>&lt;h3&gt;
  
  
  ⭐️ Support our open-source project: &lt;a href="https://github.com/lastmile-ai/aiconfig/" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig/&lt;/a&gt; ⭐️
&lt;/h3&gt;

&lt;p&gt;Multimodal chaining, the process of chaining AI models of different modalities, is a powerful tool for developers. However, it can also be complex and challenging: switching between different models, managing their parameters, and ensuring they work seamlessly together can be daunting. This complexity can hinder the creative process, slowing down the development of your AI applications.&lt;/p&gt;

&lt;p&gt;Enter AI Workbooks, an open-source platform designed to simplify this process. And now, we're excited to announce the addition of ElevenLabs Multilingual V2 to the models you can use with AI Workbooks!&lt;/p&gt;

&lt;h2&gt;
  
  
  ElevenLabs: Text-to-Speech AI
&lt;/h2&gt;

&lt;p&gt;ElevenLabs Multilingual V2 is one of the most advanced text-to-speech AI solutions available today. It can recognize nearly 30 written languages and translate them into speech. The generated speech retains the speaker's unique vocal identity and accent across all languages, ensuring a consistent and authentic user experience for your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Workbooks: Your AI Development Platform
&lt;/h2&gt;

&lt;p&gt;AI Workbooks is a multi-modal platform for developers to experiment with generative AI models before going to production. It provides a centralized environment where you can manage all your models, their parameters, and even version control your experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo: Build a Research Summary Podcast
&lt;/h2&gt;

&lt;p&gt;Let's take a look at how multimodal chaining with AI Workbooks, GPT-4, and ElevenLabs Multilingual V2 can work together to create a podcast that summarizes research papers.&lt;/p&gt;

&lt;p&gt;First, GPT-4 is used to generate a digestible summary of a research paper abstract. This model is capable of understanding complex academic language and translating it into a more accessible format.&lt;/p&gt;

&lt;p&gt;Next, ElevenLabs Multilingual V2 takes this summary and converts it into speech. The result is a podcast soundbite that provides a clear, concise summary of the research paper, ready for your audience to listen to.&lt;/p&gt;

&lt;p&gt;This is just one example of the power of multimodal chaining with AI Workbooks. The possibilities are endless!&lt;/p&gt;

&lt;h2&gt;
  
  
  Ready to Get Started?
&lt;/h2&gt;

&lt;p&gt;Check out the AI Workbook (clone to edit): &lt;a href="https://lastmileai.dev/workbooks/clqkgdhb400ebqw1gm9rura3r" rel="noopener noreferrer"&gt;https://lastmileai.dev/workbooks/clqkgdhb400ebqw1gm9rura3r&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⭐️ Don't forget to give us a star on Github! &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Playground for Generative AI</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Thu, 21 Dec 2023 00:06:46 +0000</pubDate>
      <link>https://dev.to/tanyarai/playground-for-generative-ai-40nc</link>
      <guid>https://dev.to/tanyarai/playground-for-generative-ai-40nc</guid>
      <description>&lt;p&gt;Today I came across a tweet from &lt;a href="https://twitter.com/umesh_ai" rel="noopener noreferrer"&gt;@umesh_ai&lt;/a&gt; that shared a prompt for "ultra-detailed sculpture" using DALL-E3. You had to replace the text for object to get your results. It was incredible to see what people created!! But it wasn't super easy to keep trying different objects...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu7dc1wtar7thejt2g5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu7dc1wtar7thejt2g5d.png" alt="thread" width="800" height="1120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wanted to make it easy for people to enter an object and run the prompt with DALL-E 3 - maybe even try different models too.&lt;/p&gt;

&lt;p&gt;I created an AI Workbook that takes in the prompt from the tweet so you can play around with it! &lt;/p&gt;
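
Parameterizing a prompt like this is simple once the object is a template variable. Here's a plain-Python sketch; the template wording and the &lt;code&gt;{{object}}&lt;/code&gt; placeholder are illustrative, not the original tweet's exact text:

```python
# Illustrative template in the spirit of the "ultra-detailed sculpture" prompt;
# the exact wording of the original tweet is not reproduced here.
TEMPLATE = (
    "An ultra-detailed marble sculpture of a {{object}}, "
    "studio lighting, intricate carving, 8k"
)

def resolve(template: str, **params: str) -> str:
    """Substitute {{name}} placeholders with the given parameters."""
    for name, value in params.items():
        template = template.replace("{{" + name + "}}", value)
    return template

# The resolved prompt can then be sent to DALL-E 3 (or any image model).
print(resolve(TEMPLATE, object="hedgehog"))
```

Swapping in a new object is now a one-argument change instead of editing the prompt text by hand.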

&lt;p&gt;Check out this marble hedgehog - &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca4gyoy2h4c2f0ydu7tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca4gyoy2h4c2f0ydu7tv.png" alt="hedgehog" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try it yourself: &lt;a href="https://lastmileai.dev/workbooks/clqectmf0007iqrce0hkh0nm2" rel="noopener noreferrer"&gt;https://lastmileai.dev/workbooks/clqectmf0007iqrce0hkh0nm2&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Workbooks: A Developer's Sandbox for AI
&lt;/h2&gt;

&lt;p&gt;AI Workbooks is an open-source, multi-modal platform that serves as a sandbox for developers to experiment with AI models. It's a place where you can parameterize prompts for DALL-E 3 and other models, fine-tuning the inputs to achieve the desired artistic output.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Developer's Workflow with the AI Workbook
&lt;/h3&gt;

&lt;p&gt;The AI Workbook is designed with a developer-first approach. Its intuitive UI makes it accessible for those new to AI, while still offering the depth required by experienced developers. The ability to switch between AI models, compare outputs, and iterate on prompts is akin to refining code—each adjustment brings you closer to the optimal solution.&lt;/p&gt;

&lt;p&gt;When you're happy with your prompts and model settings, you can just download the AIConfig from the workbook - a JSON config that you can easily use in your application code. &lt;/p&gt;

&lt;p&gt;Check out prompt templates and the AIConfig framework here: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Explore Multi-Modal Capabilities
&lt;/h3&gt;

&lt;p&gt;The AI Workbook's potential extends beyond visual art. It's a multi-modal platform where you can experiment with text, image, and audio models, including &lt;strong&gt;the latest models like GPT-4V and DALL-E 3&lt;/strong&gt;, to name a couple. This opens up possibilities for so many applications. &lt;/p&gt;

&lt;p&gt;This multi-modal approach encourages developers to explore the intersection of different AI capabilities, fostering innovation and inspiring new ways to engage with technology.&lt;/p&gt;

&lt;p&gt;The AI Workbook is a testament to the power of open-source tools in democratizing AI and enabling developers to explore new frontiers of AI.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⭐️ Support Open-Source ⭐️
&lt;/h3&gt;

&lt;p&gt;Would really appreciate if you could &lt;strong&gt;star our &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;open-source project AIConfig&lt;/a&gt;&lt;/strong&gt; - the data model behind this workbook! &lt;/p&gt;

&lt;p&gt;Github: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>opensource</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Master Prompt Engineering with OpenAI 🧠</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Tue, 19 Dec 2023 22:09:33 +0000</pubDate>
      <link>https://dev.to/tanyarai/master-prompt-engineering-with-openai-1he3</link>
      <guid>https://dev.to/tanyarai/master-prompt-engineering-with-openai-1he3</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ey8l9ynuvnmw33mgl6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ey8l9ynuvnmw33mgl6z.png" alt="meme" width="577" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;"Prompt Engineering" sometimes feels like persuasive penmanship in the guise of yet another phrase in the generative AI lexicon.&lt;/p&gt;

&lt;p&gt;But last week, OpenAI dropped their &lt;a href="https://platform.openai.com/docs/guides/prompt-engineering/six-strategies-for-ge" rel="noopener noreferrer"&gt;official Prompt Engineering Guide&lt;/a&gt; and it's been quite the hit with developers experimenting with generative AI. &lt;/p&gt;

&lt;p&gt;Prompt engineering isn't just about throwing words into the AI abyss and waiting for wisdom to echo back. It's about unlocking the potential of these very, very powerful AI models. It's nuanced, technical, and absolutely critical.&lt;/p&gt;

&lt;p&gt;Why does it matter? Because the right prompt can mean the difference between gibberish and genius. It's the difference between a model spitting out a generic response and one that tailors its knowledge to your specific needs.&lt;/p&gt;

&lt;p&gt;OpenAI's Six Strategies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write clear instructions&lt;/li&gt;
&lt;li&gt;Provide reference text&lt;/li&gt;
&lt;li&gt;Split complex tasks into simpler subtasks&lt;/li&gt;
&lt;li&gt;Give the model time to "think"&lt;/li&gt;
&lt;li&gt;Use external tools&lt;/li&gt;
&lt;li&gt;Test changes systematically&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Okay, so maybe these seem generic, but the guide does an extensive job of providing multiple tactics and examples for each strategy. &lt;/p&gt;

&lt;p&gt;So how do you apply these strategies out-of-the-box?&lt;/p&gt;

&lt;p&gt;Our team at &lt;a href="https://lastmileai.dev/" rel="noopener noreferrer"&gt;LastMile AI&lt;/a&gt; built a Streamlit app powered by AIConfig that lets you experiment with OpenAI's strategies on your prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup7ks3jby1rj5aqro7ye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup7ks3jby1rj5aqro7ye.png" alt="streamlit" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 Streamlit App: &lt;a href="https://openai-prompt-guide.streamlit.app/" rel="noopener noreferrer"&gt;https://openai-prompt-guide.streamlit.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We've created prompt templates for each strategy and stored them in an AIConfig.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This config isn't limited to just one model; it's set up to handle a stable of models: Gemini, GPT-4, GPT-3.5-turbo, and PaLM. And while the Streamlit app currently leverages GPT-3.5-turbo, the AIConfig can work with any of these models and more.&lt;/p&gt;
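
Model-agnosticism here just means the prompt is stored once and the model is a swappable parameter. A stubbed sketch of that routing (the "clients" below are placeholders, not real API calls):

```python
from typing import Callable, Dict

# Placeholder "clients": in a real setup each would call its provider's API.
def fake_gpt4(prompt: str) -> str:
    return "gpt-4 answer to: " + prompt

def fake_gemini(prompt: str) -> str:
    return "gemini answer to: " + prompt

MODELS: Dict[str, Callable[[str], str]] = {
    "gpt-4": fake_gpt4,
    "gemini-pro": fake_gemini,
}

def run_prompt(prompt: str, model: str) -> str:
    """Route one stored prompt to whichever model the config names."""
    return MODELS[model](prompt)

for name in MODELS:
    print(run_prompt("Write clear instructions for brewing tea.", name))
```

Switching models becomes a config change, not a code change.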

&lt;p&gt;🔗 &lt;a href="https://github.com/tanya-rai-lm/streamlit_apps/blob/main/prompt-engineering-guide/openai_prompt_guide.aiconfig.json" rel="noopener noreferrer"&gt;Prompt Templates (AIConfig)&lt;/a&gt;&lt;br&gt;
🔗 &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;AIConfig Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, why AIConfig? &lt;/p&gt;

&lt;p&gt;Because it's about more than just managing prompts. It's about creating a reproducible, shareable, and systematic approach to prompt engineering. It's about making sure that when you find that perfect prompt, you can come back to it, time and time again, without sifting through the digital detritus of your past attempts.&lt;/p&gt;

&lt;p&gt;⭐️ Ready to take control of your prompts? Check out AIConfig. &lt;/p&gt;

&lt;p&gt;Please give our AIConfig repo a star to support the project! &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>devops</category>
      <category>news</category>
    </item>
    <item>
      <title>Master LLM Hallucinations 💭</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Fri, 15 Dec 2023 22:12:59 +0000</pubDate>
      <link>https://dev.to/tanyarai/master-llm-hallucinations-2210</link>
      <guid>https://dev.to/tanyarai/master-llm-hallucinations-2210</guid>
      <description>&lt;p&gt;Building with AI and LLMs is now a must-know for every developer. Every application is trying to integrate AI models. But hallucinations -- the phenomenon of AI models generating incorrect or unverified information -- are still an unsolved problem. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;"Ughh ChatGPT - I told you to NOT make stuff up!"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcslqlgqcankv6pa1zjrl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcslqlgqcankv6pa1zjrl.jpeg" alt="meme" width="750" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Andrej Karpathy recently shared his take on hallucinations on &lt;a href="https://x.com/karpathy/status/1733299213503787018?s=20" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The LLM has no "hallucination problem". Hallucination is not a bug, it is LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it." &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;So how do we fix it?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/pdf/2309.11495.pdf" rel="noopener noreferrer"&gt;Chain-of-Verification (CoVe)&lt;/a&gt;, a technique introduced by researchers at Meta, is one way. Let's dive into the high-level process of CoVe and then explore how we implemented a CoVe prompt template using AIConfig that you can use to reduce hallucinations in your LLM-powered apps. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Chain-of-Verification (CoVe) Process 🔗
&lt;/h2&gt;

&lt;p&gt;As documented in the white paper, the process involves 4 crucial steps:&lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Generate Baseline:&lt;/strong&gt; Given a query, the Large Language Model (LLM) generates a response.&lt;br&gt;
2️⃣ &lt;strong&gt;Plan Verification(s):&lt;/strong&gt; With the query and baseline response, the system formulates a list of verification questions. These would aid in analyzing potential inaccuracies within the original response.&lt;br&gt;
3️⃣ &lt;strong&gt;Execute Verification(s):&lt;/strong&gt; Each verification question is answered, and then cross-checked against the original response to discern inconsistencies or flaws.&lt;br&gt;
4️⃣ &lt;strong&gt;Generate Final Response:&lt;/strong&gt; If inconsistencies are found, a revised response is generated, factoring in the results from the verification process.&lt;/p&gt;
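
The four steps above can be sketched as a pipeline of LLM calls. The &lt;code&gt;ask&lt;/code&gt; stub below is a stand-in for a real model invocation (e.g. via AIConfig); the prompt wording is illustrative, not the paper's exact prompts:

```python
def ask(prompt: str) -> str:
    """Stub for an LLM call; replace with a real model invocation."""
    return f"answer({prompt})"

def chain_of_verification(query: str) -> str:
    # 1. Generate a baseline response to the query.
    baseline = ask(query)
    # 2. Plan verification questions about the baseline.
    questions = [ask(f"What should we verify about: {baseline}?")]
    # 3. Execute each verification question independently.
    findings = [ask(q) for q in questions]
    # 4. Generate a final response that factors in the findings.
    return ask(f"Revise '{baseline}' given findings: {findings}")

print(chain_of_verification("Name three politicians born in New York."))
```

The key design choice: step 3 answers each verification question in isolation, so the verifier isn't biased by the baseline it is checking.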

&lt;h2&gt;
  
  
  Integrate CoVe into your App with AIConfig💡
&lt;/h2&gt;

&lt;p&gt;We've brought the CoVe technique to life using AIConfig, streamlining the process to help reduce hallucinations in your LLM applications.&lt;/p&gt;

&lt;p&gt;Using AIConfig, we can separate the core application logic from the model components (prompts, model routing parameters, etc.). Here's what the prompt template looks like:&lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;GPT4 + Baseline Generation prompt:&lt;/strong&gt; This sets the foundation by generating the initial response using GPT4.&lt;br&gt;
2️⃣ &lt;strong&gt;GPT4 + Verification prompt:&lt;/strong&gt; This prompt creates a series of verification questions based on the initial response.&lt;br&gt;
3️⃣ &lt;strong&gt;GPT4 + Final Response Generation prompt:&lt;/strong&gt; Leveraging the findings from the verification stage, this prompt generates a final, more reliable response.&lt;/p&gt;

&lt;p&gt;🔗 AIConfig CoVE: &lt;a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to see it in action? 👀 Try out our demo in Streamlit!&lt;/p&gt;

&lt;p&gt;🎁 Streamlit App: &lt;a href="https://chain-of-verification.streamlit.app/" rel="noopener noreferrer"&gt;https://chain-of-verification.streamlit.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0ngjaoyaucpjsydqepz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0ngjaoyaucpjsydqepz.png" alt="streamlit" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are you already using AIConfig or CoVe in your projects? Feel free to share your experiences in the comments below.&lt;/p&gt;

&lt;p&gt;Liked the post?&lt;/p&gt;

&lt;p&gt;Show your support by starring our project on GitHub! ⭐️ &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>Harness the power of multiple LLMs 🤝</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Wed, 13 Dec 2023 04:13:16 +0000</pubDate>
      <link>https://dev.to/tanyarai/harness-the-power-of-multiple-llms-437d</link>
      <guid>https://dev.to/tanyarai/harness-the-power-of-multiple-llms-437d</guid>
      <description>&lt;p&gt;When dealing with LLMs like GPT-4, PaLM2, etc., we can often get varying outputs. This begs the question - how can we easily assess what the actual output is given these varied responses? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz57mqabkp10dx1q8yx3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz57mqabkp10dx1q8yx3.jpeg" alt="spiderman"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where cross-model consistency comes into play. By passing the same prompt to different LLMs, we can use their responses to verify and enhance the accuracy of the final output. In the example we show here, we use a "majority-vote / quorum" amongst the responses to determine the final output. You can of course use other heuristics as well. The idea is inspired by the principles outlined in the "Self-Consistency" paper, which shows how sampling a diverse set of reasoning paths from a single language model can improve consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use multiple LLMs in seconds 🤖
&lt;/h2&gt;

&lt;p&gt;AIConfig steps in as the orchestrator to easily coordinate your prompts against multiple LLMs without having to wrangle with the different APIs and providers. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prompt each LLM&lt;/strong&gt;: We send the same prompt from the AIConfig to several LLMs to get various reasoning paths. With a simple &lt;code&gt;config.run&lt;/code&gt; call and the model name set on the prompt template, you can get your desired output across several models. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate Responses&lt;/strong&gt;: AIConfig then uses GPT-4 (or any other model you set up) to determine a quorum—basically, a majority vote on the best response.&lt;/li&gt;
&lt;/ol&gt;
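
The quorum step is just a majority vote over the model outputs. A minimal sketch in plain Python (the responses are hard-coded stand-ins for real LLM calls):

```python
from collections import Counter

def quorum(responses: list) -> str:
    """Return the answer most models agreed on (majority vote)."""
    answer, _ = Counter(responses).most_common(1)[0]
    return answer

# Stand-in outputs from e.g. GPT-4, PaLM 2, and a third model on the same prompt.
responses = ["Paris", "Paris", "Lyon"]
print(quorum(responses))  # → Paris
```

In practice you'd normalize the responses (case, whitespace, paraphrases) before voting, or use another model as the judge as described above.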

&lt;p&gt;The benefits? You get enhanced accuracy and a safety net in case one LLM hallucinates and produces the wrong output!&lt;/p&gt;

&lt;p&gt;🎁 Template for Multi-LLM Consistency: &lt;a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Multi-LLM-Consistency" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Multi-LLM-Consistency&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;⭐️ We recently launched AIConfig as our first open-source project. Please support us with a star ⭐️ on Github!&lt;br&gt;
&lt;a href="https://github.com/lastmile-ai/aiconfig/tree/main" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig/tree/main&lt;/a&gt; &lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using Facebook Research's Llama Guard to moderate ChatGPT convos</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Sat, 09 Dec 2023 21:11:04 +0000</pubDate>
      <link>https://dev.to/tanyarai/using-facebook-researchs-llama-guard-to-moderate-chatgpt-convos-32ha</link>
      <guid>https://dev.to/tanyarai/using-facebook-researchs-llama-guard-to-moderate-chatgpt-convos-32ha</guid>
      <description>&lt;h3&gt;
  
  
  We're trying to get more exposure for AIConfig. Please star our repo to help! ⭐️
&lt;/h3&gt;

&lt;p&gt;Github: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A couple days ago, &lt;a href="https://ai.meta.com/blog/purple-llama-open-trust-safety-generative-ai/" rel="noopener noreferrer"&gt;Meta announced Purple Llama&lt;/a&gt; and as a first step, released Llama Guard - a safety classifier for input/output filtering. Llama Guard enables classifying text based on unsafe categories (e.g. Violence &amp;amp; Hate, Criminal Planning, etc.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh755in0qyp13zpl3li7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh755in0qyp13zpl3li7j.png" alt="llama_guard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're using Llama Guard to classify the user/agent responses from ChatGPT through AIConfig: &lt;br&gt;
&lt;a href="https://colab.research.google.com/drive/1CfF0Bzzkd5VETmhsniksSpekpS-LKYtX#scrollTo=dAjjPrTq16z1" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1CfF0Bzzkd5VETmhsniksSpekpS-LKYtX#scrollTo=dAjjPrTq16z1&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  AIConfig x LLama Guard
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;AIConfig&lt;/a&gt; is a framework that makes it easy to build generative AI applications quickly and reliably in production.&lt;/p&gt;

&lt;p&gt;It manages generative AI prompts, models and settings as JSON-serializable configs that you can version control, evaluate, and use through a consistent, model-agnostic SDK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/" rel="noopener noreferrer"&gt;LLaMA Guard&lt;/a&gt; is  an LLM-based input-output safeguard model.&lt;/p&gt;

&lt;p&gt;This example shows how to use AIConfig to wrap GPT-3.5 calls with LLaMA Guard and classify the inputs and outputs as safe or unsafe.&lt;/p&gt;
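Concretely, Llama Guard replies with a first line of "safe" or "unsafe", optionally followed by a line of violated category codes such as "O3". A minimal, illustrative parser for that reply (the helper name and exact format handling are our sketch, not part of AIConfig or Llama Guard):

```python
def parse_llama_guard(output: str):
    """Split a Llama Guard completion into (is_safe, violated_categories).

    Assumes the documented reply format: first line "safe" or "unsafe",
    optionally followed by a comma-separated line of category codes.
    """
    lines = output.strip().splitlines()
    is_safe = lines[0].strip().lower() == "safe"
    if is_safe or len(lines) == 1:
        return is_safe, []
    # e.g. "O1,O3" -> ["O1", "O3"]
    return is_safe, [code.strip() for code in lines[1].split(",")]
```

With a classifier response of "unsafe" plus a category line, this yields a flag and the list of category codes you can act on (block the reply, log it, etc.).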

&lt;p&gt;Please let us know if you have feedback or questions on AIConfig! &lt;/p&gt;

&lt;p&gt;Join our discord: &lt;a href="https://discord.com/invite/xBhNKTetGx" rel="noopener noreferrer"&gt;https://discord.com/invite/xBhNKTetGx&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>opensource</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Mastering Prompt Management 💫</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Fri, 08 Dec 2023 05:44:29 +0000</pubDate>
      <link>https://dev.to/tanyarai/mastering-prompt-management-fbn</link>
      <guid>https://dev.to/tanyarai/mastering-prompt-management-fbn</guid>
      <description>&lt;p&gt;Prompt engineering is both an art and science - a blend of creativity and structure that, when done right, can unlock the vast potential of generative AI. &lt;/p&gt;

&lt;p&gt;But without a proper management system, it’s easy to find ourselves in deep waters, trying to remember that one perfect prompt that just gave us outstanding results.&lt;/p&gt;

&lt;p&gt;Imagine every AI developer's workspace — lines of prompts saved in countless text files, tabs upon tabs of research on how to craft a perfect prompt. Does it feel overwhelming? &lt;/p&gt;

&lt;p&gt;You're not alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Art and Science Behind Prompt Management
&lt;/h2&gt;

&lt;p&gt;Let's unpack this: With the growing reliance on generative AI models such as ChatGPT, DALL-E, and others, the way we manage prompts directly impacts the quality of results we get, and ultimately, how effectively we can harness AI's capabilities.&lt;/p&gt;

&lt;p&gt;For many of us, managing prompts starts off manageable. A few Google Docs here, a couple of bookmarked pages there. But as you dive deeper, you might find yourself scattered across different platforms, struggling to remember where that one magic prompt formula was saved—the one that delivered the content that left you in awe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f3mpwzvm0o8u823bpof.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f3mpwzvm0o8u823bpof.jpeg" alt="Image description" width="660" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes, that desperation hits when you're combing through your digital haystack, looking for the prompt that returned the best results. We've all been there, thinking we could keep it all in our heads, or reassured by our meticulously named docs—until it’s not enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  AIConfig: Your Prompt Management Savior!
&lt;/h2&gt;

&lt;p&gt;AIConfig is an application framework designed specifically for building with generative AI. Imagine having all your prompts, model parameters, and different models managed in one centralized, version-controlled environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Too good to be true? Not at all!&lt;/p&gt;

&lt;p&gt;With AIConfig, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Version control your prompts:&lt;/strong&gt; Keep track of different versions of your prompts in a config-based approach that is familiar to developers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralize your models:&lt;/strong&gt; Say goodbye to switching between different applications to use different AI models. Access and manage them all in one place. Easily swap between models with AIConfig. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-tune parameter settings:&lt;/strong&gt; Iteratively adjust and record different model parameters to hone in on the most effective combinations for your tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborate with others:&lt;/strong&gt; Easily share your setup, ensuring that collaborative projects are synchronized and streamlined.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproduce results:&lt;/strong&gt; With fully tracked changes, you can go back to any previous configuration and understand why a particular prompt gave you those eye-opening results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No more digging:&lt;/strong&gt; AIConfig keeps all your prompts and configurations neatly organized.&lt;/li&gt;
&lt;/ul&gt;
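The config-based approach above boils down to treating prompts as data. A minimal sketch of what a version-controlled, aiconfig-style JSON artifact could look like (field names are illustrative, loosely following the project's published schema):

```python
import json

# Illustrative aiconfig-style document: prompts, model choice, and settings
# live together as plain data rather than being scattered through app code.
config = {
    "name": "content_pipeline",
    "schema_version": "latest",
    "prompts": [
        {
            "name": "summarize",
            "input": "Summarize the following article: {{article}}",
            "metadata": {
                "model": {"name": "gpt-3.5-turbo", "settings": {"temperature": 0.2}}
            },
        }
    ],
}

# Serialize to the file you commit to source control...
text = json.dumps(config, indent=2, sort_keys=True)

# ...and a later checkout reproduces exactly the same prompts and settings.
assert json.loads(text) == config
```

Because the artifact is plain JSON, every prompt tweak and parameter change shows up as an ordinary diff in your version history.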

&lt;p&gt;AIConfig brings the discipline of software development to the innovation of AI prompt engineering, making it a powerhouse tool for developers.&lt;/p&gt;

&lt;p&gt;So here you are, at the end of this virtual scroll—wondering whether or not you're ready to take your prompt management to the next level. Ask yourself if you're tired of the digital scavenger hunt, the mess of tabs, the scattered files. Are you ready to say, "I don't need it... I don't need it... I definitely need AIConfig!"&lt;/p&gt;

&lt;p&gt;How do you currently manage your prompts and model parameters? Share your stories, questions, or tips in the comments below. And if you're already using AIConfig, we'd love to hear about your experiences too. &lt;/p&gt;

&lt;p&gt;AIConfig is our first OSS project at LastMileAI. Please support us by starring our repo ⭐️: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;  &lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>#1 Developer Tool of the Month 🔥🔥🔥</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Wed, 06 Dec 2023 03:40:42 +0000</pubDate>
      <link>https://dev.to/tanyarai/1-developer-tool-of-the-month-1o41</link>
      <guid>https://dev.to/tanyarai/1-developer-tool-of-the-month-1o41</guid>
      <description>&lt;p&gt;It's been a little over a month since we launched LastMile AI on Product Hunt and wanted to share some insights and where we are now.&lt;/p&gt;

&lt;p&gt;Here’s how we stacked up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;#1 Developer Tool of the Month&lt;/li&gt;
&lt;li&gt;#2 Product of the Day&lt;/li&gt;
&lt;li&gt;#4 Product of the Week&lt;/li&gt;
&lt;li&gt;956 Upvoters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. PH is still a good lever for developer tools&lt;/strong&gt;&lt;br&gt;
Our team was a little hesitant about whether PH would be the right community, given that we are a developer platform for generative AI and the best products on PH tend to be more consumer-focused. However, we were pleasantly surprised by the results. We not only saw a spurt in growth - we also saw a lot of interest from developers, companies, and investors in the weeks that followed. The second-order effects of PH proved really helpful for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Tailor your content and messaging&lt;/strong&gt;&lt;br&gt;
We spent a great deal of time nailing our messaging in a way that was palatable not only to developers but to the broader PH community as well. If you’re a dev tool, it’s good to tailor your messaging and media assets to the PH community since it will resonate more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Engage with the community - fast!&lt;/strong&gt;&lt;br&gt;
On launch day, we had the whole team on board to respond to messages almost instantly. This was a round-the-clock effort by the team to show that we were actively engaged and interested in the feedback we were receiving from the PH community. We left no questions or comments unanswered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Prepare your network/product community ahead of time.&lt;/strong&gt;&lt;br&gt;
We let our network and product community know that we had planned to launch on PH for the first time. It was super valuable for us to have the support from our existing users and network and they came through for us. We recommend having a Partiful to manage messaging to contacts and power users that will show up for you on launch day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Focus on use cases and value prop&lt;/strong&gt;&lt;br&gt;
Sometimes developer tools focus so much on technical details that they distract from the product's value proposition. Remember to really focus on the value your tool provides to the end user, whether it's making development faster or collaboration easier. These are the messages that are more likely to resonate on PH!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are we up to now at LastMile AI?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We recently launched AIConfig - our first open source project. AIConfig is a framework that makes it easy to build generative AI applications for production. It manages generative AI prompts, models and model parameters as JSON-serializable configs that can be version controlled, evaluated, monitored and opened in a notebook playground for rapid prototyping.&lt;/p&gt;

&lt;p&gt;Check it out here! &lt;a href="https://github.com/lastmile-ai/a" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/a&lt;/a&gt;...&lt;/p&gt;

&lt;p&gt;Feel free to ask me anything about our PH launch or AIConfig!&lt;/p&gt;

</description>
      <category>producthunt</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Reduce Hallucinations 💤</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Tue, 05 Dec 2023 23:52:33 +0000</pubDate>
      <link>https://dev.to/tanyarai/reduce-hallucinations-55m7</link>
      <guid>https://dev.to/tanyarai/reduce-hallucinations-55m7</guid>
      <description>&lt;p&gt;Hallucinations are a big problem when working with LLMs.&lt;/p&gt;

&lt;p&gt;We made a simple Colab template to help you verify your responses from LLMs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3pdjf6wjhdw91r4z4ri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3pdjf6wjhdw91r4z4ri.png" alt="Image description" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The template uses &lt;strong&gt;Chain-of-Verification (CoVe)&lt;/strong&gt;, a prompt engineering technique developed from research at Meta AI. CoVe enhances the accuracy of LLMs by generating a baseline response to a user query and then validating the information through a series of verification questions. This iterative process helps to correct any errors in the initial response, leading to more accurate and trustworthy outputs.&lt;/p&gt;
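The CoVe loop is easy to express in plain Python. Here `ask` stands in for whatever model call your config wires up; the stub model below is a hypothetical placeholder so the control flow is runnable end to end:

```python
def chain_of_verification(query, ask):
    """Chain-of-Verification: draft, plan checks, verify, then revise.

    `ask` is any callable mapping a prompt string to a model response;
    in a real setup it would invoke an LLM (e.g. via an aiconfig prompt).
    """
    # 1. Generate a baseline response to the user query.
    baseline = ask(f"Answer the question: {query}")
    # 2. Plan verification questions that probe the baseline for errors.
    questions = ask(f"List fact-checking questions for: {baseline}").splitlines()
    # 3. Answer each verification question independently.
    checks = [ask(q) for q in questions if q.strip()]
    # 4. Produce a final, revised answer conditioned on the checks.
    return ask(f"Revise '{baseline}' given these verified facts: {checks}")


# Hypothetical stub model so the sketch runs without any API access.
def stub_model(prompt):
    if prompt.startswith("Answer"):
        return "draft answer"
    if prompt.startswith("List"):
        return "Q1?\nQ2?"
    if prompt.startswith("Revise"):
        return "revised answer"
    return "fact"
```

Swapping `stub_model` for a real model call gives you the same draft-verify-revise pipeline the template implements.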

&lt;p&gt;The template is backed by &lt;strong&gt;AIConfig&lt;/strong&gt; - a JSON serializable format for models, prompts, and model parameters. You can easily switch out the models used for this template and use the AIConfig directly in your application code. &lt;/p&gt;

&lt;p&gt;Try it here: &lt;a href="https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification" rel="noopener noreferrer"&gt;colab notebook&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Please &lt;a href="https://github.com/lastmile-ai/aiconfig/tree/main" rel="noopener noreferrer"&gt;&lt;strong&gt;star our repo for AIConfig&lt;/strong&gt;&lt;/a&gt;! It's our first open-source project and we're thrilled to share it with you. ⭐️&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Building with generative AI 🚀</title>
      <dc:creator>tanya rai</dc:creator>
      <pubDate>Tue, 05 Dec 2023 00:40:30 +0000</pubDate>
      <link>https://dev.to/tanyarai/ditch-the-hassle-get-your-genai-code-productionized-now-3f2l</link>
      <guid>https://dev.to/tanyarai/ditch-the-hassle-get-your-genai-code-productionized-now-3f2l</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we'll show you how to streamline the way you integrate generative AI models into production! &lt;/p&gt;

&lt;h2&gt;
  
  
  LastMile AI - dev platform for generative AI 🚀
&lt;/h2&gt;

&lt;p&gt;A quick background about LastMile AI. We are a developer platform for engineering teams to go from prototype to production with generative AI models. Our platform is multi-modal and model-agnostic so you aren't tied to dependencies on a single provider. &lt;/p&gt;

&lt;p&gt;We recently launched our first open-source project AIConfig! Check us out and give us feedback ⭐️: &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Demo Video - &lt;a href="https://www.youtube.com/watch?v=ej85StfHEFo" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=ej85StfHEFo&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2usoguydxxefas36nd7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2usoguydxxefas36nd7.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Let's get started
&lt;/h2&gt;

&lt;p&gt;First, go to &lt;a href="http://www.lastmileai.dev" rel="noopener noreferrer"&gt;www.lastmileai.dev&lt;/a&gt; and sign in for free. &lt;/p&gt;

&lt;p&gt;You'll land on an AI Workbook. AI Workbooks are a notebook-like interface where you can experiment with text, image, and audio AI models from various providers. You can prototype and parametrize your prompt chains all in a single place. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o8qfzssme4xt877ls3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o8qfzssme4xt877ls3c.png" alt="empty_wb" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Write your prompt chains
&lt;/h2&gt;

&lt;p&gt;We are going to prototype a simple trip planner. &lt;/p&gt;

&lt;p&gt;First prompt: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select 'ChatGPT' as the model (it should be the default). &lt;/li&gt;
&lt;li&gt;Add a global parameter '&lt;code&gt;city&lt;/code&gt;' with the value 'London'. &lt;/li&gt;
&lt;li&gt;Name the cell '&lt;code&gt;get_activities&lt;/code&gt;'.&lt;/li&gt;
&lt;li&gt;Write the prompt 'Tell me the top 5 fun attractions in &lt;code&gt;{{city}}&lt;/code&gt;'.&lt;/li&gt;
&lt;/ol&gt;
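The `{{city}}` syntax is a simple template placeholder: at run time, the parameter's value is substituted into the prompt text. A rough sketch of that resolution step (the regex and function name are ours, not the platform's internals):

```python
import re

def resolve_parameters(prompt: str, params: dict) -> str:
    """Replace each {{name}} placeholder with its value from `params`.

    Unknown placeholders are left intact here, which is one reasonable
    choice; the real platform may handle them differently.
    """
    def substitute(match):
        name = match.group(1)
        return str(params.get(name, match.group(0)))

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, prompt)
```

For example, resolving "Tell me the top 5 fun attractions in {{city}}" with the global parameter city=London yields "Tell me the top 5 fun attractions in London".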

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhlyekhtol8nbjm3ja1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhlyekhtol8nbjm3ja1s.png" alt="first_prompt" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our second prompt: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a new cell and change the model to 'GPT4'. &lt;/li&gt;
&lt;li&gt;Add a local parameter &lt;code&gt;{{cuisine}}&lt;/code&gt; for the cell. &lt;/li&gt;
&lt;li&gt;Add a system prompt to format the response (see image). &lt;/li&gt;
&lt;li&gt;Write the prompt 'generate a one-day personalized itinerary based on 1/ my favorite cuisine: &lt;code&gt;{{cuisine}}&lt;/code&gt;, 2/ list of activities: &lt;code&gt;{{get_activities.output}}&lt;/code&gt;'&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzqx1cc4xpxl2p6fedp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzqx1cc4xpxl2p6fedp3.png" alt="second_prompt" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI Workbook - &lt;a href="https://lastmileai.dev/workbooks/clpdo3pnl019dpejj3kvl49g2" rel="noopener noreferrer"&gt;https://lastmileai.dev/workbooks/clpdo3pnl019dpejj3kvl49g2&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Download your AIConfig
&lt;/h2&gt;

&lt;p&gt;AIConfig is a JSON serializable format that contains your prompts, model parameters, and settings. It is essentially your generative AI artifact to be managed in source control and easily shared across applications. &lt;/p&gt;
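For the trip planner above, the downloaded artifact could look roughly like this (sketched by hand to show the shape; the file you actually download from the workbook is the source of truth). Note how the second prompt references the first via `{{get_activities.output}}`:

```python
import json

# Hand-sketched aiconfig for the two-prompt trip planner. Field names are
# illustrative; the downloaded file is authoritative.
trip_planner = {
    "name": "trip_planner",
    "metadata": {"parameters": {"city": "London"}},
    "prompts": [
        {
            "name": "get_activities",
            "input": "Tell me the top 5 fun attractions in {{city}}",
            "metadata": {"model": "gpt-3.5-turbo"},
        },
        {
            "name": "gen_itinerary",
            "input": (
                "generate a one-day personalized itinerary based on "
                "1/ my favorite cuisine: {{cuisine}}, "
                "2/ list of activities: {{get_activities.output}}"
            ),
            "metadata": {"model": "gpt-4", "parameters": {"cuisine": "Italian"}},
        },
    ],
}

# The whole prompt chain serializes to one JSON file you can check in.
artifact = json.dumps(trip_planner, indent=2)
```

Everything the workbook captured - prompts, parameters, model choices, and the chaining between cells - travels in this one file.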

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffglc3h4mi4jylzch1lv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffglc3h4mi4jylzch1lv.png" alt="download_config" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Use AIConfig in your application code
&lt;/h2&gt;

&lt;p&gt;Here are a few of the Python SDK commands that show how to use AIConfig in your application code. &lt;/p&gt;

&lt;p&gt;But first, why store your generative AI components in a config? It enables you to: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decouple your application code from the generative AI components which decreases complexity &lt;/li&gt;
&lt;li&gt;Manage your generative AI components in source control &lt;/li&gt;
&lt;li&gt;Evaluate and monitor performance across models&lt;/li&gt;
&lt;li&gt;Iterate fast - switch models and try different parameters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Colab Notebook: &lt;a href="https://colab.research.google.com/drive/1RlGQmtR0uK7OTI5nG10E219JoH2mgAQr#scrollTo=Kd6BFPGiHPJe" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1RlGQmtR0uK7OTI5nG10E219JoH2mgAQr#scrollTo=Kd6BFPGiHPJe&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Install python package
&lt;/h3&gt;

&lt;p&gt;The AIConfig SDK is available in Python and TypeScript. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztjoyn0kblz8l9qzgikm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztjoyn0kblz8l9qzgikm.png" alt="install" width="692" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Load your AI Config
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmsfc0pqor495ron1fpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmsfc0pqor495ron1fpa.png" alt="load" width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Run your prompt
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5509i3h7d7c7ydj1nys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5509i3h7d7c7ydj1nys.png" alt="run" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Run prompt with different parameter value
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lx296nhn2wfj3ckpg27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lx296nhn2wfj3ckpg27.png" alt="diff_params" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add a new prompt to your AIConfig
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8cvb0puancgtghbfirg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8cvb0puancgtghbfirg.png" alt="add_new" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Save outputs in your AIConfig
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns05old9vbm8qxdrw2ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns05old9vbm8qxdrw2ow.png" alt="save" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AIConfig is our first open-source project and we'd love your help. See our &lt;a href="https://aiconfig.lastmileai.dev/docs/contributing/" rel="noopener noreferrer"&gt;contributing guidelines&lt;/a&gt; -- we would especially love help adding support for additional models that the community wants.&lt;br&gt;
⭐️ &lt;a href="https://github.com/lastmile-ai/aiconfig" rel="noopener noreferrer"&gt;https://github.com/lastmile-ai/aiconfig&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
