<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Natalie Fagundo</title>
    <description>The latest articles on DEV Community by Natalie Fagundo (@natalie_inductor).</description>
    <link>https://dev.to/natalie_inductor</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1220192%2F871edc7c-8863-4b27-b9a4-72401ac7e082.jpeg</url>
      <title>DEV Community: Natalie Fagundo</title>
      <link>https://dev.to/natalie_inductor</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/natalie_inductor"/>
    <language>en</language>
    <item>
      <title>Open sourcing our new Text-to-SQL LLM app starter template</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Thu, 26 Sep 2024 20:18:09 +0000</pubDate>
      <link>https://dev.to/inductor_ai/open-sourcing-our-new-text-to-sql-llm-app-starter-template-1cc8</link>
      <guid>https://dev.to/inductor_ai/open-sourcing-our-new-text-to-sql-llm-app-starter-template-1cc8</guid>
      <description>&lt;p&gt;We are excited to announce the release of Inductor’s latest open source LLM application starter template, for building Text-to-SQL LLM apps (&lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/text_to_sql" rel="noopener noreferrer"&gt;GitHub repo here&lt;/a&gt;).This template is designed to make it easier than ever for developers to build and deploy AI apps that can convert natural language into SQL queries, execute them on a database, and return actionable insights. Whether you're looking to create a tool for data analysts, automate reporting, or build an internal knowledge assistant capable of answering complex data-related questions, this starter template provides everything you need to get up and running quickly.&lt;/p&gt;

&lt;p&gt;Just like our other LLM app starter templates, the Text-to-SQL app template offers more than just a basic structure for building your app. It integrates a complete end-to-end developer workflow that supports the iterative nature of building production-ready LLM applications. It’s designed for rapid prototyping, robust testing, and ongoing optimization – key requirements for anyone building enterprise-grade solutions.&lt;/p&gt;

&lt;h2&gt;Key features&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Playground for prototyping:&lt;/strong&gt; Leverage Inductor's playground feature to experiment with different queries and app configurations interactively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built-in testing and evaluation:&lt;/strong&gt; A full Inductor-powered test suite is included to systematically evaluate the performance of the app across various scenarios.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated experimentation and optimization:&lt;/strong&gt; Iterate quickly with hyperparameters (e.g., to optimize prompts and choice of model) using Inductor’s powerful experimentation and optimization tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Comprehensive logging:&lt;/strong&gt; Capture and analyze real-time interactions and SQL query executions with Inductor’s integrated logging and observability tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seamless SQL generation:&lt;/strong&gt; Convert user queries into SQL that interacts with your database to retrieve information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrated SQL query validation:&lt;/strong&gt; Ensure the validity of SQL queries before execution, improving app reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Schema-aware interactions:&lt;/strong&gt; Automatically generate SQL based on your database schema, so that queries are valid and relevant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy customization:&lt;/strong&gt; This template is built to be easy to apply to your database, and also framework-agnostic, making it easy to integrate into any LLM-powered system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
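&lt;p&gt;The SQL generation and validation steps above can be pictured with a small sketch. This is a hypothetical illustration, not the template’s actual code (see the GitHub repo for that): the LLM call is replaced by a canned query string, and SQLite’s &lt;code&gt;EXPLAIN&lt;/code&gt; is used to vet a generated statement before it runs.&lt;/p&gt;

```python
import sqlite3

def validate_sql(conn, sql):
    # Ask SQLite to plan the query without executing it; an invalid
    # table or column name raises an error here rather than at runtime.
    try:
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False

# In-memory demo schema. In the real template, the SQL string would be
# produced by an LLM prompted with the database schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
generated_sql = "SELECT SUM(amount) FROM orders"
if validate_sql(conn, generated_sql):
    rows = conn.execute(generated_sql).fetchall()
```

Validating before execution is what lets the app surface a clean "I could not answer that" response instead of a raw database error when the model hallucinates a column.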

&lt;p&gt;This &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/text_to_sql" rel="noopener noreferrer"&gt;Text-to-SQL LLM app starter template&lt;/a&gt; is your gateway to creating LLM-powered applications that turn natural language into database queries with ease. Whether you’re just starting or looking to scale your data-driven tools, this template is designed to accelerate your development process.&lt;/p&gt;

&lt;h2&gt;A systematic developer workflow for delivering production-grade Text-to-SQL LLM apps&lt;/h2&gt;

&lt;p&gt;This template is pre-configured to leverage &lt;a href="https://inductor.ai/" rel="noopener noreferrer"&gt;Inductor’s developer platform&lt;/a&gt;. Inductor provides the tools you need to build and deliver next-gen LLM applications, offering a systematic approach to every phase of development and deployment, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prototyping playground:&lt;/strong&gt; &lt;a href="https://app.inductor.ai/docs/quickstart.html#sharing-your-playground" rel="noopener noreferrer"&gt;Share prototypes&lt;/a&gt; with your team in a secure environment that integrates with your tests and logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robust test suites:&lt;/strong&gt; Test for accuracy, consistency, and edge cases with test cases and quality measures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive logging:&lt;/strong&gt; Monitor live executions to understand real-world user interactions, identify issues, and optimize application performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated experimentation:&lt;/strong&gt; Use hyperparameters to experiment with different configurations (e.g., different models or prompts), enabling rapid iteration and optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By building with Inductor’s platform, you ensure that your LLM application isn’t just functional – it’s optimized for &lt;strong&gt;reliability&lt;/strong&gt;, &lt;strong&gt;usefulness&lt;/strong&gt;, and &lt;strong&gt;business impact&lt;/strong&gt;. Let’s dive deeper and see how Inductor integrates into the development lifecycle for this specific Text-to-SQL application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Auto-generate playgrounds&lt;/a&gt;: Experimentation is key when building LLM applications. With a single command (&lt;code&gt;inductor playground app:get_analytics_results&lt;/code&gt;), you can create a fully interactive UI for real-time interactive testing that can be &lt;a href="https://app.inductor.ai/docs/quickstart.html#sharing-your-playground" rel="noopener noreferrer"&gt;shared with domain specialists&lt;/a&gt;. This enables developers and stakeholders to experiment with the app’s behavior without writing additional code. Within an Inductor playground you can view past executions, add executions to test suites, see logged values, and experiment with different configurations using developer-controlled &lt;a href="https://app.inductor.ai/docs/quickstart.html#using-hyperparameters" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Systematic testing with Inductor&lt;/a&gt;: Inductor makes it easy to have quality control baked into the development process. The template includes a pre-configured &lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;test suite&lt;/a&gt; that enables developers to run systematic tests on their Text-to-SQL app. These tests ensure that the generated SQL is accurate, valid, and capable of addressing various query types.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;Live monitoring and debugging&lt;/a&gt;: Understanding how your application performs under real-world conditions is essential. With Inductor’s &lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;logging and monitoring features&lt;/a&gt;, you can track every interaction in real-time, enabling you to catch and address issues as they arise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/docs/quickstart.html#using-hyperparameters" rel="noopener noreferrer"&gt;Automated experimentation&lt;/a&gt;: Use &lt;a href="https://app.inductor.ai/docs/quickstart.html#using-hyperparameters" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt; to experiment with different configurations (e.g., different models or prompts), enabling rapid iteration and optimization.&lt;/p&gt;

&lt;h2&gt;Unlocking business potential with Text-to-SQL&lt;/h2&gt;

&lt;p&gt;The Text-to-SQL LLM app starter template is a powerful entry point for businesses looking to leverage AI to interact with data in more intuitive ways. With this template, you can shift from needing to write complex, technical SQL queries to a conversational experience that enables anyone in your organization to pull insights from databases.&lt;/p&gt;

&lt;p&gt;But that’s just the beginning. By customizing and extending this template, you can evolve your LLM-powered application from simply querying data to driving business actions. For example, you can enable non-technical users to generate SQL queries to analyze performance metrics and then follow up with automated actions like sending reports or triggering alerts – all powered by natural language.&lt;/p&gt;

&lt;h2&gt;Future expectations for enterprise LLM applications&lt;/h2&gt;

&lt;p&gt;As enterprises continue to build and adopt LLM-powered applications, the iterative process of evolving from simple data interactions to business actions and impact will redefine how businesses operate and serve their customers. Technologies like Text-to-SQL AI apps offer a powerful starting point, transforming data access into an intuitive experience. However, the future lies in building on this foundation – moving from querying data to influencing real-world actions with seamless, AI-driven interactions.&lt;/p&gt;

&lt;p&gt;Looking forward, enterprises that embrace the iterative process of developing and delivering AI apps will be best positioned to harness the full potential of LLMs. Those that continuously refine their applications to meet changing user needs and leverage the power of AI across departments will drive meaningful improvements in KPIs, from revenue growth to customer satisfaction. The roadmap for LLM applications is clear: evolve, iterate, and deliver AI-driven outcomes that shape the future of business.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://inductor.ai/" rel="noopener noreferrer"&gt;Inductor&lt;/a&gt; is essential for accelerating this evolution. As a comprehensive developer platform, Inductor empowers teams to rapidly prototype, test, iterate on, and monitor their LLM applications. With features like robust test suites, automated experimentation, and comprehensive logging, Inductor ensures that developers can seamlessly build, evaluate, improve, and observe their AI-powered applications without the complexity of building and managing the requisite tools and infrastructure. This iterative approach enables enterprises to continuously refine their applications in order to ship rapidly, make them more adaptable to shifting market demands, and improve business outcomes at every stage.&lt;/p&gt;

&lt;h2&gt;Ready to build your own Text-to-SQL LLM app?&lt;/h2&gt;

&lt;p&gt;Getting started is easy. Simply visit the &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/text_to_sql" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;, clone the &lt;strong&gt;Text-to-SQL LLM app starter template&lt;/strong&gt;, and follow the instructions to get your application up and running in minutes. With Inductor, you’ll have access to powerful tools for experimenting, testing, and optimizing your app every step of the way.&lt;/p&gt;

&lt;p&gt;🚀 Start building your &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/text_to_sql" rel="noopener noreferrer"&gt;Text-to-SQL&lt;/a&gt; app today and explore the potential of integrating natural language interfaces with your enterprise’s data systems!&lt;/p&gt;

&lt;p&gt;💻 Want to learn more? Dive into our &lt;a href="https://app.inductor.ai/docs/index.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; or &lt;a href="https://inductor.ai/contact-us" rel="noopener noreferrer"&gt;book a demo&lt;/a&gt; to see how Inductor can help you build AI-powered applications that drive real business value. 🛠️&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>sql</category>
      <category>ai</category>
      <category>database</category>
    </item>
    <item>
      <title>Open sourcing our new Chat With Your PDFs LLM app starter template</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Thu, 19 Sep 2024 23:41:39 +0000</pubDate>
      <link>https://dev.to/inductor_ai/open-sourcing-our-new-chat-with-your-pdfs-llm-app-starter-template-2kop</link>
      <guid>https://dev.to/inductor_ai/open-sourcing-our-new-chat-with-your-pdfs-llm-app-starter-template-2kop</guid>
      <description>&lt;p&gt;We are thrilled to announce the release of Inductor’s newest open source LLM application starter template: &lt;strong&gt;Chat With Your PDFs&lt;/strong&gt; (&lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/chat_with_pdfs" rel="noopener noreferrer"&gt;GitHub repo here&lt;/a&gt;). This template is designed to empower developers to quickly build and ship a conversational AI bot that can interact with, answer questions about, and extract information from PDF documents. Whether you’re creating a knowledge base search bot, a customer support bot, a document review assistant, or simply want to build AI that can intelligently answer questions about a collection of PDFs, this starter template provides the foundation that you need.&lt;/p&gt;

&lt;p&gt;Much like our previous starter template releases, this template goes beyond just providing the basic structure for an LLM application. It includes an end-to-end developer workflow tailored to the iterative nature of building production-ready LLM applications. Key features of this template include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Robust test suites&lt;/a&gt;: Ensure the accuracy and reliability of your PDF chatbot with systematic testing that covers various interaction scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;Automated experimentation and optimization&lt;/a&gt;: Quickly iterate on and refine your application by experimenting with different models, prompts, and configurations using Inductor &lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Instant prototyping playground&lt;/a&gt;: A secure, auto-generated environment that enables you to prototype and share your PDF chatbot with your team, fully integrated with your test and experimentation setups.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;Comprehensive logging&lt;/a&gt;: Monitor live traffic, understand user interactions, resolve issues, and continuously improve your application with integrated logging and observability.&lt;/li&gt;
&lt;li&gt;PDF parsing and embedding: Leverage Unstructured for smart chunking and Sentence-Transformers for embedding generation, transforming static PDFs into dynamic, interactive knowledge sources. Complex documents, including those with images and tables, are processed and stored in a ChromaDB vector database for efficient retrieval based on user queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/chat_with_pdfs" rel="noopener noreferrer"&gt;Chat With Your PDFs&lt;/a&gt; template is your fast track to developing an LLM-powered bot that can handle complex document queries with ease. Whether you’re just starting or looking to scale, this template makes it simple to get up and running in minutes.&lt;/p&gt;


&lt;h2&gt;Beyond consumer AI: why enterprises need custom LLM applications&lt;/h2&gt;

&lt;p&gt;The rise of AI-powered knowledge assistants highlights the growing demand for intelligent systems that can seamlessly access and retrieve information. For example, products like ChatPDF, Liner, Eightify, and Phind leverage different data sources to enable chat assistants capable of querying and interacting with documents, web pages, videos, and research papers. These products enable users to extract key insights and summaries in real time, demonstrating the value of LLM-powered applications in various industries.&lt;/p&gt;

&lt;p&gt;These products exemplify how AI can enhance productivity by transforming static content into dynamic, interactive resources.  Yet, while they may work well for general consumer use cases, enterprises often require more customized and complex solutions to meet their unique demands.  For example, businesses dealing with legal documents, HR policy documents, or specialized training materials may need to handle deeper domain-specific queries, process multiple data formats, and integrate seamlessly with existing internal systems – demands that off-the-shelf products often don’t meet.&lt;/p&gt;

&lt;p&gt;This is where Inductor comes in. Enterprises often need flexible, tailor-made solutions, and Inductor provides a platform built specifically for LLM app development. Our tools enable developers to rapidly create production-ready AI applications that address unique business needs – for example for use cases such as internal knowledge management, legal compliance, or advanced research.&lt;/p&gt;


&lt;h2&gt;Unlocking the power of PDF parsing and embedding&lt;/h2&gt;

&lt;p&gt;At the heart of this starter template is its ability to transform static PDFs into dynamic, conversational experiences. By leveraging &lt;a href="https://github.com/Unstructured-IO/unstructured" rel="noopener noreferrer"&gt;Unstructured&lt;/a&gt; for smart chunking and Sentence-Transformers for embedding generation, the template ensures that even complex documents are easy to interact with.&lt;/p&gt;

&lt;p&gt;Once the PDF content is processed, it's stored in a &lt;a href="https://www.trychroma.com/" rel="noopener noreferrer"&gt;ChromaDB&lt;/a&gt; vector database. This allows the application to efficiently retrieve the most relevant sections based on user queries, ensuring that the LLM has the context it needs to generate accurate, meaningful responses. Using retrieval-augmented generation (RAG), this system seamlessly integrates with an LLM, such as &lt;a href="https://platform.openai.com/docs/models/gpt-4o" rel="noopener noreferrer"&gt;GPT-4o&lt;/a&gt;, to provide users with answers and deeper insights from the documents.&lt;/p&gt;
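&lt;p&gt;The retrieval step can be pictured with a toy sketch. In the template, Sentence-Transformers produces the embeddings and ChromaDB performs the nearest-neighbor search; the ranking underneath is essentially cosine similarity. The tiny hand-made vectors below are purely illustrative stand-ins for real embeddings.&lt;/p&gt;

```python
import math

# Hand-made 3-dimensional "embeddings" for three document chunks.
# Real embeddings from Sentence-Transformers have hundreds of dimensions.
chunks = {
    "refund policy": [1.0, 0.1, 0.0],
    "shipping times": [0.0, 1.0, 0.2],
    "warranty terms": [0.1, 0.0, 1.0],
}

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank stored chunks by similarity to the query and keep the top k;
    # ChromaDB does this (at scale) in the actual template.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
    return ranked[:k]

top = retrieve([0.9, 0.2, 0.0])  # a query vector close to "refund policy"
```

The retrieved chunk text is then handed to the LLM as context, which is the "retrieval-augmented" part of RAG.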

&lt;p&gt;Whether you're navigating technical manuals, financial reports, or academic papers, this capability turns static PDFs into a rich source of interactive knowledge.&lt;/p&gt;


&lt;h2&gt;Supercharge your LLM app development with Inductor&lt;/h2&gt;

&lt;p&gt;A key feature that sets the Chat With Your PDFs LLM starter template apart is its seamless integration with Inductor, a powerful platform built to streamline the full development and delivery lifecycle of your LLM application. From rapid prototyping to systematic testing and real-time monitoring, Inductor equips developers with the tools needed to build, refine, and deliver applications at every stage. Let’s dive into some of the standout features this integration provides.&lt;/p&gt;


&lt;h2&gt;Instantly generate a playground UI for experimentation&lt;/h2&gt;

&lt;p&gt;Experimentation is at the heart of LLM application development, and with Inductor, you can spin up a fully interactive &lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;playground&lt;/a&gt; UI with a single command. This environment enables you to interactively test different queries, iterate on your application’s behavior, and share your Chat With Your PDFs app with teammates and subject matter experts – all without writing additional code. With your playground, you’ll have a flexible, no-code UI to rapidly iterate on and improve your LLM app. Just run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;inductor playground app:chat_with_pdf&lt;/code&gt;&lt;/p&gt;


&lt;h2&gt;Systematically evaluate your app using test suites&lt;/h2&gt;

&lt;p&gt;The Chat With Your PDFs template includes a built-in set of &lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;test suites&lt;/a&gt; designed to test your application’s performance across various scenarios, providing actionable feedback so that you can quickly identify and address any issues. Each test suite includes pre-configured test cases, quality measures, and hyperparameters for evaluation of your app. An example of the results of running an included test suite can be found &lt;a href="https://app.inductor.ai/test-suite/run/2616" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whether you're validating how your LLM app handles complex queries or testing its response accuracy across different PDFs, Inductor enables you to rapidly do the evaluations that you need to improve your app efficiently.&lt;/p&gt;


&lt;h2&gt;Rapidly refine your app with Inductor hyperparameters&lt;/h2&gt;

&lt;p&gt;Inductor &lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt; enable you to rapidly test and compare different configurations of your LLM application. Hyperparameters are fully integrated in the Inductor platform where they can be leveraged in playgrounds, test suites, and live A/B testing. Using hyperparameters in playgrounds allows for controlled and collaborative interactive experimentation. Using hyperparameters in test suites allows for systematic selection of optimized configurations.&lt;/p&gt;

&lt;p&gt;Here are some examples of hyperparameters included in the Chat With Your PDFs template:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;query_num_chat_messages&lt;/code&gt;: Adjusts the number of previous chat messages used in the query for retrieving relevant information from your vector database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;query_result_num&lt;/code&gt;: Controls how many results are retrieved from the vector database for each query.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;query_filter_out_program_messages&lt;/code&gt;: Determines whether or not chatbot-generated messages are filtered out from the query sent to the vector database.&lt;/li&gt;
&lt;/ul&gt;
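&lt;p&gt;To make these knobs concrete, here is a hypothetical sketch of how a query to the vector database might be assembled from chat history. The parameter names mirror the hyperparameters listed above, but the function body is illustrative only, not the template’s implementation.&lt;/p&gt;

```python
def build_query(history, query_num_chat_messages=4,
                query_filter_out_program_messages=True):
    # Optionally drop chatbot-generated turns so that only user text
    # drives retrieval from the vector database.
    messages = history
    if query_filter_out_program_messages:
        messages = [m for m in messages if m["role"] == "user"]
    # Keep only the most recent N turns as retrieval context.
    return " ".join(m["text"] for m in messages[-query_num_chat_messages:])

history = [
    {"role": "user", "text": "What is the warranty?"},
    {"role": "assistant", "text": "The warranty lasts one year."},
    {"role": "user", "text": "Does it cover batteries?"},
]
query = build_query(history, query_num_chat_messages=2)
```

Because each knob is exposed as an Inductor hyperparameter, a test suite can sweep combinations of these values and compare quality measures across configurations instead of hand-editing code between runs.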

&lt;p&gt;By leveraging hyperparameters, you can rapidly run in-depth experiments on multiple different configurations and identify the optimal setup for your application.&lt;/p&gt;


&lt;h2&gt;Monitor live executions in real time&lt;/h2&gt;

&lt;p&gt;Understanding how your LLM app performs in real-world conditions is crucial. With Inductor’s &lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;live execution logging&lt;/a&gt;, every detail of your app’s execution is automatically recorded, including inputs, outputs, the specific text snippets retrieved by the RAG system, and more. This enables you to monitor your app’s performance in real time, giving you invaluable insights into user behavior, system efficiency, and areas for improvement.&lt;/p&gt;


&lt;h2&gt;🚀 What’s Next?&lt;/h2&gt;

&lt;p&gt;Ready to build your own AI-powered PDF chatbot? Get started in minutes! Simply visit the &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/chat_with_pdfs" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, clone the Chat With Your PDFs starter template, and follow the easy steps to start developing your LLM application. It’s that simple! 💻✨&lt;/p&gt;

&lt;p&gt;Want to dive deeper? Explore more about &lt;a href="https://inductor.ai/" rel="noopener noreferrer"&gt;Inductor&lt;/a&gt; in our &lt;a href="https://app.inductor.ai/docs/index.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; or &lt;a href="https://inductor.ai/contact-us" rel="noopener noreferrer"&gt;book a demo&lt;/a&gt; to see how we can help supercharge your AI projects! 🛠️🚀&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>pdf</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Developer workflows for building RAG-based LLM apps with systematic evaluation and iteration</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Wed, 04 Sep 2024 19:37:07 +0000</pubDate>
      <link>https://dev.to/inductor_ai/developer-workflows-for-building-rag-based-llm-apps-with-systematic-evaluation-and-iteration-1cd0</link>
      <guid>https://dev.to/inductor_ai/developer-workflows-for-building-rag-based-llm-apps-with-systematic-evaluation-and-iteration-1cd0</guid>
      <description>&lt;p&gt;We are thrilled to announce the release of Inductor’s newest open source LLM application starter template: &lt;strong&gt;Chat With Your PDFs&lt;/strong&gt; &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/chat_with_pdfs" rel="noopener noreferrer"&gt;(GitHub repo here)&lt;/a&gt;. This template is designed to empower developers to quickly build and ship a conversational AI bot that can interact with, answer questions about, and extract information from PDF documents. Whether you’re creating a knowledge base search bot, a customer support bot, a document review assistant, or simply want to build AI that can intelligently answer questions about a collection of PDFs, this starter template provides the foundation that you need.&lt;/p&gt;

&lt;p&gt;Much like our previous starter template releases, this template goes beyond just providing the basic structure for an LLM application. It includes an end-to-end developer workflow tailored to the iterative nature of building production-ready LLM applications. Key features of this template include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Robust test suites&lt;/a&gt;: Ensure the accuracy and reliability of your PDF chatbot with systematic testing that covers various interaction scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;Automated experimentation and optimization&lt;/a&gt;: Quickly iterate on and refine your application by experimenting with different models, prompts, and configurations using Inductor &lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Instant prototyping playground&lt;/a&gt;: A secure, auto-generated environment that enables you to prototype and share your PDF chatbot with your team, fully integrated with your test and experimentation setups.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;Comprehensive logging&lt;/a&gt;: Monitor live traffic, understand user interactions, resolve issues, and continuously improve your application with integrated logging and observability.&lt;/li&gt;
&lt;li&gt;
PDF parsing and embedding: Leverage Unstructured for smart chunking and Sentence-Transformers for embedding generation, transforming static PDFs into dynamic, interactive knowledge sources. Complex documents, including those with images and tables, are processed and stored in a ChromaDB vector database for efficient retrieval based on user queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/chat_with_pdfs" rel="noopener noreferrer"&gt;Chat With Your PDFs&lt;/a&gt; template is your fast track to developing an LLM-powered bot that can handle complex document queries with ease. Whether you’re just starting or looking to scale, this template makes it simple to get up and running in minutes.&lt;/p&gt;

&lt;h2&gt;Beyond consumer AI: why enterprises need custom LLM applications&lt;/h2&gt;

&lt;p&gt;The rise of AI-powered knowledge assistants highlights the growing demand for intelligent systems that can seamlessly access and retrieve information. For example, products like ChatPDF, Liner, Eightify, and Phind leverage different data sources to enable chat assistants capable of querying and interacting with documents, web pages, videos, and research papers. These products enable users to extract key insights and summaries in real time, demonstrating the value of LLM-powered applications in various industries.&lt;/p&gt;

&lt;p&gt;These products exemplify how AI can enhance productivity by transforming static content into dynamic, interactive resources.  Yet, while they may work well for general consumer use cases, enterprises often require more customized and complex solutions to meet their unique demands.  For example, businesses dealing with legal documents, HR policy documents, or specialized training materials may need to handle deeper domain-specific queries, process multiple data formats, and integrate seamlessly with existing internal systems – demands that off-the-shelf products often don’t meet.&lt;/p&gt;

&lt;p&gt;This is where Inductor comes in. Enterprises often need flexible, tailor-made solutions, and Inductor provides a platform built specifically for LLM app development. Our tools enable developers to rapidly create production-ready AI applications that address unique business needs – for example for use cases such as internal knowledge management, legal compliance, or advanced research.&lt;/p&gt;

&lt;h2&gt;Unlocking the power of PDF parsing and embedding&lt;/h2&gt;

&lt;p&gt;At the heart of this starter template is its ability to transform static PDFs into dynamic, conversational experiences. By leveraging &lt;a href="https://github.com/Unstructured-IO/unstructured" rel="noopener noreferrer"&gt;Unstructured&lt;/a&gt; for smart chunking and Sentence-Transformers for embedding generation, the template ensures that even complex documents are easy to interact with.&lt;/p&gt;

&lt;p&gt;Once the PDF content is processed, it's stored in a &lt;a href="https://www.trychroma.com/" rel="noopener noreferrer"&gt;ChromaDB&lt;/a&gt; vector database. This allows the application to efficiently retrieve the most relevant sections based on user queries, ensuring that the LLM has the context it needs to generate accurate, meaningful responses. Using retrieval-augmented generation (RAG), this system seamlessly integrates with an LLM, such as &lt;a href="https://platform.openai.com/docs/models/gpt-4o" rel="noopener noreferrer"&gt;GPT-4o&lt;/a&gt;, to provide users with answers and deeper insights from the documents.&lt;/p&gt;
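&lt;p&gt;The generation half of the RAG loop described above amounts to pasting the retrieved chunks into the model’s prompt so that it answers from the documents rather than from memory. A minimal sketch follows; the actual call to the LLM (e.g., GPT-4o) is omitted, and the prompt wording is illustrative rather than the template’s own.&lt;/p&gt;

```python
def build_rag_prompt(question, retrieved_chunks):
    # Retrieval-augmented generation: ground the model by placing the
    # retrieved document passages directly in the prompt as context.
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + context + "\n\n"
        "Question: " + question
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase."],
)
# `prompt` would then be sent to the LLM to produce the final answer.
```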

&lt;p&gt;Whether you're navigating technical manuals, financial reports, or academic papers, this capability turns static PDFs into a rich source of interactive knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supercharge your LLM app development with Inductor
&lt;/h2&gt;

&lt;p&gt;A key feature that sets the Chat With Your PDFs LLM starter template apart is its seamless integration with Inductor, a powerful platform built to streamline the full development and delivery lifecycle of your LLM application. From rapid prototyping to systematic testing and real-time monitoring, Inductor equips developers with the tools needed to build, refine, and deliver applications at every stage. Let’s dive into some of the standout features this integration provides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instantly generate a playground UI for experimentation
&lt;/h3&gt;

&lt;p&gt;Experimentation is at the heart of LLM application development, and with Inductor, you can spin up a fully interactive &lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;playground&lt;/a&gt; UI with a single command. This flexible, no-code environment enables you to interactively test different queries, iterate on your application’s behavior, and share your Chat With Your PDFs app with teammates and subject matter experts – all without writing additional code. Just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;inductor playground app:chat_with_pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Systematically evaluate your app using test suites
&lt;/h3&gt;

&lt;p&gt;The Chat With Your PDFs template includes a built-in set of &lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;test suites&lt;/a&gt; designed to test your application’s performance across various scenarios, providing actionable feedback so that you can quickly identify and address any issues. Each test suite includes pre-configured test cases, quality measures, and hyperparameters for evaluation of your app. An example of the results of running an included test suite can be found &lt;a href="https://app.inductor.ai/test-suite/run/2616" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whether you're validating how your LLM app handles complex queries or testing its response accuracy across different PDFs, Inductor enables you to rapidly do the evaluations that you need to improve your app efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rapidly refine your app with Inductor hyperparameters
&lt;/h3&gt;

&lt;p&gt;Inductor &lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt; enable you to rapidly test and compare different configurations of your LLM application. Hyperparameters are fully integrated in the Inductor platform where they can be leveraged in playgrounds, test suites, and live A/B testing. Using hyperparameters in playgrounds allows for controlled and collaborative interactive experimentation. Using hyperparameters in &lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;test suites&lt;/a&gt; allows for systematic selection of optimized configurations.&lt;/p&gt;

&lt;p&gt;Here are some examples of hyperparameters included in the Chat With Your PDFs template:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;query_num_chat_messages&lt;/code&gt;: Adjusts the number of previous chat messages used in the query for retrieving relevant information from your vector database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;query_result_num&lt;/code&gt;: Controls how many results are retrieved from the vector database for each query.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;query_filter_out_program_messages&lt;/code&gt;: Determines whether or not chatbot-generated messages are filtered out from the query sent to the vector database.&lt;/li&gt;
&lt;/ul&gt;
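&lt;p&gt;As an illustration of how the first and third of these knobs might shape the retrieval query, here is a minimal sketch. The hyperparameter names come from the template, but the function body is our own illustration, not the template’s actual code.&lt;/p&gt;

```python
def build_vector_query(
    chat: list[dict],                           # [{"role": "user"|"assistant", "content": ...}]
    query_num_chat_messages: int = 3,
    query_filter_out_program_messages: bool = True,
) -> str:
    """Illustrative sketch: assemble the text sent to the vector database
    from recent chat history, governed by two of the template's
    hyperparameters (names from the template; logic is our own)."""
    messages = chat
    if query_filter_out_program_messages:
        # Drop chatbot-generated turns so only user text drives retrieval.
        messages = [m for m in messages if m["role"] == "user"]
    # Keep only the most recent N messages.
    recent = messages[-query_num_chat_messages:]
    return "\n".join(m["content"] for m in recent)

chat = [
    {"role": "user", "content": "What does chapter 2 cover?"},
    {"role": "assistant", "content": "Chapter 2 covers installation."},
    {"role": "user", "content": "And the troubleshooting steps?"},
]
query = build_vector_query(chat, query_num_chat_messages=2)
```

&lt;p&gt;The remaining knob, &lt;code&gt;query_result_num&lt;/code&gt;, would simply be passed to the vector database as the number of results to return.&lt;/p&gt;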

&lt;p&gt;By leveraging &lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;hyperparameters&lt;/a&gt;, you can rapidly run in-depth experiments on multiple different configurations and identify the optimal setup for your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitor live executions in real time
&lt;/h3&gt;

&lt;p&gt;Understanding how your LLM app performs in real-world conditions is crucial. With Inductor’s &lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;live execution logging&lt;/a&gt;, every detail of your app’s execution is automatically recorded, including inputs, outputs, the specific text snippets retrieved by the RAG system, and more. This enables you to monitor your app’s performance in real time, giving you invaluable insights into user behavior, system efficiency, and areas for improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 What’s Next?
&lt;/h2&gt;

&lt;p&gt;Ready to build your own AI-powered PDF chatbot? Get started in minutes! Simply visit the &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/chat_with_pdfs" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, clone the Chat With Your PDFs starter template, and follow the easy steps to start developing your LLM application. It’s that simple! 💻✨&lt;/p&gt;

&lt;p&gt;Want to dive deeper? Explore more about &lt;a href="https://inductor.ai/" rel="noopener noreferrer"&gt;Inductor&lt;/a&gt; in our &lt;a href="https://app.inductor.ai/docs/index.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; or &lt;a href="https://inductor.ai/contact-us" rel="noopener noreferrer"&gt;book a demo&lt;/a&gt; to see how we can help supercharge your AI projects! 🛠️🚀&lt;/p&gt;

</description>
      <category>llm</category>
      <category>pdf</category>
      <category>chatgpt</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Build production-grade LLM apps faster and smarter with Inductor and MongoDB vector search</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Fri, 23 Aug 2024 21:32:55 +0000</pubDate>
      <link>https://dev.to/inductor_ai/build-production-grade-llm-apps-faster-and-smarter-with-inductor-and-mongodb-vector-search-4414</link>
      <guid>https://dev.to/inductor_ai/build-production-grade-llm-apps-faster-and-smarter-with-inductor-and-mongodb-vector-search-4414</guid>
      <description>&lt;p&gt;We’re thrilled to announce &lt;a href="https://cloud.mongodb.com/ecosystem/inductor" rel="noopener noreferrer"&gt;Inductor’s partnership with MongoDB&lt;/a&gt; and release our latest open-source LLM application starter template, designed for a documentation Q&amp;amp;A bot leveraging MongoDB vector search. &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/documentation_qa_mongodb_atlas" rel="noopener noreferrer"&gt;(GitHub repo here)&lt;/a&gt;. This template is designed to streamline the development process for RAG-based (Retrieval-Augmented Generation based) LLM applications by leveraging the powerful vector search functionality of MongoDB, along with a seamless Inductor integration for rapid prototyping, testing, experimentation, and monitoring.&lt;/p&gt;

&lt;p&gt;This starter template not only provides the foundational scaffolding for a RAG-based LLM application but also incorporates an end-to-end developer workflow optimized for rapid iterative development and delivery. With MongoDB vector search and Inductor, you can efficiently implement and optimize data retrieval, and ensure the quality and performance of your LLM application.&lt;/p&gt;

&lt;p&gt;Key components of this integrated workflow include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/" rel="noopener noreferrer"&gt;MongoDB Vector Search&lt;/a&gt;: MongoDB vector search enables fast, scalable data retrieval in order to bring your unique data into your LLM application via retrieval-augmented generation (RAG).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Advanced Test Suites&lt;/a&gt;: Systematically test your LLM application to ensure quality and reliability.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#using-hyperparameters" rel="noopener noreferrer"&gt;Hyperparameter Optimization&lt;/a&gt;: Automate experimentation to rapidly find the optimal design for your LLM app, considering factors like model choice, prompt configuration, and retrieval augmentation.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Auto-Generated Playground&lt;/a&gt;: Instantly and securely share a prototyping environment that integrates with test suites and hyperparameters for collaborative development.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#logging-intermediate-values" rel="noopener noreferrer"&gt;Integrated Logging&lt;/a&gt;: Monitor live traffic to understand usage, resolve issues, facilitate A/B testing, and continually improve your application.&lt;/li&gt;
&lt;/ul&gt;
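&lt;p&gt;To make the retrieval component concrete, here is a hedged sketch of the aggregation pipeline that an Atlas vector search query typically uses. The field names follow MongoDB’s &lt;code&gt;$vectorSearch&lt;/code&gt; stage; the index name (&lt;code&gt;vector_index&lt;/code&gt;) and embedding field (&lt;code&gt;embedding&lt;/code&gt;) are placeholders you would configure in your own collection.&lt;/p&gt;

```python
def vector_search_pipeline(query_vector: list[float], k: int = 4, num_candidates: int = 100) -> list[dict]:
    """Build an Atlas Vector Search aggregation pipeline. Stage field names
    follow MongoDB's $vectorSearch documentation; "vector_index" and
    "embedding" are placeholder names for your own index and field."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",        # placeholder Atlas index name
                "path": "embedding",            # placeholder embedding field
                "queryVector": query_vector,
                "numCandidates": num_candidates,
                "limit": k,
            }
        },
        # Keep only what the LLM prompt needs, plus the similarity score.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = vector_search_pipeline([0.12, -0.08, 0.33])
# results = collection.aggregate(pipeline)  # with a pymongo Atlas collection
```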

&lt;p&gt;This new template for developing an LLM-powered documentation Q&amp;amp;A bot takes minutes to set up. With minimal effort, you can integrate and configure the application to work with your own documentation, and benefit from MongoDB's efficient and scalable vector search capabilities.&lt;/p&gt;


&lt;h2&gt;
  
  
  Benefits of using MongoDB vector search for RAG-based LLM applications
&lt;/h2&gt;

&lt;p&gt;Integrating MongoDB vector search into your RAG-based (Retrieval-Augmented Generation) LLM application offers a number of advantages, enabling efficient, scalable, production-grade data retrieval so that your LLM application can easily and rapidly operate on your unique data. Here are the key benefits:&lt;/p&gt;

&lt;h3&gt;
  
  
  Efficient large-scale data retrieval
&lt;/h3&gt;

&lt;p&gt;MongoDB's vector search capabilities enable the efficient handling of large datasets. By leveraging high-dimensional vectors to represent data, MongoDB allows for fast and accurate retrieval of relevant information, improving the performance of your LLM applications. This is particularly beneficial for RAG systems that require quick access to vast amounts of context to generate accurate responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved search accuracy
&lt;/h3&gt;

&lt;p&gt;With vector search, MongoDB can perform similarity searches that go beyond traditional keyword-based methods. This means your RAG LLM application can retrieve contextually relevant documents even if the exact keywords aren’t present. This leads to more accurate and meaningful responses, enhancing the overall user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability and flexibility
&lt;/h3&gt;

&lt;p&gt;MongoDB's architecture is designed for scalability, allowing your application to grow seamlessly as your data and user base expand. Whether you're dealing with a few thousand documents or millions, MongoDB can scale to meet your needs without compromising performance. Additionally, its flexible schema supports a variety of data types, making it easier to integrate diverse data sources.&lt;/p&gt;


&lt;h2&gt;
  
  
  Turbocharge development speed with a seamless integration with Inductor
&lt;/h2&gt;

&lt;p&gt;As this starter template demonstrates, integrating Inductor combines MongoDB’s vector search capabilities with the ability to rapidly prototype, test, experiment, and monitor your LLM application. This enables a rapid, streamlined progression from prototype to production, significantly speeding up time to market for your LLM applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced experimentation and optimization
&lt;/h3&gt;

&lt;p&gt;As seen in this starter template, MongoDB vector search can be easily combined with Inductor's &lt;a href="https://app.inductor.ai/docs/quickstart.html#using-hyperparameters" rel="noopener noreferrer"&gt;hyperparameter&lt;/a&gt; optimization tools. This enables you to rapidly and systematically experiment with different retrieval configurations, model parameters, and data representations to find the optimal setup for your application. Such iterative development ensures that you can continually improve your LLM application’s accuracy and efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Inductor Custom Playgrounds&lt;/a&gt; enable you to auto-generate a powerful, instantly shareable playground for your LLM app with a single CLI command - and run it within your environment. Playgrounds provide a developer-first approach to prototype and iterate on LLM programs fast, as well as loop collaborators (including less-technical collaborators) into your development process, and get their feedback early and often.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rigorous and continuous evaluations
&lt;/h3&gt;

&lt;p&gt;An &lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Inductor test suite&lt;/a&gt; is included with the documentation Q&amp;amp;A bot application to evaluate its performance and enable you to systematically test and improve. The included test suite consists of a set of test cases, each containing a set of input (i.e., argument) values for the LLM application and an example of an output value that should be considered high-quality or correct. The test suite also includes a set of quality measures specifying how to evaluate the output of the LLM application. Quality measures can be programmatic, human, or LLM-powered. Using Inductor test suites you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rapidly customize quality evaluation for your use case&lt;/li&gt;
&lt;li&gt;Auto-generate shareable UIs for human evals, and automate with rigorous LLM-powered evals&lt;/li&gt;
&lt;li&gt;Construct, evolve, and share test cases&lt;/li&gt;
&lt;li&gt;Automatically orchestrate test suite execution&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Robust monitoring and logging
&lt;/h3&gt;

&lt;p&gt;With &lt;a href="https://app.inductor.ai/docs/quickstart.html#log-intermediate-values" rel="noopener noreferrer"&gt;integrated logging capabilities&lt;/a&gt;, you can monitor search queries and retrieval performance in real time. This helps in identifying bottlenecks, understanding user behavior, and resolving issues quickly. The detailed logs also facilitate A/B testing, enabling data-driven decisions to further enhance your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous and cost-efficient improvement
&lt;/h3&gt;

&lt;p&gt;By leveraging Inductor’s LLM app development platform and MongoDB vector search within your RAG LLM application, you can achieve a higher level of performance, accuracy, and scalability. This powerful combination ensures that your LLM applications are well-equipped to handle complex queries and provide users with precise, contextually relevant responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  The documentation Q&amp;amp;A bot application
&lt;/h2&gt;

&lt;p&gt;The LLM-powered documentation Q&amp;amp;A bot leveraging MongoDB vector search &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/documentation_qa_mongodb_atlas" rel="noopener noreferrer"&gt;(GitHub repo here)&lt;/a&gt; is a RAG-based LLM application that answers questions using one or more Markdown documents as its source of context. This starter template is intended for use cases, such as Q&amp;amp;A on developer documentation, in which you have one or more Markdown documents over which you would like to provide a question-answering (Q&amp;amp;A) capability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;To get started in minutes, visit the &lt;a href="https://github.com/inductor-hq/llm-toolkit/tree/main/starter_templates/documentation_qa_mongodb_atlas" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, clone the documentation Q&amp;amp;A starter template leveraging MongoDB vector search, and follow the simple steps provided to start systematically developing your LLM application.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>mongodb</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Open-sourcing Inductor LLM app starter templates: An out-of-the-box systematic approach for developing LLM applications</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Wed, 07 Aug 2024 18:13:00 +0000</pubDate>
      <link>https://dev.to/inductor_ai/open-sourcing-inductor-llm-app-starter-templates-an-out-of-the-box-systematic-approach-for-developing-llm-applications-2f6m</link>
      <guid>https://dev.to/inductor_ai/open-sourcing-inductor-llm-app-starter-templates-an-out-of-the-box-systematic-approach-for-developing-llm-applications-2f6m</guid>
      <description>&lt;p&gt;We are excited to announce that we’ve open-sourced Inductor’s first LLM application starter template &lt;a href="https://github.com/inductor-hq/llm-toolkit" rel="noopener noreferrer"&gt;(GitHub repo here)&lt;/a&gt;. This template will make it easy for you to get started with a systematic and iterative development process for building and shipping a RAG-based LLM application. Most templates show you how to get started by providing the simple scaffolding for an LLM application. Beyond that, this template also includes an end-to-end developer workflow optimized for the iterative development required to confidently and efficiently develop a production-grade LLM application. Some key components of the integrated developer workflow are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test suites to systematically test the LLM application and ensure its quality.&lt;/li&gt;
&lt;li&gt;Hyperparameters to automate experimentation and rapidly find the LLM app design that delivers the results that you need, across choice of model, prompt, retrieval augmentation, and more.&lt;/li&gt;
&lt;li&gt;An auto-generated playground that can be instantly and securely shared for prototyping, and that integrates with test suites and hyperparameters.&lt;/li&gt;
&lt;li&gt;Integrated logging for monitoring your live traffic in order to understand usage, resolve issues, facilitate A/B testing, and further improve the application. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first template that we are now releasing is a getting started template for developing an LLM-powered documentation Q&amp;amp;A bot. It takes just minutes to get started, and you can easily integrate and configure the application to work with your own sources of documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why we’re building LLM application starter templates
&lt;/h3&gt;

&lt;p&gt;As the demand for LLM-powered applications and product features grows, developers and their teams find themselves in need of a comprehensive and streamlined approach to their end-to-end development lifecycle. In the world of traditional application development, there is a well-established development lifecycle and a clear methodology for testing quality. Developers can rely on a structured process that guides them from concept to production, ensuring robust and reliable applications. However, when it comes to developing applications with large language models (LLMs), the path is far less straightforward. LLM application development requires a more experimental and iterative approach, where developers must continually refine and optimize their applications to achieve the desired performance.&lt;/p&gt;

&lt;p&gt;This iterative nature presents several challenges, and developers need ways to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rapidly prototype with stakeholders&lt;/li&gt;
&lt;li&gt;Systematically evaluate their LLM application or feature&lt;/li&gt;
&lt;li&gt;Identify and implement improvements&lt;/li&gt;
&lt;li&gt;Observe behavior in production and take appropriate action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without the right tools and workflows, navigating this process is time-consuming and complex.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our solution: jumpstart systematic LLM app development
&lt;/h3&gt;

&lt;p&gt;Enter Inductor’s LLM application starter templates. Designed to address the unique challenges of LLM application development, each template is open-source and provides an easy path to get started quickly and efficiently. Each template includes the necessary scaffolding to facilitate rapid prototyping as well as streamline the progression from prototype to production.&lt;/p&gt;

&lt;p&gt;Here’s what you can expect from each Inductor LLM app starter template:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application scaffolding: A robust foundation for your LLM application, ensuring you have all the essential components to build upon.&lt;/li&gt;
&lt;li&gt;Out-of-the-box UI for rapid prototyping: With a single CLI command, you can start an auto-generated and securely shareable user interface that enables you to quickly prototype and gather feedback from stakeholders, via Inductor playgrounds.&lt;/li&gt;
&lt;li&gt;Test suite scaffolding for easy evaluation-driven development: Each template includes an Inductor test suite that can be customized for your particular use case.&lt;/li&gt;
&lt;li&gt;Experimentation scaffolding for systematic improvement: Each template includes built-in touchpoints for rapid and automated experimentation, which can be used with Inductor to automate and orchestrate testing of multiple different app variants in order to further improve your app.&lt;/li&gt;
&lt;li&gt;Production logging integration for easy observability: Pre-built logging integration to maintain visibility and monitor your application’s performance in a production environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Inductor platform, if used in conjunction with each starter template, provides the tools and systems needed to bring successful production-grade LLM applications to market:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zbnkbdls--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b3b92b9b4c8f50ffcdef12_66b3b929507e673057b9c1bd_Screenshot%2525202024-08-07%252520at%25252012.12.49%2525E2%252580%2525AFPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zbnkbdls--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b3b92b9b4c8f50ffcdef12_66b3b929507e673057b9c1bd_Screenshot%2525202024-08-07%252520at%25252012.12.49%2525E2%252580%2525AFPM.png" alt="Without Inductor vs With Inductor" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation Q&amp;amp;A bot starter template
&lt;/h3&gt;

&lt;p&gt;The LLM-powered documentation Q&amp;amp;A bot &lt;a href="https://github.com/inductor-hq/llm-toolkit" rel="noopener noreferrer"&gt;(GitHub repo here)&lt;/a&gt; is a RAG-based LLM application that answers questions using one or more Markdown documents as its source of context. This starter template is intended for use cases, such as Q&amp;amp;A on developer documentation, in which you have one or more Markdown documents over which you would like to provide a question-answering (Q&amp;amp;A) capability.&lt;/p&gt;

&lt;h4&gt;
  
  
  App architecture
&lt;/h4&gt;

&lt;p&gt;The application is implemented in Python and includes two main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An ETL (Extract, Transform, Load) process that parses, chunks, and embeds the relevant Markdown files and populates a vector database.  (By default, the starter template uses an included sample Markdown file, and it can easily be configured to instead utilize your own Markdown files.)&lt;/li&gt;
&lt;li&gt;The main application entrypoint, which takes a question as input, retrieves relevant content from the vector database, and uses an LLM to generate an answer to the question.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Specifically, the ETL process ingests one or more Markdown files, splits them into chunks by Markdown sections, and converts each section to an embedding using Sentence-Transformers' all-MiniLM-L6-v2 model (the default model for Chroma).  The embeddings, along with their associated chunks and metadata, are stored locally in a Chroma vector database.  The app can also easily be modified to instead utilize a different vector database.&lt;/p&gt;

&lt;p&gt;The main application entrypoint consists of a function that takes a question as input, queries the vector database to retrieve the most relevant Markdown content based on the question, and then uses the OpenAI “gpt-4o” model to generate and return an answer to the question.  The app can easily be modified to utilize a different LLM or LLM provider.&lt;/p&gt;
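&lt;p&gt;As a rough illustration of the ETL’s chunking step (our own sketch, not the template’s actual code), splitting a Markdown document into one chunk per heading-delimited section can look like this:&lt;/p&gt;

```python
import re

def split_markdown_sections(markdown: str) -> list[str]:
    """Illustrative sketch of chunking by Markdown sections: start a new
    chunk at every heading line. Each chunk would then be embedded
    (e.g., with all-MiniLM-L6-v2) and stored in the vector database."""
    chunks, current = [], []
    for line in markdown.splitlines():
        # A line starting with 1-6 '#' characters begins a new section.
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]

doc = "# Install\npip install demo\n## Usage\nrun demo\n"
sections = split_markdown_sections(doc)
```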

&lt;h4&gt;
  
  
  Scaffolding for an effective development workflow: Inductor integration
&lt;/h4&gt;

&lt;p&gt;Going from prototype to production with an LLM application for your particular use case requires iterative testing, experimentation, and collaboration, as well as live observability so that you are not flying blind when you ship.  To enable doing this rapidly and reliably, Inductor provides a platform for prototyping, evaluating, improving, and observing your LLM app.  This starter template includes pre-built scaffolding that leverages Inductor’s capabilities to enable you to iterate quickly, ship reliably, and collaborate effectively.&lt;/p&gt;

&lt;p&gt;Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Test Suites&lt;/a&gt;: Easily, rigorously, and continuously test your LLM application with Inductor’s test suites and CLI, to systematically find shortcomings in behavior, accuracy, or cost-efficacy.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#use-hyperparameters-to-turbocharge-your-iteration" rel="noopener noreferrer"&gt;Hyperparameters&lt;/a&gt;: Dramatically accelerate your experimentation and optimization process with Inductor hyperparameters, to rapidly find the LLM app design that eliminates any shortcomings.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Playgrounds&lt;/a&gt;: Quickly prototype and collaborate using a playground that is instantly auto-generated for your LLM app, can be easily and securely shared, and integrates seamlessly with your test suites and hyperparameters.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;Logging&lt;/a&gt;: Gain deep insights into usage, detect and resolve issues, and continuously improve your application with Inductor’s rich, automated production logging.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these features, Inductor empowers you to build, refine, and deliver your LLM applications more effectively than ever before.&lt;/p&gt;

&lt;h4&gt;
  
  
  Test suites
&lt;/h4&gt;

&lt;p&gt;An &lt;a href="https://app.inductor.ai/docs/quickstart.html#test-suites" rel="noopener noreferrer"&gt;Inductor test suite&lt;/a&gt; is included alongside the documentation Q&amp;amp;A bot application to evaluate its performance and enable you to systematically test and improve. The included test suite consists of a set of test cases, each containing a set of input (i.e., argument) values for your LLM application and an example of an output value that should be considered high-quality or correct. The test suite also includes a set of quality measures specifying how to evaluate the output of your LLM program. Quality measures can be programmatic, human, or LLM-powered. Using Inductor test suites you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rapidly customize quality evaluation for your use case&lt;/li&gt;
&lt;li&gt;Auto-generate shareable UIs for human evals, and automate with rigorous LLM-powered evals&lt;/li&gt;
&lt;li&gt;Construct, evolve, and share test cases&lt;/li&gt;
&lt;li&gt;Automatically orchestrate test suite execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within the test suite that is part of this starter template, the included set of test cases can be split into the following categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Common questions with examples of high quality answers or target outputs&lt;/li&gt;
&lt;li&gt;Unanswerable questions&lt;/li&gt;
&lt;li&gt;Out of scope questions&lt;/li&gt;
&lt;li&gt;Malicious questions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, test cases can easily be added by modifying the &lt;code&gt;test_cases.yaml&lt;/code&gt; file within the starter template.&lt;/p&gt;

&lt;p&gt;Along with these test cases are quality measures. This template uses &lt;a href="https://app.inductor.ai/docs/quickstart.html#add-an-llm-powered-quality-measure-to-your-test-suite" rel="noopener noreferrer"&gt;LLM-powered quality measures&lt;/a&gt; to assess:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can the question be answered with the provided context?&lt;/li&gt;
&lt;li&gt;Is the target output contained in the answer provided?&lt;/li&gt;
&lt;/ul&gt;
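&lt;p&gt;To illustrate the shape of these quality measures (the template’s actual measures are LLM-powered; the names, logic, and prompt wording below are our own illustration):&lt;/p&gt;

```python
def target_in_answer(answer: str, target: str) -> bool:
    """Simplified programmatic stand-in for the second quality measure:
    is the target output contained in the answer? (An LLM-powered
    version of this measure would also tolerate paraphrasing.)"""
    return target.lower().strip() in answer.lower()

def answerable_grading_prompt(question: str, context: str) -> str:
    # Sketch of the first measure: the prompt a grading LLM might receive
    # to judge whether the question is answerable from the retrieved
    # context. The wording is illustrative, not the template's prompt.
    return (
        "Answer YES or NO: can the following question be answered using "
        f"only the context provided?\n\nContext:\n{context}\n\nQuestion: {question}"
    )

ok = target_in_answer("Run `pip install inductor` to get started.", "pip install inductor")
```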

&lt;p&gt;Inductor enables easily running the test suite and viewing its results, an example of which is as follows:&lt;/p&gt;

&lt;p&gt;‍&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WPL7XSA5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b4faf2f5735f6662cd9fb7_66b4fad75d8f1e7412bf4d91_image1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WPL7XSA5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b4faf2f5735f6662cd9fb7_66b4fad75d8f1e7412bf4d91_image1.png" alt="Test suites" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Hyperparameters
&lt;/h4&gt;

&lt;p&gt;Improving an LLM application requires iterative experimentation: testing different variants of your app’s design to find the one that yields the desired behavior and quality. For example, this can include changing the prompt content, adjusting the prompt construction, selecting different models, tweaking model settings (e.g., temperature), and refining retrieval augmentation techniques. &lt;a href="https://app.inductor.ai/docs/quickstart.html#using-hyperparameters" rel="noopener noreferrer"&gt;Inductor hyperparameters&lt;/a&gt; enable you to systematically and rapidly test and evaluate different configurations of your LLM application. This capability helps you assess the quality and cost-effectiveness of various setups, enabling rapid experimentation while maintaining organization and rigor.&lt;/p&gt;

&lt;p&gt;In summary, hyperparameters enable you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate your experimentation in order to rapidly find the LLM app design that delivers the results that you need, across choice of model, prompt, retrieval augmentation, or anything else.&lt;/li&gt;
&lt;li&gt;Automatically version and track all experiment results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This starter template comes pre-built with two key hyperparameters (and you can also easily add more based on your needs):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rephrasing vector database query:&lt;/strong&gt; This hyperparameter controls whether to use the original user question to query the vector database or to rephrase the question to generate a more informative and relevant vector database query. Rephrasing can incorporate additional keywords and phrases to improve retrieval accuracy. However, this strategy may introduce trade-offs, such as increased latency and higher costs due to the additional LLM API call required for rephrasing. By using this hyperparameter, you can easily experiment with and evaluate the effectiveness of using the original versus the rephrased question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Number of contextual results retrieved from the vector database:&lt;/strong&gt; This hyperparameter sets the number of results to be retrieved from the vector database. Adjusting this setting allows you to control the breadth of information retrieved, which can impact the comprehensiveness and relevance of the responses provided by your documentation Q&amp;amp;A bot.&lt;/p&gt;
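&lt;p&gt;Conceptually, a test-suite run covers the cross product of all hyperparameter values. As a minimal, self-contained sketch (plain Python, not the Inductor API; the names here are illustrative), the configurations implied by the two hyperparameters above can be enumerated like this:&lt;/p&gt;

```python
# Illustrative sketch (not the Inductor API): enumerate every
# configuration implied by the two starter-template hyperparameters.
import itertools

hyperparameters = {
    "rephrase_query": [False, True],      # original vs. rephrased vector-DB query
    "num_retrieved_results": [2, 4, 8],   # breadth of retrieved context
}

def all_configurations(hparams):
    """Yield one dict per combination of hyperparameter values."""
    names = list(hparams)
    for values in itertools.product(*(hparams[n] for n in names)):
        yield dict(zip(names, values))

configs = list(all_configurations(hyperparameters))
# 2 choices x 3 choices = 6 configurations, each of which a test-suite
# run would evaluate against every test case.
print(len(configs))  # 6
```

This is why automating the sweep matters: even two modest hyperparameters multiply into six app variants to evaluate.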

&lt;p&gt;When you run the test suite included in the starter template, Inductor will automatically run and evaluate your LLM app on all included test cases for all combinations of values of the hyperparameters, so that you can easily and rapidly identify the best hyperparameter configuration for your app (by simply clicking on the “Hparam summary” button seen in the screenshot above).  Additionally, as seen in the next section below, you can also interactively experiment with different hyperparameter values via a Custom Playground.&lt;/p&gt;

&lt;p&gt;Therefore, by leveraging hyperparameters, you can rapidly evolve your LLM application to achieve the desired balance between performance, accuracy, and cost, ultimately enhancing the user experience of your documentation Q&amp;amp;A bot in a cost-efficient manner.&lt;/p&gt;


&lt;h4&gt;
  
  
  Playgrounds
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/docs/quickstart.html#start-a-custom-playground" rel="noopener noreferrer"&gt;Inductor Custom Playgrounds&lt;/a&gt; enable you to auto-generate a powerful, instantly shareable playground for your LLM app with a single CLI command - and run it within your environment. Playgrounds provide a developer-first approach to prototype and iterate on LLM programs fast, as well as loop collaborators (including less-technical collaborators) into your development process, and get their feedback early and often.&lt;/p&gt;

&lt;p&gt;In particular, with Custom Playgrounds you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto-generate a custom playground UI for your LLM app&lt;/li&gt;
&lt;li&gt;Run securely in your environment, with your code and data sources&lt;/li&gt;
&lt;li&gt;Share instantly&lt;/li&gt;
&lt;li&gt;Iteratively develop test suites for systematic evaluation and improvement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All interactive executions of your LLM program in your playground are automatically logged, so that you can easily replay them, and never lose your work.&lt;/p&gt;

&lt;p&gt;Inductor enables you to start a playground for your documentation Q&amp;amp;A bot with a single CLI command.  An example of such a playground is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QMnydjt7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b4faf2f5735f6662cd9fbe_66b4fae2f0acc633b8763b77_image2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QMnydjt7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b4faf2f5735f6662cd9fbe_66b4fae2f0acc633b8763b77_image2.png" alt="Playgrounds" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers and domain experts can interactively experiment with different combinations of hyperparameters in playgrounds to rapidly prototype and collaborate:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g38xR6RA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b4faf2f5735f6662cd9fb4_66b4faeac2612638deae0c05_image3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g38xR6RA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/652478ec9aefaf7481a1b63e/66b4faf2f5735f6662cd9fb4_66b4faeac2612638deae0c05_image3.png" alt="Collaborate" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inductor additionally makes it easy to add current and previous playground work to test suites, enabling you to rapidly transition from prototyping to systematic evaluation – and ultimately to production – by continuously improving your test suites and evaluating changes at every essential point of testing and validation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Live logging and monitoring
&lt;/h4&gt;

&lt;p&gt;When your LLM program is running in production, live monitoring becomes essential for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensuring intended behavior: Continuous monitoring helps confirm that your application is performing as expected and delivering useful, accurate results.&lt;/li&gt;
&lt;li&gt;Issue detection and resolution: By keeping an eye on real-time operations, you can quickly identify and fix any emerging issues before they escalate.&lt;/li&gt;
&lt;li&gt;Usage analysis for improvement: Understanding how users actually interact with your application enables you to gather valuable insights and make data-driven enhancements.&lt;/li&gt;
&lt;li&gt;Feedback loop to development: Insights and issues identified via live monitoring can be fed back into the development process, enabling continuous improvement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To facilitate this, the starter template includes the &lt;a href="https://app.inductor.ai/docs/quickstart.html#monitor-your-llm-program-on-live-traffic" rel="noopener noreferrer"&gt;Inductor decorator&lt;/a&gt;, which automatically logs multiple elements of the LLM application’s behavior. This enables you to view and analyze many aspects of the app’s behavior, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User input:&lt;/strong&gt; Captures the queries entered by users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM output:&lt;/strong&gt; Logs the responses generated by the app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency:&lt;/strong&gt; Tracks response times to measure and ensure responsiveness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG system elements:&lt;/strong&gt; Monitors components of the retrieval-augmented generation (RAG) system, such as the set of text snippets retrieved from the vector database in order to respond to any given query.&lt;/li&gt;
&lt;/ul&gt;
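&lt;p&gt;As a rough mental model of what decorator-based logging captures (plain Python, not the actual Inductor decorator; all names here are illustrative), wrapping the app’s entrypoint is enough to record input, output, and latency for every execution:&lt;/p&gt;

```python
# Illustrative sketch of decorator-based execution logging (this is a
# hypothetical stand-in, not the Inductor decorator).
import functools
import time

execution_log = []

def log_executions(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        # Record the user input, the LLM output, and the latency.
        execution_log.append({
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_seconds": time.perf_counter() - start,
        })
        return result
    return wrapper

@log_executions
def answer_question(question):
    # Stand-in for the real RAG retrieval plus LLM call.
    return f"Answer to: {question}"

answer_question("How do I configure retries?")
print(execution_log[0]["output"])  # Answer to: How do I configure retries?
```

A real system would also capture RAG internals (e.g., the retrieved text snippets) and ship records to a backing service rather than an in-memory list, but the wrap-once, log-everything shape is the same.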

&lt;p&gt;With this comprehensive live monitoring capability, you can maintain high standards of reliability, performance, and user satisfaction for your LLM applications.&lt;/p&gt;


&lt;h3&gt;
  
  
  What next?
&lt;/h3&gt;

&lt;p&gt;To get started in minutes, visit the &lt;a href="https://github.com/inductor-hq/llm-toolkit" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, clone the documentation Q&amp;amp;A starter template, and follow the simple steps provided to start systematically developing your LLM application.&lt;/p&gt;

&lt;p&gt;You can also learn more about &lt;a href="https://inductor.ai/" rel="noopener noreferrer"&gt;Inductor&lt;/a&gt; by visiting our &lt;a href="https://app.inductor.ai/docs/index.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; or &lt;a href="https://inductor.ai/contact-us" rel="noopener noreferrer"&gt;booking a demo&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>chatgpt</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Inductor Custom Playgrounds: A developer-first way to experiment and collaborate on LLM app development</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Tue, 25 Jun 2024 18:48:03 +0000</pubDate>
      <link>https://dev.to/inductor_ai/inductor-custom-playgrounds-a-developer-first-way-to-experiment-and-collaborate-on-llm-app-development-3ja3</link>
      <guid>https://dev.to/inductor_ai/inductor-custom-playgrounds-a-developer-first-way-to-experiment-and-collaborate-on-llm-app-development-3ja3</guid>
      <description>&lt;p&gt;The only way to build a high-quality LLM application is to get hands-on, iterate, and experiment your way to success, powered by collaboration and data; it is important to then also do rigorous evaluation.&lt;/p&gt;

&lt;p&gt;At Inductor, we’re building the tools that developers need to do this, so that you can build and ship production-ready LLM apps far more quickly, easily, and systematically – whether you’re creating an AI chatbot, a documentation assistant, a text-to-SQL feature, or something else powered by LLMs.  We’re now rolling out a new capability that we’re super-excited about: Custom Playgrounds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.inductor.ai/signup" rel="noopener noreferrer"&gt;Auto-generate your custom playground by signing up here!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inductor Custom Playgrounds enable you to auto-generate a powerful, instantly shareable playground for your LLM app with a single CLI command - and run it within your environment.&lt;/strong&gt;  This makes it easy to loop other (even non-technical) team members into your development process, and also accelerate your own iteration speed.&lt;/p&gt;

&lt;p&gt;By leveraging Custom Playgrounds, you can turbocharge your development process, reduce time to market, and create more effective LLM applications and features. Watch our demo video to learn more or read on below!&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/967057001" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Custom Playgrounds?
&lt;/h2&gt;

&lt;p&gt;Inductor’s Custom Playgrounds are purpose-built for developers’ needs, and offer significant advantages over traditional LLM playgrounds with respect to productivity, usability, and collaboration. Custom Playgrounds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrate directly with your code and auto-generate customized UI tailored to your specific LLM application.&lt;/li&gt;
&lt;li&gt;Run directly against your environment, facilitating use of private data and internal systems.&lt;/li&gt;
&lt;li&gt;Enable robust, secure collaboration –  empowering teams to share work, collect feedback, and leverage collective expertise directly within the playground (e.g., for prompt engineering and more).&lt;/li&gt;
&lt;li&gt;Accelerate development through features like UI auto-generation, hot-reloading, auto-logging, and integrated test suite management – streamlining the iteration process and enabling rapid prototyping and systematic evaluation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These enhancements make Custom Playgrounds a more efficient, flexible, and powerful tool for developing and refining LLM applications or features compared to traditional LLM playgrounds and DIY interfaces.&lt;/p&gt;

&lt;p&gt;⚠️ Creating DIY capabilities demands greater effort, entails higher risk, and results in increased long-term total cost of ownership (TCO).&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started with one command
&lt;/h2&gt;

&lt;p&gt;Simply execute a single Inductor CLI command to auto-generate a playground UI for your LLM application. Run securely in your environment, using your data, systems, and programmatic logic.&lt;/p&gt;

&lt;p&gt;To auto-generate a playground for your LLM app, just execute the following commands in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ pip install inductor&lt;br&gt;
$ inductor playground my.module:my_function&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;where “my.module:my_function” is the fully qualified name of a Python function that is the entrypoint to your LLM app.  (No modifications to your code required!)&lt;/p&gt;

&lt;p&gt;If you’re building a multi-turn chat app, add a single inductor.ChatSession type annotation to your LLM app’s entrypoint function before you run the playground CLI command to also get chat-specific capabilities.&lt;/p&gt;
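&lt;p&gt;For concreteness, here is a minimal sketch of what such an entrypoint might look like (the module path, function name, and body are placeholders invented for this example; only the “function takes user input, returns the app’s response” shape comes from the text above):&lt;/p&gt;

```python
# my/module.py -- a placeholder entrypoint that the command
# `inductor playground my.module:my_function` would point at.
def my_function(question: str) -> str:
    """Entrypoint to the LLM app: takes the user's input and returns the response."""
    # Stand-in for prompt construction, retrieval, and the model call.
    return f"You asked: {question}"
```

Because the playground CLI only needs the fully qualified name of this function, no changes to the function body are required to get a UI for it.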

&lt;p&gt;See &lt;a href="https://app.inductor.ai/docs/quickstart.html" rel="noopener noreferrer"&gt;our docs&lt;/a&gt; for more information about how to use Custom Playgrounds.&lt;/p&gt;

&lt;p&gt;Once generated, your playground enables you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instantly share your LLM app with technical and/or non-technical colleagues to collect feedback.&lt;/li&gt;
&lt;li&gt;Interact with and evolve your LLM app (with hot-reloading, logging and replay of your interaction history, hyperparameters, and visibility into execution internals).&lt;/li&gt;
&lt;li&gt;Easily turn your interactions into repeatable test suites for systematic evaluation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get started for free by signing up &lt;a href="https://app.inductor.ai/signup" rel="noopener noreferrer"&gt;here&lt;/a&gt; or simply running the terminal commands above, to instantly generate a custom playground and turbocharge your LLM app development!&lt;/p&gt;

</description>
      <category>llms</category>
      <category>tooling</category>
      <category>ai</category>
      <category>testing</category>
    </item>
    <item>
      <title>Newsletter - April Edition</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Wed, 03 Apr 2024 21:17:16 +0000</pubDate>
      <link>https://dev.to/inductor_ai/newsletter-april-edition-259c</link>
      <guid>https://dev.to/inductor_ai/newsletter-april-edition-259c</guid>
      <description>&lt;h3&gt;
  
  
  Announcements
&lt;/h3&gt;

&lt;p&gt;We’ve gone live with our new developer tool for evaluating, improving, and observing your LLM applications – both during development and in production.  For teams who are serious about shipping production-ready LLM apps (i.e., that are high-quality, trustworthy, and cost-effective), we’ve built Inductor to help you do so much more rapidly, easily, and reliably.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://inductor.ai/blog/product-demo-with-inductors-founder"&gt;Watch our brief demo video&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check out our &lt;a href="https://inductor.ai/blog/introducing-inductor-llm-developer-tool"&gt;announcement blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow us on &lt;a href="https://www.linkedin.com/company/inductor-ai/"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://www.producthunt.com/products/inductor"&gt;Product Hunt&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interested in early access to Inductor? &lt;a href="https://inductor.ai/request-access"&gt;Request access here.&lt;/a&gt; &lt;/p&gt;


&lt;h3&gt;
  
  
  A note from our founder
&lt;/h3&gt;

&lt;p&gt;The developer community has been building web applications for well over 25 years.  During that time, we’ve created a set of highly effective practices and tools for building web apps productively, and for reliably delivering great experiences to users.  This has been critical to web apps becoming a ubiquitous way in which developers (and businesses) innovate and deliver value to their users. &lt;/p&gt;

&lt;p&gt;We as a community are now at the beginning of that journey for LLM-powered applications – a different (and much newer) class of applications that are poised to have at least as much impact as web apps, if not much more.  The underlying tech (i.e., large language models) is powerful, but it remains challenging and time-consuming to build and ship LLM apps that reliably deliver high-quality, high-value experiences to users.  We’ve seen too many teams struggle to do this (and have experienced this pain ourselves).&lt;/p&gt;

&lt;p&gt;At Inductor, we’re working to solve this problem.  The work required to build a production-ready LLM app differs in fundamental ways from the work of building other types of applications, such as web apps.  In particular, LLM applications require iterative development driven by experimentation and evaluation, as they cannot be a priori written to guarantee desired behavior (e.g., LLMs’ inputs cannot be designed a priori to guarantee desired output behavior).  The only way to build a high-quality LLM application is to iterate and experiment your way to success, powered by data and rigorous evaluation; it is essential to then also observe and understand live usage to detect issues and fuel further improvement.  Today, the work of doing all of this is too often slow and painful.&lt;/p&gt;

&lt;p&gt;We’ve been building Inductor to address this.  We’re excited to say that we’ve recently gone live with our new product, and Inductor is already being used by customers to build and deliver production-ready LLM apps.  And, we’re working on a whole lot more – we’ll keep you in the loop along the way.&lt;/p&gt;


&lt;h3&gt;
  
  
  Release Notes
&lt;/h3&gt;

&lt;p&gt;We’ve been busy building – below are some of the exciting new capabilities that we’ve recently added to Inductor (see our &lt;a href="https://inductor.ai/blog/product-demo-with-inductors-founder"&gt;demo video&lt;/a&gt; for an overview of Inductor’s features beyond the below):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Inductor’s application-level Hyperparameters to automatically screen across different versions of your LLM app in order to rapidly determine which is best for your needs.  This enables you to super-quickly and systematically test variants of any facet of your LLM app, such as different models, prompts, RAG strategies, or pre- or post-processing approaches.&lt;/li&gt;
&lt;li&gt;Add LLM-powered quality measures to your test suites (or live traffic) to automate and scale up human-style evaluations.  Inductor automatically determines the degree of alignment between your LLM evals and any corresponding human evals, in order to ensure that your LLM-powered evaluations rigorously reflect your human definitions of quality.&lt;/li&gt;
&lt;li&gt;Use Inductor’s rich suite of sharing functionality to securely collaborate with team members or other stakeholders to get feedback and analyze results.&lt;/li&gt;
&lt;li&gt;Run quality measures (of any type - function, human, or LLM) on live executions to automatically and continuously assess quality on your live traffic.  Filter live executions by quality measures to rapidly diagnose issues and areas for improvement.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>testing</category>
      <category>developer</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Product Demo with Inductor's Founder</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Thu, 14 Mar 2024 18:28:36 +0000</pubDate>
      <link>https://dev.to/inductor_ai/product-demo-with-inductors-founder-5k3</link>
      <guid>https://dev.to/inductor_ai/product-demo-with-inductors-founder-5k3</guid>
      <description>&lt;p&gt;In this quick demo video, we explain how Inductor addresses some of the most critical problems that teams face when working to ship LLM applications: evaluating, monitoring, and systematically improving LLM app quality and cost-effectiveness.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/922982242" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We’ve carefully crafted Inductor for LLM app developers and their teams as a combination of a CLI, API, and web UI to provide a fantastic developer experience for iterating and shipping fast. Inductor is packed with powerful features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test suites&lt;/li&gt;
&lt;li&gt;Quality measures&lt;/li&gt;
&lt;li&gt;Hyperparameters&lt;/li&gt;
&lt;li&gt;Human evals&lt;/li&gt;
&lt;li&gt;LLM-powered evals&lt;/li&gt;
&lt;li&gt;Live production monitoring&lt;/li&gt;
&lt;li&gt;Auto-logging&lt;/li&gt;
&lt;li&gt;Collaboration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Turbocharge experimentation with easily defined hyperparameters and inject them into your LLM app to test anything: prompts, models, RAG strategies, or anything else. Easily monitor live traffic with just one line of code – detect and resolve issues, analyze usage, and seamlessly feed back into your development process.&lt;/p&gt;

&lt;p&gt;We’ve designed Inductor to make it easy to get started quickly.  As soon as you have an account, it takes just a few minutes to create your first test suite and run it on your LLM app – with zero code modifications. To log and monitor your live traffic, simply add one line of code to your app.&lt;/p&gt;

&lt;p&gt;If you’re interested in trying Inductor for free, &lt;a href="https://inductor.ai/request-access"&gt;request access here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>tooling</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Inductor is live on Product Hunt 🚀</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Mon, 04 Mar 2024 17:33:22 +0000</pubDate>
      <link>https://dev.to/inductor_ai/inductor-is-live-on-product-hunt-4pfk</link>
      <guid>https://dev.to/inductor_ai/inductor-is-live-on-product-hunt-4pfk</guid>
      <description>&lt;p&gt;We’re excited to introduce Inductor on Product Hunt! 🎉&lt;/p&gt;

&lt;p&gt;Check out our &lt;a href="https://www.google.com/url?q=https://www.producthunt.com/posts/inductor&amp;amp;sa=D&amp;amp;source=docs&amp;amp;ust=1709577116223371&amp;amp;usg=AOvVaw3z04XBAUPxqBC3Zqr8LEKo"&gt;launch&lt;/a&gt; to view our new &lt;a href="https://www.youtube.com/watch?v=J4MY6ivyhNA"&gt;demo video&lt;/a&gt;, learn more about Inductor, ask questions, or provide feedback. If you like what you see, please upvote and share so that more LLM app developers can ship production-ready LLM apps far faster and more easily with Inductor. &lt;/p&gt;

&lt;p&gt;Inductor is a developer tool for evaluating, monitoring, and improving LLM applications – to enable you to build and ship production-ready LLM applications far faster and more easily.&lt;/p&gt;

&lt;p&gt;We’ve carefully crafted Inductor for LLM app developers as a combination of a CLI, API, and web UI to provide a fantastic developer experience for iterating and shipping fast. Inductor is packed with powerful features like: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test suites&lt;/li&gt;
&lt;li&gt;Quality measures&lt;/li&gt;
&lt;li&gt;Hyperparameters&lt;/li&gt;
&lt;li&gt;Human evals&lt;/li&gt;
&lt;li&gt;LLM-powered evals&lt;/li&gt;
&lt;li&gt;Live production monitoring&lt;/li&gt;
&lt;li&gt;Auto-logging&lt;/li&gt;
&lt;li&gt;Collaboration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rigorously evaluate your LLM app both offline and online by rapidly applying human, programmatic, and LLM-powered quality measures.  Turbocharge experimentation with easily defined hyperparameters, and use them in your LLM app to test anything: prompts, models, RAG strategies, or anything else. Easily monitor live traffic with just one line of code – detect and resolve issues, analyze usage, and seamlessly feed back into your development process. &lt;/p&gt;

&lt;p&gt;We’ve designed Inductor to make it easy to get started quickly.  As soon as you have an account, it takes just a few minutes to create your first test suite and run it on your LLM app using our CLI – with zero code modifications. To log and monitor your live traffic, simply add one line of code to your app.&lt;/p&gt;

&lt;p&gt;If you’re interested in trying Inductor for free, request access &lt;a href="https://inductor.ai/request-access"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Follow us to stay in the loop! We'll be launching new features and sharing best practices for systematically shipping production-ready LLM apps. &lt;/p&gt;

</description>
      <category>testing</category>
      <category>tooling</category>
      <category>llms</category>
      <category>ai</category>
    </item>
    <item>
      <title>Introducing Inductor: Ship Production-Ready LLM Apps Dramatically Faster and More Easily</title>
      <dc:creator>Natalie Fagundo</dc:creator>
      <pubDate>Fri, 16 Feb 2024 17:43:04 +0000</pubDate>
      <link>https://dev.to/inductor_ai/introducing-inductor-ship-production-ready-llm-apps-dramatically-faster-and-more-easily-4ji</link>
      <guid>https://dev.to/inductor_ai/introducing-inductor-ship-production-ready-llm-apps-dramatically-faster-and-more-easily-4ji</guid>
      <description>&lt;p&gt;We've built a new developer tool that makes it far easier to systematically evaluate, monitor, and improve LLM applications – both during development and in production.&lt;/p&gt;

&lt;p&gt;We’re excited to introduce the new product that we’ve been building at Inductor!  Having seen too many teams experience (and having experienced ourselves) the pain of going from an LLM-powered demo to a production-ready LLM-powered application, we’ve built a developer tool that addresses the most critical problems that teams face when working to build and ship LLM applications.  Read on to learn more!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we built Inductor
&lt;/h2&gt;

&lt;p&gt;LLMs unlock fantastic new possibilities, and make it easy to create compelling demoware.  But, going from an idea or demo to an LLM application that is actually production-ready (i.e., high quality, safe, cost-effective) is painful and time-consuming.&lt;/p&gt;

&lt;p&gt;Doing so requires grappling with critical questions like&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How good (i.e., high-quality, safe, cost-effective) is our LLM application?”&lt;br&gt;
“How can we improve it? Are we actually improving?”&lt;br&gt;
“Is our LLM app ready to ship? Does it produce high-quality results sufficiently reliably?”&lt;br&gt;
“How is our LLM app behaving on live traffic?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In order to build and ship a production-ready LLM application, teams need to answer these questions repeatedly as they iteratively develop, as well as deploy and run live.  However, today, answering these questions for LLM apps is painful and time-consuming.&lt;/p&gt;

&lt;p&gt;Inductor solves this problem by supplying you with the right workflows and tools to answer these questions rapidly, easily, and systematically.  In turn, this enables you to deliver high-quality, safe, and cost-effective LLM apps far more rapidly and easily.&lt;/p&gt;

&lt;p&gt;With Inductor, you can&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iterate and ship faster&lt;/strong&gt; – &lt;em&gt;so that you can increase your productivity and get to market faster.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Inductor makes it easy to continuously test and evaluate as you develop, so that you always know your LLM app’s quality and how it is changing.  Inductor also enables you to systematically make changes to improve quality and cost-effectiveness by rapidly testing different app variants and actionably analyzing your LLM app’s behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliably deliver high-quality results&lt;/strong&gt; – &lt;em&gt;so that you can ship confidently and safely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Inductor enables you to rigorously assess your LLM app’s behavior before you deploy, in order to ensure quality and cost-effectiveness when you’re live. You can then use Inductor to easily monitor your live traffic to detect and resolve issues, analyze actual usage in order to improve, and seamlessly feed back into your development process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaborate easily and efficiently&lt;/strong&gt; – &lt;em&gt;so that you can loop in your team to help build and ship.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Inductor makes it easy for engineering and other roles to collaborate, for example to get critical human feedback from non-engineering stakeholders (e.g., PM, UX, or subject matter experts) to ensure that your LLM app is user-ready.  With Inductor, you no longer need to pass around unwieldy spreadsheets that rapidly become outdated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Carefully crafted for developers and their teams
&lt;/h2&gt;

&lt;p&gt;Teams need to go through three key workflows in order to build and ship production-ready (i.e., high-quality, safe, cost-effective) LLM applications:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Construct, run, and evolve test suites&lt;/strong&gt; – in order to systematically assess your LLM app’s quality, safety, and cost-effectiveness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteratively experiment and analyze results&lt;/strong&gt; – in order to actionably understand your LLM app’s behavior, systematically improve, and rigorously decide when you are ready to ship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor in production and feed back to dev&lt;/strong&gt; – in order to understand how your users are actually using your LLM app, address any issues, determine where you need to improve, and feed these insights back into your development process.&lt;/p&gt;

&lt;p&gt;We’ve carefully crafted Inductor to make it easy and seamless to do these things in your existing development environment, by using Inductor’s CLI, API, and web UI.  Inductor is packed with a powerful, easy-to-use set of capabilities purpose-built for the needs of teams working to build and ship LLM apps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1pzeisv0272p5cnoqye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1pzeisv0272p5cnoqye.png" alt="Image description" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Test suites
&lt;/h2&gt;

&lt;p&gt;Inductor’s test suites make systematic, continuous evaluation easy – both during iterative development, and prior to deployment (to ensure quality and cost-effectiveness when you’re live).  Define test cases and quality measures easily via YAML or our Python API.  Easily span the full spectrum of automated to human evaluation by using Inductor’s programmatic, human, or LLM-powered quality measures – with a great workflow for ensuring that your LLM-powered quality measures are rigorously calibrated to your human-defined evaluation criteria.  Run test suites via the Inductor CLI or API, and actionably analyze the results via our purpose-built web UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hyperparameters
&lt;/h2&gt;

&lt;p&gt;Massively accelerate your iteration by including hyperparameters in your test suites to automatically test different variants of your app.  Inductor enables you to easily define and inject hyperparameters into your LLM app to test anything – for example, prompts, models, RAG strategies, or anything else.  Inductor then takes care of running every test case against every combination of your hyperparameter values and making the results actionable.  You will very soon also be able to include hyperparameters in your live deployments in order to run and analyze live A/B tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Live production monitoring
&lt;/h2&gt;

&lt;p&gt;When your LLM app goes live, just add a single line of code to enable logging and monitoring your live traffic using Inductor.  Inductor then automatically logs and enables you to richly analyze your live traffic, in order to detect and resolve any issues, analyze live usage in order to improve, and seamlessly feed back into your development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Actionable analytics
&lt;/h2&gt;

&lt;p&gt;Whether you are analyzing the results of a test suite run or examining your live traffic, Inductor makes it easy to rapidly extract actionable insights.  Use Inductor’s web UI to view summary statistics for your LLM app’s performance as well as examine individual LLM app executions (whether within a test suite or in live traffic), including the details of any specific execution and evaluations of its quality.  Use Inductor’s rich filtering capabilities to identify and diagnose areas for improvement.  Easily compare the performance of different versions of your LLM app or different app variants parameterized by hyperparameters in order to systematically improve.&lt;/p&gt;

&lt;h2&gt;Easy, secure collaboration&lt;/h2&gt;

&lt;p&gt;Collaboration is often critical to building and shipping production-ready LLM apps. For example, it is often essential to get feedback from non-engineering stakeholders (e.g., PM, UX, or subject matter experts), or to collaboratively analyze test suite results or live traffic. Inductor’s secure sharing and permissioning capabilities enable you to easily share work, gather feedback, and collaborate in Inductor with both technical and non-technical team members.&lt;/p&gt;

&lt;h2&gt;Automatic versioning and auto-logging&lt;/h2&gt;

&lt;p&gt;Inductor is powered by a data model and platform purpose-built for the workflows required to build and ship production-ready LLM applications. Inductor automatically tracks and versions all of your work (including all test cases, quality measures, and hyperparameters in every test suite), and provides a variety of auto-logging capabilities, so that you can iterate at the speed of thought while staying organized by default. Because Inductor lets you define your test suites alongside your code (e.g., via YAML or our API), you can also version them in any existing version control system that you use (e.g., git).&lt;/p&gt;

&lt;h2&gt;Get started quickly&lt;/h2&gt;

&lt;p&gt;We’ve designed Inductor to make it easy to get started fast. As soon as you have an account, it takes just a few minutes to create your first test suite and run it on your LLM app – with zero code modifications. To then also log and monitor your live traffic with Inductor, you only need to add one line of code to your app.&lt;/p&gt;

&lt;p&gt;Inductor is built to work seamlessly in your existing development environment via our CLI and Python API – whether you work in an IDE such as VS Code, in a text editor and terminal, or in a notebook environment such as Jupyter or Google Colab. Inductor is also designed to work well with your existing tech stack, including any model (an LLM API, an open source model, or your own model) and any way of writing LLM apps (from straight-up Python to LangChain and beyond). The Inductor service is architected from the ground up to run in our cloud account or in yours, so you can self-host if desired.&lt;/p&gt;

&lt;p&gt;Finally, we’ve also designed our pricing to make it easy to get started – we offer straightforward, developer-friendly, per-user-per-month pricing.&lt;/p&gt;

&lt;h2&gt;Interested in using Inductor?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://inductor.ai/request-access" rel="noopener noreferrer"&gt;Request access here&lt;/a&gt;, and follow us as we release new features and content!&lt;/p&gt;

</description>
      <category>testing</category>
      <category>tooling</category>
      <category>llms</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
