Anmol Baranwal for Latitude


17 Projects for Developers to Build AI Features 100x Faster 👩‍💻🔥

Prompt engineering and AI are booming, with almost every startup team integrating AI to make things easier for their users.

Today, we are covering 17 projects that will maximize productivity for developers building with AI.

You will find tools related to prompt engineering, code editors, agents and many more exciting things.

This list will surprise you.


1. Latitude LLM - prompt engineering platform to build and refine prompts with AI.

latitude llm


Latitude is an open source prompt engineering platform to build, evaluate, and refine your prompts with AI. You can create and iterate on prompts in the platform, through their SDKs, or via the API.

latitude llm

The best part is that every time a prompt runs, it automatically logs the entire context, the output and other metadata relevant for evaluations and debugging.

This is how the dashboard looks.

dashboard

✅ There is support for advanced features like parameters, snippets, logic and more.

prompt

✅ You get version control for prompts, a collaborative prompt manager, and even evaluations in batch or real-time.

prompts

A basic user flow can be:

→ Create a new project.

→ Write your first prompt using the editor.

→ Test your prompt in the playground with different inputs and see the model's responses.

→ Before deploying, you can upload a dataset and run a batch evaluation to assess your prompt's performance across various scenarios. Watch this video to see how evaluations can analyze the results of your prompts.

→ Deploy your prompt as an endpoint for easy integration with your applications.

→ Use the Logs section to review your prompt's performance over time.

→ Refine your prompt and invite team members to your Latitude workspace to collaborate.

Watch this quick demo to learn more.

You can read the docs to learn the concepts involved, like prompts, logs and evaluations.

You can use this quickstart guide by using the cloud version or self-hosting it.
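
To give a rough idea of the SDK side, here's a minimal sketch of running a published prompt from Python. The client and method names are illustrative guesses rather than the exact API, so check their SDK reference before copying this.

# Illustrative sketch only: class/method names are placeholders,
# see Latitude's SDK docs for the real client API.
import asyncio
from latitude_sdk import Latitude, LatitudeOptions, RunPromptOptions

async def main():
    sdk = Latitude("YOUR_LATITUDE_API_KEY", LatitudeOptions(project_id=123))
    result = await sdk.prompts.run(
        "onboarding/welcome-email",                        # prompt path in your project
        RunPromptOptions(parameters={"product": "Acme"}),  # prompt parameters
    )
    print(result)

asyncio.run(main())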

They have 536 stars on GitHub and are growing very fast.

Star Latitude LLM ⭐️


2. LiveKit Agents - build real-time multimodal AI apps.

livekit


LiveKit Agents is an end-to-end framework that enables developers to build intelligent, multimodal voice assistants (AI agents) capable of engaging users through voice, video and data channels.

Let me explain in simple words.

The Agents framework allows you to build AI-driven server programs that can see, hear and speak in real time. Your agent connects with end-user devices through a LiveKit session. During that session, your agent can process text, audio, images or video streaming from a user's device and have an AI model generate any combination of those same modalities as output and stream them back to the user.

✅ They support many SDKs, including Swift, Android, Flutter, Rust, Unity, Node, Go, PHP, React and more.

sdks

feature

You can get started with pip.

pip install livekit-agents

livekit agents

They also have a lot of plugins that make it easy to process streaming input or generate output. For instance, there are plugins for converting text-to-speech or running inference with popular LLMs. One example of a plugin is:

pip install livekit-plugins-openai
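
To make that concrete, here's a minimal sketch of a voice agent assembled from the openai and silero plugins, based on the framework's examples at the time of writing. Treat it as a sketch: it assumes both plugins are installed and your API keys are set, and class names have shifted between livekit-agents versions.

from livekit.agents import JobContext, WorkerOptions, cli
from livekit.agents.voice_assistant import VoiceAssistant
from livekit.plugins import openai, silero  # pip install livekit-plugins-openai livekit-plugins-silero

async def entrypoint(ctx: JobContext):
    # Join the LiveKit room the end user is connected to
    await ctx.connect()

    # STT -> LLM -> TTS pipeline built from plugins
    assistant = VoiceAssistant(
        vad=silero.VAD.load(),  # voice activity detection
        stt=openai.STT(),       # speech-to-text
        llm=openai.LLM(),       # language model
        tts=openai.TTS(),       # text-to-speech
    )
    assistant.start(ctx.room)
    await assistant.say("Hey, how can I help you today?")

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))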

They also provide open source React components and examples for building with LiveKit if you're using React.

react livekit


You can read the docs and see the list of all plugins that are available. If you want to try, then you can do it at cloud.livekit.io.

If you're looking for some sample apps with code, check these:

⚡ A basic voice agent using a pipeline of STT, LLM, and TTS

basic voice agent

⚡ Super fast voice agent using Cerebras-hosted Llama 3.1

super fast


Of the 1000+ open source projects I have seen, this is one of the most exciting.

They have 3.2k stars on GitHub and are growing strong.

Star LiveKit Agents ⭐️


3. Julep - build stateful AI apps.

julep


Julep is a platform for creating AI agents that remember past interactions and can perform complex tasks.

Imagine you want to build an AI agent that can do more than just answer simple questions. It needs to handle complex tasks, remember past interactions and maybe even use other tools or APIs. That's exactly where Julep comes in.

You can use it to create multi-step tasks incorporating decision-making, loops, parallel processing and a whole lot more.

✅ It will automatically retry failed steps, resend messages, and generally keep your tasks running smoothly.

✅ You can use Julep's document store to build a system for retrieving and using your own data.

mental model of julep


To get started, you can use npm or pip.

npm install @julep/sdk

# or
pip install julep

There is also a quick example that I recommend reading where a sample Agent selects a topic, generates 100 related search queries, performs the searches simultaneously, summarizes the results and shares the summary on Discord. With proper code :)
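
As a flavor of the SDK, here's a rough sketch of creating an agent and chatting with it in a persistent session. The method names are based on my reading of their Python quickstart and may have changed, so treat this as a sketch rather than copy-paste code.

from julep import Julep

client = Julep(api_key="YOUR_JULEP_API_KEY")  # placeholder key

# Agents persist, so you create them once and reuse them
agent = client.agents.create(
    name="Research Assistant",
    model="gpt-4o",
    about="Summarizes research topics and remembers past requests.",
)

# Sessions keep conversation state on Julep's side
session = client.sessions.create(agent=agent.id)
response = client.sessions.chat(
    session_id=session.id,
    messages=[{"role": "user", "content": "Find recent work on LLM evaluation."}],
)
print(response)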

Watch this quick demo and check out more example apps to understand more.

You can read the detailed docs, which include a Python quickstart guide, a Node.js quickstart guide, tutorials and how-to guides.

You might say it's similar to Langchain but they both have slightly different concepts. For instance, LangChain is great for creating sequences of prompts and managing interactions with AI models. It has a large ecosystem with lots of pre-built integrations, which makes it easier to run things quickly.

Julep on the other hand is more about building persistent AI agents that can remember things. It shines when you need complex tasks that involve multiple steps, decision-making, and integration with various tools or APIs directly within the agent's process. Read more on the detailed comparison.

Julep has 1.3k stars on GitHub and is growing strong.

Star Julep ⭐️


4. Open WebUI - most loved AI Interface (Supports Ollama, OpenAI API...), runs offline.

Open WebUI


Open WebUI is an awesome user-friendly self-hosted chat user interface designed to operate entirely offline.

This can help you build features at a rate you can never imagine.

Open WebUI

You can use pip to quickly install it. Check the complete installation guide.


# install Open WebUI
pip install open-webui

# run Open WebUI
open-webui serve

open webui
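
If you'd rather run it in a container, the installation guide also documents a Docker route along these lines (check the guide for the current flags):

docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main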

Let's see some of the awesome features.

✅ You can customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter and more.

✅ You can use it in your preferred language thanks to its internationalization (i18n) support.

✅ There is an option for hands-free voice and video calls, which gives a little more flexibility.

✅ Their official website has clear info on a bunch of models, prompts, tools and functions created by the community.

official website


✅ You can load documents directly into the chat or add files to your document library and access them using the # command before a query.

✅ You can perform web searches using providers like SearXNG, Google PSE, Brave Search, serpstack, serper, Serply, DuckDuckGo, TavilySearch and SearchApi to inject the results directly into your chat experience.


I also recommend watching this walkthrough to learn more.

You can read the docs, which include a getting started guide, FAQs (recommended reading) and tutorials.

It is built using Svelte, Python and TypeScript.

They have 41.6k stars on GitHub, which says a lot about its popularity.

Star Open WebUI ⭐️


5. Quivr - RAG framework for building GenAI second brains.

quivr


Quivr, your second brain, utilizes the power of generative AI to act as your personal assistant. You can think of it as Obsidian but turbocharged with AI powers.

It is a platform that helps you create AI assistants, referred to as Brains. Some are designed for specialized cases, like connecting to specific data sources so users can interact directly with the data.

Others serve as specialized tools for particular use cases, powered by RAG technology. These tools process specific inputs to generate practical outputs, such as summaries, translations and more.

Watch a quick demo of Quivr!

quivr gif

Some of the amazing features are:

✅ You can choose the type of Brain you want to use, based on the data source you wish to interact with.

✅ They also provide a powerful feature to share your brain with others. This can be done by sharing with individuals via their emails and assigning them specific rights.

sharing brain

✅ Quivr works offline, so you can access your data anytime, anywhere.

✅ You can access and continue your past conversations with your brains.

✅ The one I loved the most is that you can install a Slack bot. Refer to this demo to see what you can do. Very cool!

Anyway, read about all the awesome stuff that you can do with Quivr.
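
If you prefer working from code instead of the UI, the project also ships a core Python package. A rough sketch of building and querying a Brain looks like this (API names taken from the quivr-core README, which may change between releases):

from quivr_core import Brain

# Build a Brain from local files; Quivr handles parsing and indexing
brain = Brain.from_files(
    name="my-docs",
    file_paths=["./docs/handbook.pdf", "./docs/notes.md"],
)

# Ask questions against your own data (RAG under the hood)
answer = brain.ask("What does the handbook say about onboarding?")
print(answer)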

You can read the installation guide and the 60-second installation video. Refer to the docs for any other information.

stats

They have also provided guides on how to deploy Quivr with Vercel, Porter, AWS and Digital Ocean.

It has 36.3k+ stars on GitHub with 300+ releases.

Star Quivr ⭐️


6. Dify - innovation engine for GenAI apps.

dify


Dify is an open-source platform for building AI applications.

Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

They combine Backend-as-a-Service and LLMOps to improve the development of generative AI solutions. You can use the cloud or self-host it (refer to docs).

You can even build and test powerful AI workflows on a visual canvas.

visual canvas dify

Let's see some of the awesome features:

✅ Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.

✅ You can monitor and analyze application logs and performance over time.

✅ You can use the RAG pipeline to extract text from PDFs, PPTs and other common document formats.

✅ A lot of integration options are available from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found here.

✅ You can create AI Agents with just a few clicks, letting them independently use enterprise-defined tools and data to solve complex tasks.

ai agents

You can read the docs.
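
Once an app is published, Dify also exposes it over a simple HTTP API, so your own backend can call the workflow you designed on the canvas. Here's a sketch of calling a chat app (endpoint and fields based on their API docs; the app key and base URL come from your app's settings):

import requests

API_KEY = "app-xxxxxxxx"             # your Dify app's API key (placeholder)
BASE_URL = "https://api.dify.ai/v1"  # or your self-hosted instance

resp = requests.post(
    f"{BASE_URL}/chat-messages",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": {},
        "query": "Summarize yesterday's support tickets.",
        "response_mode": "blocking",  # or "streaming"
        "user": "user-123",           # any stable end-user identifier
    },
)
print(resp.json()["answer"])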

Two impressive use cases that I loved:

⚡ Building a Notion AI Assistant

⚡ Create a MidJourney Prompt Bot with Dify

Dify has 47.7k stars on GitHub and has a lot of contributors.

Star Dify ⭐️


7. Micro Agent - AI agent that writes (actually useful) code for you.

micro agent


AI-assisted coding tools like GitHub Copilot and ChatGPT don't produce very reliable code, and it often doesn't work correctly right out of the box: you find bugs, edge cases, or even references to non-existent APIs.

This can lead to a frustrating loop of trying the generated code, finding issues, going back to the AI for fixes and repeating.

The time spent debugging can negate the total time saved using AI tools in the first place.

Micro Agent uses AI to mitigate the problems of unreliable code generation.

Give it a prompt and it'll generate a test and then iterate on code until all test cases pass.

how it works

You can install it using this command.

npm install -g @builder.io/micro-agent

# Set your OpenAI API key when prompted, or manually:
micro-agent config set OPENAI_KEY=<your token>

# Then run this to start a new coding task
micro-agent

Micro Agent will prompt you to describe the function you want, generate tests, and start writing code in your preferred language to make the tests pass. Once all the tests are green, you'll have a fully functional, test-backed function ready to use.

Let's explore some of the most mind-blowing use cases:

⚡ A 30-second demo of Micro Agent generating tests and code for a TypeScript function that groups anagrams together from an array of strings.

group anagram

⚡ Using Micro Agent to generate a simple HTML to AST parser (it was achieved in two iterations).

micro agent html to ast parser

⚡ Unit test matching.

unit matching

⚡ Visual matching (experimental).

Visual matching

⚡ Integration with Figma.

Micro Agent can also integrate with Visual Copilot to connect directly with Figma and deliver the highest-fidelity design-to-code conversion possible!

Visual Copilot connects directly to Figma to assist with pixel-perfect conversion, exact design token mapping, and precise usage of your components in the generated output.

Then, Micro Agent can take the output of Visual Copilot and make final adjustments to the code to ensure it passes TSC, lint and tests, and fully matches your design, including final tweaks. Amazing, right? :)

visual copilot

You can read the docs and the official blog where the team discussed everything about Micro Agent.

It's open source with 2.8k stars on GitHub.

Star Micro Agent ⭐️


8. Cline - autonomous coding agent right in your IDE.

cline


The concept is very similar to Cursor: Cline is an autonomous coding agent capable of creating and editing files, executing commands, and more, with your permission at every step of the way.

It's a VSCode extension and you can find it in the marketplace. It has 84k+ installs.

Cline builds on Claude 3.5 Sonnet's agentic coding capabilities.

✅ Cline supports API providers like OpenRouter, Anthropic, OpenAI, Google Gemini, AWS Bedrock, Azure, and GCP Vertex. You can also configure any OpenAI compatible API or use a local model through Ollama.

cline supports models

✅ You can add context using four different commands.

  • @url: Paste in a URL for the extension to fetch and convert to markdown, useful when you want to give Cline the latest docs.

  • @problems: Add workspace errors and warnings (from the 'Problems' panel) for Cline to fix.

  • @file: Add a file's contents directly, so you don't have to approve a read.

  • @folder: Add all of a folder's files at once.

cline context

✅ It uses a headless browser to inspect any website, like localhost, allowing it to capture screenshots and console logs. This gives it the autonomy to fix visual bugs and runtime issues without you needing to handhold it and copy-paste error logs yourself.

headless browser

✅ You can even run commands in the terminal to do awesome stuff.

You can read the docs.

Cline has 7k stars on GitHub.

Star Cline ⭐️


9. GPT Crawler - create your own custom GPT from a URL.

gpt crawler


With GPT Crawler, you can crawl any site to generate knowledge files to make your own custom GPT from one or multiple URLs.

gpt crawler

The objective is to make a docs site interactive: people can find the answers they are looking for more easily using a chat interface.

Watch this quick demo!

gif demo

You will have to configure the crawler and then simply run it. After the crawl is complete, you will have a new output.json file, which includes the title, URL and extracted text from all the crawled pages.

You can now upload this directly to ChatGPT by creating a new GPT. Once uploaded, this GPT assistant will have all the information from those docs and be able to answer unlimited questions about them.

custom upload


It's officially an assistant in ChatGPT.

assistant

You can read the docs on how to get started. You can find all the instructions on the official blog.


They have 18.6k stars on GitHub.

Star GPT Crawler ⭐️


10. Composio - production ready toolset for AI Agents.

composio


Composio is the only tool needed to build complex AI automation software. It allows AI models to access third-party tools and applications to automate their interactions with them.

For instance, you can connect GitHub with the GPT model via Composio and automate reviewing PRs, resolving issues, writing test cases and more.

You can automate complex real-world workflows by using 90+ tools and integration options such as GitHub, Jira, Slack and Gmail.

integration

You can also automate actions like sending emails, simulating clicks, placing orders and much more just by adding the OpenAPI spec of your apps to Composio.

This is how you can get started.

# install it
pip install composio-core

# Add a GitHub integration
composio add github

Here is how you can use the GitHub integration to star a repository.

from openai import OpenAI
from composio_openai import ComposioToolSet, Action

openai_client = OpenAI(api_key="<OPENAI_API_KEY>")

# Initialise the Composio tool set
composio_toolset = ComposioToolSet(api_key="<COMPOSIO_API_KEY>")

# Get pre-configured GitHub tools
actions = composio_toolset.get_actions(
    actions=[Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER]
)

my_task = "Star a repo ComposioHQ/composio on GitHub"

# Create a chat completion request to decide on the action
response = openai_client.chat.completions.create(
    model="gpt-4-turbo",
    tools=actions,  # pass the actions we fetched earlier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": my_task},
    ],
)

# Execute the tool call the model chose
composio_toolset.handle_tool_calls(response)

You can read the docs and examples.

options

Composio has 9k stars on GitHub.

Star Composio ⭐️


11. Langflow - low-code app builder for RAG and multi-agent AI apps.

langflow


Langflow is designed to make AI development easier in real-world scenarios. It's Python-based and agnostic to any model, API or database.

It is kind of a dynamic graph where each node is an executable unit. You can watch the demo.

✅ You can use dynamic inputs by using curly brackets {}.

dynamic inputs

✅ You can use the full potential of LLMs by easily fine-tuning them from spreadsheets.

fine-tune

✅ You can go beyond the surface and code your own components.

your own components

✅ Higher-level components naturally come from AI building blocks. Store and share your creations.

group components

The integration options are huge so you can build almost anything. They also provide composable building blocks which are like pre-built components that can be combined in numerous ways to create AI apps.

integration option

You can read the docs, which contain a quickstart guide and a playground where you can prototype directly, making adjustments and observing different outcomes with models.
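
Flows aren't locked inside the UI either: each one can be triggered over Langflow's REST API. Here's a hedged sketch (path and payload based on their API docs at the time; the flow ID is a placeholder you copy from the UI):

import requests

LANGFLOW_URL = "http://127.0.0.1:7860"  # local Langflow instance
FLOW_ID = "your-flow-id"                # placeholder, copy it from the UI

resp = requests.post(
    f"{LANGFLOW_URL}/api/v1/run/{FLOW_ID}",
    json={
        "input_value": "What can you tell me about Langflow?",
        "input_type": "chat",
        "output_type": "chat",
    },
)
print(resp.json())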

Langflow has 31.2k stars on GitHub.

Star Langflow ⭐️


12. OpenLLM - run LLMs as OpenAI compatible API endpoint in the cloud

open llm


OpenLLM lets developers run any open-source LLMs as OpenAI-compatible API endpoints with a single command.

⚡ Supports Llama 3, Qwen2, Gemma and many quantized versions (full list).
⚡ OpenAI-compatible API and a ChatGPT-like UI.
⚡ Accelerated LLM decoding with state-of-the-art inference backends.
⚡ Ready for enterprise-grade cloud deployment (Kubernetes, Docker, and BentoCloud).

Get started with the following command.

pip install openllm  # or pip3 install openllm
openllm hello

OpenLLM provides a chat user interface (UI) at the /chat endpoint for an LLM server. You can visit the chat UI at http://localhost:3000/chat and start different conversations with the model.

open llm
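
Because the server speaks the OpenAI API, you can point the official openai client at it. A minimal sketch (the model name is whatever you started the server with; the one below is just an example):

from openai import OpenAI

# OpenLLM serves an OpenAI-compatible API locally (default port 3000)
client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # use the model you served
    messages=[{"role": "user", "content": "Explain what OpenLLM does in one sentence."}],
)
print(response.choices[0].message.content)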

As I said before, OpenLLM supports LLM cloud deployment via BentoML, the unified model serving framework and BentoCloud, an AI inference platform for enterprise AI teams.

models

If you don't know, BentoCloud provides fully managed infrastructure optimized for LLM inference, with autoscaling, model orchestration and observability, which is just a fancy way of saying it allows you to run any AI model in the cloud.

bento cloud console

Once the deployment is complete, you can run model inference on the BentoCloud console.


You can read about the supported models and how to start the LLM server.

Explore the docs; you can also chat with a model in the CLI using openllm run and specifying a model version, for example openllm run llama3:8b.

If you love exploring walkthroughs, watch this demo by Matthew!

They have 9.9k stars on GitHub, almost hitting the mark of 10k :)

Star OpenLLM ⭐️


13. GPT Engineer - AI builds what you ask.

gpt engineer


GPT-engineer lets you specify software in natural language, sit back, and watch as an AI writes and executes the code, and you can ask the AI to implement improvements.

It's safe to say it's an engineer who doesn't need a degree πŸ˜…

It's a commercial project for the automatic generation of web apps. It features a UI for non-technical users connected to a git-controlled codebase.

I know this feels confusing, so watch the below demo to understand how you can use GPT Engineer.

demo gif

You can get started by installing the stable release using this command.

python -m pip install gpt-engineer

By default, gpt-engineer expects text input via a prompt file. It can also accept image inputs for vision-capable models. This can be useful for adding UX or architecture diagrams as additional context for GPT engineer. Read about all the awesome features.
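
A typical run from the README looks roughly like this: create a project folder, describe what you want in a plain-text prompt file, and point gpt-engineer at the folder.

mkdir -p projects/csv-to-json
echo "A CLI tool that converts CSV files to JSON" > projects/csv-to-json/prompt

# let the agent plan, write and execute the code
gpte projects/csv-to-json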

If you want a complete walkthrough, watch this awesome demo by David!

I recommend checking out the roadmap to understand the overall vision.

roadmap

They have 52.2k stars on GitHub and are on the v0.3 release.

Star GPT Engineer ⭐️


14. Void - open source alternative to cursor.

void


As you know, a lot of projects like PearAI and Zed came out after Cursor launched, but most of these are just forks of VSCode and don’t really improve the ecosystem that much.

If you're looking to switch, I'd recommend Cursor (not open source), Continue (similar to Cursor) and Void, which is an open source alternative to Cursor.

I'm covering Void because it's backed by Y Combinator, which makes it credible enough. It hasn't been fully released yet, but you can easily get early access to try it out.

It has very similar AI syntax features as Cursor.

✅ You can do intelligent searches with AI.

✅ You can view and edit the underlying prompts.

✅ There is contextual awareness, third-party integrations, fine-tuned generation, and you can even host Ollama locally to never run out of API credits again.

cursor

As I said previously, Void is a fork of VS Code so you can transfer over all your themes, keybinds and settings with just one click.

I've covered Continue before, so I skipped it this time, but I would definitely prefer Cursor until Void is fully released.

Void has 7.5k stars on GitHub and is still in its early release.

Star Void ⭐️


15. Unsloth - Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory.

unsloth


Unsloth makes finetuning large language models like Llama-3, Mistral, Phi-3 and Gemma 2x faster, with 70% less memory and no degradation in accuracy!

✅ What is finetuning?

If we want a language model to learn a new skill, a new language, a new programming language, or simply to follow and answer instructions the way ChatGPT does, we do finetuning!

Finetuning is the process of updating the actual brains of the language model through a process called back-propagation. But finetuning can get very slow and very resource-intensive.


Unsloth can be installed locally or through a GPU service like Google Colab. Most people use Unsloth through Google Colab, which provides a free GPU to train with.

Finetune for Free

Some of the things that stand out:

✅ The open source version trains 5x faster, and the pro version claims to be 30x faster.

✅ No approximation methods are used, resulting in a 0% loss in accuracy.

✅ No change of hardware needed; works on Linux and on Windows via WSL.

unsloth model stats

You can read the installation instructions and performance benchmarking tables on the website.

unsloth

You can read the docs and all the uploaded models on Hugging Face directly.

They have also provided a detailed guide on How to Finetune Llama-3 and Export to Ollama.
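
To give a feel for the workflow, here's a trimmed sketch along the lines of their Colab notebooks. The checkpoint name and LoRA parameters are illustrative, and exact argument names may differ between Unsloth versions.

from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (illustrative checkpoint name)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights gets trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# From here you hand model/tokenizer to a regular TRL SFTTrainer run.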

Unsloth has 16.8k+ stars on GitHub.

Star Unsloth ⭐️


16. Khoj - Your AI second brain.

khoj


Khoj is an open source AI copilot for search. Easily get answers without having to sift through online results or your own notes.

Khoj can understand your Word, PDF, org-mode, markdown, plaintext files, GitHub projects and even Notion pages.

type of documents

It's available as a desktop app, Emacs package, Obsidian plugin, web app and WhatsApp AI. Obsidian with Khoj might be the most powerful combo!

You can get started with Khoj locally in a few minutes with the following commands.

pip install khoj-assistant
khoj

Watch it in action!

Some of the exciting features:

✅ You can share your notes and documents to extend your digital brain.

✅ Your AI agents have access to the internet, allowing you to incorporate real-time information.

✅ You'll get a fast, accurate semantic search on top of your docs.

✅ Your agents can create deeply personal images and understand your speech.

For instance, say: "Create a picture of my dream house, based on my interests" and it will draw this!

image generation

Read all the features including shareable chat, online chat, file summarization, and complete details in various categories.

You can read the docs, and you can use Khoj Cloud to try it out quickly.

Watch the complete walkthrough on YouTube!

It has 12.8k stars on GitHub and is backed by Y Combinator.

Star Khoj ⭐️


17. Prompt Tools - tools for prompt testing.

prompt tools


This project has a collection of open source self-hostable tools for experimenting with, testing, and evaluating LLMs, vector databases and prompts. The core idea is to enable developers to evaluate using familiar interfaces like code, notebooks and a local playground.

In just a few lines of code, you can test your prompts and parameters across different models (whether you are using OpenAI, Anthropic or LLaMA models). You can even evaluate the retrieval accuracy of vector databases.

prompt tools

Get started with pip.

pip install prompttools

They provide notebook examples which you can run.
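
For instance, a small experiment that compares two models over the same prompts and temperatures looks roughly like this (based on their README and notebooks; argument names may have shifted since):

from prompttools.experiment import OpenAIChatExperiment

messages = [
    [{"role": "user", "content": "Tell me a joke."}],
    [{"role": "user", "content": "Is 1297 a prime number?"}],
]

# The cross-product of models, prompts and temperatures is run and tabulated
experiment = OpenAIChatExperiment(
    ["gpt-3.5-turbo", "gpt-4"],
    messages,
    temperature=[0.0, 1.0],
)
experiment.run()
experiment.visualize()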

You can read the docs and check out their playground.

PromptTools has 2.7k stars on GitHub.

Star Prompt Tools ⭐️


With the right tools, building AI features is much easier and your team can deliver quality work faster.

Do you think these projects will help teams build AI features faster than before?

Have a great day! Till next time.

If you loved this,
please follow me for more :)
Twitter: Anmol_Codes · GitHub: Anmol-Baranwal · LinkedIn: Anmol-Baranwal

Follow Latitude for more content like this.

Top comments (11)

Ahmad Awais ⚡️

Good one. You missed the most powerful serverless platform for building AI products.

⌘ Langbase: a Serverless AI Developers Platform (deploy AI agent pipes with memory and tools)
→ Langbase.com

⌘ BaseAI: the first agentic web AI framework (open-source · local-first · one command prod deployment with Langbase)
→ BaseAI.dev

max

Thanks for the post!

Shameless plug: we just released a demo of "durable swarm", taking the recent OpenAI Swarm framework and making it reliable: github.com/dbos-inc/durable-swarm

:)

Anmol Baranwal

Great work Max! Thank you for sharing.. let me explore it :)

Dumebi Okolo

You make this writing thing so easy!
Forever fan of yours, Anmol!

Anmol Baranwal

Thanks for checking it out, Dumebi 🙌. Always exploring and writing about new open source projects. Your lovely comment totally made my day :)

Dumebi Okolo

You're welcome!

Martin Baun

Can't wait to really explore this piece later in the day!

Anmol Baranwal

Definitely! These projects are perfect for building anything related to AI. My favorites are Latitude and Livekit Agents (the examples made me really curious).

Martin Baun

I'm particularly curious about that too!

Rohan Sharma

I've heard about Latitude LLM a lot but never used it. Need to use it asap!

Anmol Baranwal

Thanks for reading, Rohan! The Latitude team has recently started building this amazing platform and the initial growth has been great. Highly recommended for prompt engineers.