
Vinicius Mesel

talkd.ai got accepted into the Github Accelerator! (also our first official release)

Hey there!

If you don't know us yet, we are talkd.ai, an open-source organization that maintains Dialog, a project focused on letting you easily deploy any LLM you want (currently, any of those available in Langchain and its partner libraries - more on that later).

Today, we are very grateful to announce two things:

  1. This is our first official public release, our "Numero Uno", and the starting point of an adventure that will be long and fun.

  2. Now it's official: we got accepted into the GitHub Accelerator 2024 AI Cohort - a cohort full of amazing people that started on April 22nd.

What is the GitHub Accelerator?

GitHub Accelerator 2024 - AI Cohort

The GitHub Accelerator is a program now running its second cohort, with an approach slightly different from the usual: it focuses on making open source sustainable and helping maintainers find ways to fund full-time work on their projects.

During the accelerator, you are connected with leading people from AI, InfoSec, successful open-source maintainers, investment funds, and many other professionals who guide you through the many possibilities of open-source funding.

The program also provides a 10-week stipend, allowing you and your team to focus full-time on your project's development and communications, as well as credits for OpenAI and Microsoft Azure.

Back to the project's history: How did the project start?

@vmesel and @avelinorun

The project started to help us (Thiago Avelino and me) create chat experiences that resembled human behavior when answering frequently asked questions in our own contexts (Avelino's at Buser, and mine of wanting to learn more about LLM deployment and maintenance).

Still, as the project grew, our contexts changed a lot, and so did the need for different techniques, retrievers, optimizations, and plugins.

Nowadays, the project allows you to deploy any model that follows Langchain's LCEL or our library's chain model; you can choose which one to maintain and set up from there.

The process of getting to where we are right now involved lots of people, but I would like to especially thank:

  • Luan Fernandes - our Langchain specialist and long-time contributor
  • Walison Filipe - our FastAPI master and testing guru
  • Gregg and Kevin from the GitHub Accelerator - for helping us improve our software through communications, mentorship, invaluable resources, and connections
  • Andreas, Alicia, Namee, and Jurgen - you have amazing projects, and you taught us invaluable lessons on improving our pitching.

What can I do with talkd.ai/dialog?

talkd.ai/dialog lets you deploy any LLM you want (with code already adapted to the LCEL or AbstractLLM models) in 5 minutes.
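To give a concrete picture of what "respects LCEL" means, here is a minimal sketch of the kind of chain Dialog can serve. It assumes the langchain-openai package is installed and an OPENAI_API_KEY is set; the prompt text and model name are just illustrative, not part of Dialog's own code.

```python
# Minimal sketch of an LCEL chain (illustrative prompt and model choice).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer frequently asked questions about our product."),
    ("human", "{input}"),
])

# The pipe operator composes prompt -> model -> output parser, LCEL-style.
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"input": "How do I deploy Dialog?"}))
```

The same piping style works with other chat models from Langchain's partner libraries, which is why Dialog can stay model-agnostic.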

With a simple docker-compose up, you have sample data and a sample prompt up and running.

If you want to customize, you can use our library, dialog-lib, to implement custom RAGs and retrievers. We are fully integrated with SQLAlchemy, PGVector, Anthropic, and OpenAI.
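If you go the custom route, here is a rough sketch of a PGVector-backed retriever, built directly with the langchain-postgres and langchain-openai packages rather than dialog-lib's own classes; the collection name and connection string below are placeholders.

```python
# Hedged sketch of a PGVector-backed retriever; names and connection details
# are illustrative placeholders, not dialog-lib's actual API.
from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector

# Store document embeddings in Postgres through the pgvector extension.
vectorstore = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="faq_documents",
    connection="postgresql+psycopg://user:pass@localhost:5432/dialog",
)

# Expose the store as a retriever that a RAG chain can call at question time.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
docs = retriever.invoke("How do I get a refund?")
```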

Here is our quick product demo to showcase how simple it is to deploy our software:

Here is the official release link from GitHub.
