I have built many AI agents, and every framework I tried felt bloated, slow, and unpredictable. To change that, I hacked together a minimal library that defines every step as JSON, which keeps agent flows simple and reproducible. It supports concurrency of up to 1,000 calls/min, so you can process entire CSVs or dataframes of tasks.
Install
pip install flashlearn
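Since every step is plain JSON, feeding a whole CSV through a skill is just a matter of turning rows into dicts. A minimal sketch of that shape (the CSV column name "message" is an assumption for illustration; the resulting list of dicts matches what `skill.create_tasks(...)` consumes in the examples below):

```python
import csv
import io
import json

# Hypothetical CSV of support messages (column name is illustrative).
csv_text = "message\nWhere is my refund?\nMy product was damaged!\n"

# Each row becomes a plain JSON-serializable dict -- the input shape
# passed to skill.create_tasks(...) in the snippets below.
tasks_data = list(csv.DictReader(io.StringIO(csv_text)))

# Because the tasks are plain JSON, they can be logged or stored verbatim,
# which is what makes runs reproducible.
print(json.dumps(tasks_data))
```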
Perplexity Clone in 10 Lines of Code
Effortlessly clone a Perplexity-like feature in a few simple steps.
client = OpenAI()
question = 'When was python launched?'
skill = GeneralSkill.load_skill(ConvertToGoogleQueries, client=client)
queries = skill.run_tasks_in_parallel(skill.create_tasks([{"query": question}]))["0"]
results = SimpleGoogleSearch(GOOGLE_API_KEY, GOOGLE_CSE_ID).search(queries['google_queries'])
msgs = [
    {"role": "system", "content": "insert links from search results in response to quote it"},
    {"role": "user", "content": str(results)},
    {"role": "user", "content": question},
]
print(client.chat.completions.create(model=MODEL_NAME, messages=msgs).choices[0].message.content)
Learning a New “Skill” from Sample Data
Quickly “learn” a custom skill from your sample data, then apply it, or store it with skill.save('skill.json') for future use.
from openai import OpenAI
from flashlearn.skills.learn_skill import LearnSkill
from flashlearn.utils import imdb_reviews_50k

def main():
    # Instantiate your pipeline “estimator” or “transformer”
    learner = LearnSkill(model_name="gpt-4o-mini", client=OpenAI())
    data = imdb_reviews_50k(sample=100)

    # Provide instructions and sample data for the new skill
    skill = learner.learn_skill(
        data,
        task=(
            'Evaluate likelihood to buy my product and write the reason why (on key "reason"); '
            'return int 1-100 on key "likely_to_Buy".'
        ),
    )

    # Construct tasks for parallel execution (akin to batch prediction)
    tasks = skill.create_tasks(data)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

if __name__ == "__main__":
    main()
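A learned skill is just a JSON definition, so persisting it with skill.save('skill.json') and reloading it later gives you back exactly the same pipeline component. The dict below is a hypothetical illustration of such a definition (field names are mine, not flashlearn's actual on-disk schema); the point is that JSON round-trips losslessly:

```python
import json

# Hypothetical skill definition -- the field names here are illustrative,
# not flashlearn's actual stored format.
skill_def = {
    "model_name": "gpt-4o-mini",
    "instructions": 'Evaluate likelihood to buy; return int 1-100 on key "likely_to_Buy".',
}

# Saving and loading is a plain JSON round-trip: what you persist is
# exactly what you get back, which keeps reruns reproducible.
restored = json.loads(json.dumps(skill_def))
print(restored == skill_def)
```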
Predefined Complex Pipelines in 3 Lines
Use prebuilt “skills” as specialized transformers, applying them instantly to your data.
# You can pass a client to load your pipeline component
skill = GeneralSkill.load_skill(EmotionalToneDetection, client=OpenAI())
tasks = skill.create_tasks([{"text": "Your input text here..."}])
results = skill.run_tasks_in_parallel(tasks)
print(results)
Single-Step Classification Using Prebuilt Skills
Execute classic classification tasks as easily as calling fit_predict on an ML estimator.
import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

data = [{"message": "Where is my refund?"}, {"message": "My product was damaged!"}]

skill = ClassificationSkill(
    model_name="gpt-4o-mini",
    client=OpenAI(),
    categories=["billing", "product issue"],
    system_prompt="Classify the request.",
)

tasks = skill.create_tasks(data)
print(skill.run_tasks_in_parallel(tasks))
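run_tasks_in_parallel returns a dict keyed by each task's string index (the Perplexity example above reads result "0"), so joining predictions back onto the input rows is a simple lookup. A minimal sketch, assuming that keying scheme and a hypothetical results payload:

```python
data = [{"message": "Where is my refund?"}, {"message": "My product was damaged!"}]

# Hypothetical results payload shaped like the library's string-indexed dict.
results = {"0": {"category": "billing"}, "1": {"category": "product issue"}}

# Join each prediction back onto its source row by index.
labeled = [{**row, **results[str(i)]} for i, row in enumerate(data)]
print(labeled)
```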
Supported LLM Providers
Easily integrate LLM capabilities into your ML pipeline components.
client = OpenAI() # Equivalent to instantiating a pipeline component
deep_seek = OpenAI(api_key='YOUR DEEPSEEK API KEY', base_url="DEEPSEEK BASE URL")
lite_llm = FlashLiteLLMClient() # LiteLLM integration manages keys as environment variables
If you like it, give us a star on GitHub!