<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Samir</title>
    <description>The latest articles on DEV Community by Samir (@smirfolio).</description>
    <link>https://dev.to/smirfolio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F4434%2F6780635c-5b0d-4688-8edd-a4fb5c8c14ad.png</url>
      <title>DEV Community: Samir</title>
      <link>https://dev.to/smirfolio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/smirfolio"/>
    <language>en</language>
    <item>
      <title>My Cerebras Hackathon Journey: How TasksForge Helped Me Build an AI App for Kids in 8 Hours</title>
      <dc:creator>Samir</dc:creator>
      <pubDate>Wed, 04 Jun 2025 04:53:25 +0000</pubDate>
      <link>https://dev.to/smirfolio/my-cerebras-hackathon-journey-how-tasksforge-helped-me-build-an-ai-app-for-kids-in-8-hours-586m</link>
      <guid>https://dev.to/smirfolio/my-cerebras-hackathon-journey-how-tasksforge-helped-me-build-an-ai-app-for-kids-in-8-hours-586m</guid>
      <description>&lt;h2&gt;
  
  
  Context:
&lt;/h2&gt;

&lt;p&gt;A couple of weeks ago, I took part in an exciting 8-hour hackathon organized by Cerebras, spotlighting their powerful new Llama 4 LLM deployment. The event, hosted on Lu.ma, challenged developers to build innovative AI-powered apps, fast.&lt;/p&gt;

&lt;p&gt;Spoiler alert: I didn’t win. But the project that did was seriously impressive: a Satellite Signal Log Analyzer that parsed satellite radio logs in real time, with interference risk scores, visual trend charts, log comparisons, and more, all powered by Llama 4 and a sleek orange/white UI. It totally deserved the win.&lt;/p&gt;

&lt;p&gt;As for me, I’ve been a programmer analyst for over 15 years, building web apps and CRM systems across a wide range of tech stacks: Python, Next.js, PHP, Java, you name it. I’ve tackled everything from full-stack dev to backend architecture. But even with that experience, hackathons are a different kind of beast. Fast pace. High pressure. No time to overthink.&lt;/p&gt;

&lt;p&gt;My idea was to develop an AI-powered blog-summarizing app for kids: a fun, educational platform that takes regular blog content and simplifies it for younger readers using LLMs. I knew I wanted something lightweight, easy to deploy, and most importantly, user-friendly for kids.&lt;/p&gt;

&lt;p&gt;But the clock was ticking, and I wasn’t sure where to start.&lt;/p&gt;

&lt;p&gt;That’s when I reached for my secret weapon: &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt;. It helped me break the project into manageable chunks, map out the architecture, and stay focused throughout the sprint. Combine that with a little “vibe coding” magic, and I was off and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ How I Did It: From “Petit Prince” to Project Plan with TasksForge
&lt;/h2&gt;

&lt;p&gt;Once I locked in my idea, an AI-powered blog-summarizing app for kids, I knew I needed to quickly shape it into something concrete. I even had the perfect name in mind: “Petit Prince”. Inspired by the timeless tale, I wanted the app to be equally poetic, thoughtful, and child-friendly, something that both parents and kids could use together.&lt;/p&gt;

&lt;p&gt;I had a rough concept: parents would paste a blog link or article into the app, select their child’s age range, and let the AI assistant rewrite the content in a simplified, kid-friendly tone. The core of the app would be powered by Llama 4, through Cerebras’s inference API.&lt;/p&gt;
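
&lt;p&gt;To make that flow concrete, here’s a minimal sketch of the kind of request the app sends, assuming the OpenAI-compatible Cerebras endpoint (&lt;code&gt;https://api.cerebras.ai/v1&lt;/code&gt;) and an illustrative Llama 4 model id; the real app layers content filtering and age-specific instructions on top:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from openai import AsyncOpenAI  # OpenAI-compatible client, pointed at Cerebras

client = AsyncOpenAI(
    api_key="your-cerebras-api-key",
    base_url="https://api.cerebras.ai/v1",  # Cerebras inference endpoint
)

async def summarize_for_kids(article_text: str, age_range: str) -&gt; str:
    # One chat completion: the system prompt sets the safe, kid-friendly tone;
    # the user message carries the article and the selected age range.
    response = await client.chat.completions.create(
        model="llama-4-scout-17b-16e-instruct",  # illustrative Llama 4 model id
        messages=[
            {"role": "system",
             "content": "Rewrite articles so a child can read them. "
                        "Keep the language safe, simple, and friendly."},
            {"role": "user",
             "content": f"Age range: {age_range}\n\nArticle:\n{article_text}"},
        ],
    )
    return response.choices[0].message.content
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;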

&lt;p&gt;I also envisioned a gentle, intuitive UI accessible enough for a child, but practical for a parent. With only hours to build, I needed clarity. So I turned to my secret productivity engine: &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt;, using this prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a parent, I want to share a blog post or article with my child, using the app AI assistant to summarize and rewrite the blog/article to be read by a kid. As a parent, I want to have an input text box or paste a blog link, and radio options to choose the child's age range. I want to use Llama 4 LLM provided by the Cerebras inference platform.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt; helped me unpack the idea, generate structured user stories, and tailor the implementation to fit my preferred tech stack. After a few iterations and a back-and-forth conversation with the &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt; AI assistant tool, I was able to refine the core requirements and user stories, aligning them perfectly with the vision in my head. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0kwrsw9ycvpkgs6qoqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0kwrsw9ycvpkgs6qoqo.png" alt="Image description" width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also made sure to scope the project to the tech stack I’m most comfortable with, focusing on quick deployment and simplicity. I had a clear architecture, scoped features, and a realistic build plan, all within minutes. You can even explore my exported project board here:&lt;br&gt;
👉 &lt;a href="https://github.com/users/smirfolio/projects/143/views/1?layout=board" rel="noopener noreferrer"&gt;Petit Prince Hackathon Project Board&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  💡 Choosing a Platform: Why I Went with &lt;a href="https://www.val.town/" rel="noopener noreferrer"&gt;val.town&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;For deployment, I wanted something fast, free, and frictionless. That’s when I chose &lt;a href="https://www.val.town/" rel="noopener noreferrer"&gt;val.town&lt;/a&gt;, a lightweight, serverless environment for deploying JavaScript/TypeScript apps. I was curious to try it, and it turned out to be a pleasant surprise. It gave me quick, no-hassle deployment and even came with a helpful &lt;a href="https://www.val.town/x/valdottown/Townie/code/prompts/system_prompt.txt" rel="noopener noreferrer"&gt;system prompt&lt;/a&gt; for "Vibe coding", so I could stay focused during the hackathon sprint.&lt;/p&gt;

&lt;h3&gt;
  
  
  🤖 AI-Augmented Workflow: TasksForge + Prompt Libraries + Aider
&lt;/h3&gt;

&lt;p&gt;To move efficiently from planning to implementation, I followed a structured, AI-assisted development workflow inspired by this video &lt;a href="https://www.youtube.com/watch?v=XMSGka9jUzk" rel="noopener noreferrer"&gt;AI Coding Tools: Prompt Libraries &amp;amp; Workflow Strategies for Maximum Impact!&lt;/a&gt;. The approach emphasizes using prompt libraries to consistently guide coding agents like Aider and Cody through a repeatable, smart dev loop.&lt;/p&gt;

&lt;p&gt;Here’s how I applied it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I used TasksForge.ai to generate clear, actionable user stories and requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, using a structured prompt library, I passed those user stories into &lt;a href="https://aider.chat/" rel="noopener noreferrer"&gt;Aider chat&lt;/a&gt;, my AI coding assistant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aider.chat/" rel="noopener noreferrer"&gt;Aider chat&lt;/a&gt;, the best "Vibe Coding" tool ever, helped me build features incrementally, handling UI generation, backend integration, and logic, while I focused on high-level design and creative problem solving. That accelerated development without losing quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining TasksForge.ai’s structured ideation with a prompt-driven AI coding loop, I was able to go from vision to working prototype in record time.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Final Result: Petit Prince App
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbgwieu8v5adi9hwebzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbgwieu8v5adi9hwebzf.png" alt="Image description" width="800" height="1157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can try the live version of my app here: 👉 &lt;a href="https://petitprince.val.run" rel="noopener noreferrer"&gt;petitprince.val.run&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://petitprince.val.run" rel="noopener noreferrer"&gt;Petit Prince&lt;/a&gt; is a thoughtful, security-aware, and mobile-friendly AI assistant designed for parents who want to share web content with their children in a safe, clear, and engaging way.&lt;/p&gt;

&lt;h3&gt;
  
  
  🌟 Key Features:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;✅ Safety &amp;amp; Security First: The app doesn’t just summarize, it also filters for child-appropriate content. This ensures that inappropriate or overly complex language never reaches your child.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🧠 AI-Powered Summarization &amp;amp; Simplification: Parents can paste in an article URL or a block of text, then choose the target age range. The app uses Llama 4 via Cerebras API to rewrite the content in a clear, simplified tone suitable for kids.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📱 Mobile-Friendly by Design: The UI was designed to work smoothly on phones and tablets. It’s perfect for on-the-go learning moments, bedtime stories, or curious questions while commuting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📤 Easy to Share: The app can be easily shared with other parents or family members through Facebook Messenger, allowing for collaborative parenting and meaningful family interactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🎯 Distraction-Free Content: One of the core ideas was to strip out casual web distractions, no ads, no cluttered layouts, no unrelated links. Just clean, focused content rewritten for kids to read, explore, and learn with curiosity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Want to Learn More?&lt;br&gt;
Check out the &lt;a href="https://petitprince.val.run/documentation" rel="noopener noreferrer"&gt;Petit Prince Documentation&lt;/a&gt; to see how the app works, the technologies involved, and the API logic behind it.&lt;/p&gt;

&lt;p&gt;See some already generated blogs &lt;a href="https://petitprince.val.run/blogs" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4w3jwj7oyb7locsec1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4w3jwj7oyb7locsec1l.png" alt="Image description" width="799" height="956"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎉 Conclusion
&lt;/h2&gt;

&lt;p&gt;I didn’t win the hackathon, but honestly? I had an amazing time building &lt;a href="https://petitprince.val.run" rel="noopener noreferrer"&gt;Petit Prince&lt;/a&gt;, and I’m proud of what I was able to deliver in just eight hours.&lt;/p&gt;

&lt;p&gt;Thanks to a powerful combo of &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt; (for scoping, planning, and rapid architecture) and the prompt-library workflow featured in &lt;a href="https://www.youtube.com/watch?v=XMSGka9jUzk" rel="noopener noreferrer"&gt;this video&lt;/a&gt;, I submitted a fully functional project one hour before the deadline, complete with a live deployment, documentation, and a delightful concept I’m genuinely excited to keep evolving.&lt;/p&gt;

&lt;p&gt;Whether you win or not, hackathons are about creative momentum, community, and pushing your limits. And this one definitely delivered.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Monitoring your platform app with UV and tasksMonitorTool: A Lightweight &amp; Secure Approach</title>
      <dc:creator>Samir</dc:creator>
      <pubDate>Sun, 25 May 2025 22:43:35 +0000</pubDate>
      <link>https://dev.to/smirfolio/monitoring-your-ai-assistant-with-uv-and-tasksmonitortool-a-lightweight-secure-approach-for-23c5</link>
      <guid>https://dev.to/smirfolio/monitoring-your-ai-assistant-with-uv-and-tasksmonitortool-a-lightweight-secure-approach-for-23c5</guid>
      <description>&lt;p&gt;In this post, I want to share a minimal and efficient method I implemented to monitor the health and system resources of the &lt;a href="http://tasksforge.ai/" rel="noopener noreferrer"&gt;tasksforge.ai&lt;/a&gt; SaaS platform using the simple script tool: &lt;a href="https://github.com/smirfolio/tasksMonitorTool" rel="noopener noreferrer"&gt;tasksMonitorTool&lt;/a&gt;, powered by the ultra-fast UV Python runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 Context: Managing the tasksforge.ai Platform
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://tasksforge.ai/" rel="noopener noreferrer"&gt;tasksforge.ai&lt;/a&gt; is structured as a modular, secure SaaS platform where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The backend (BE) is isolated inside a private internal network, only accessible by the Next.js frontend (FE) that handles user interactions.&lt;/li&gt;
&lt;li&gt;A separate SaaS management layer is under development to manage:

&lt;ul&gt;
&lt;li&gt;Subscriptions&lt;/li&gt;
&lt;li&gt;User permissions&lt;/li&gt;
&lt;li&gt;Server health monitoring (DB, website, APIs, system resources)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  ❌ Option 1: Internal /api/health Endpoint with psutil
&lt;/h3&gt;

&lt;p&gt;One approach is to integrate psutil directly into the backend and expose a /api/health endpoint.&lt;/p&gt;

&lt;p&gt;But this presents several drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduces extra load on the backend.&lt;/li&gt;
&lt;li&gt;Violates the isolation principle (monitoring is now tied to app runtime).&lt;/li&gt;
&lt;li&gt;Doesn’t scale for short-interval queries (e.g., every 10s).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✅ Option 2: External Monitor via UV Script and SSH
&lt;/h3&gt;

&lt;p&gt;To decouple monitoring from the backend and maintain performance, I implemented a separate, isolated monitoring script that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is written in Python&lt;/li&gt;
&lt;li&gt;Runs using UV for speed and isolation&lt;/li&gt;
&lt;li&gt;Is executed remotely via SSH from my local machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps the backend clean and the resource monitoring fully externalized.&lt;/p&gt;
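
&lt;p&gt;As a trimmed-down sketch of what such a script can look like (the real &lt;a href="https://github.com/smirfolio/tasksMonitorTool" rel="noopener noreferrer"&gt;tasksMonitorTool&lt;/a&gt; also checks the database and website), here’s a psutil-based version using uv’s inline script metadata (PEP 723), so &lt;code&gt;uv run&lt;/code&gt; resolves the dependency in an isolated environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# /// script
# dependencies = ["psutil"]
# ///
import json

import psutil


def snapshot() -&gt; dict:
    """Collect CPU, memory, and disk metrics as one flat dict."""
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    io = psutil.disk_io_counters()
    return {
        "status": "healthy",
        "cpu_usage": psutil.cpu_percent(interval=1),
        "memory_total": mem.total,
        "memory_available": mem.available,
        "memory_used": mem.used,
        "memory_percent": mem.percent,
        "disk_total": disk.total,
        "disk_used": disk.used,
        "disk_free": disk.free,
        "disk_percent": disk.percent,
        "disk_read_bytes": io.read_bytes,
        "disk_write_bytes": io.write_bytes,
    }


if __name__ == "__main__":
    # The real tool gates this behind a --json flag; the sketch always emits JSON.
    print(json.dumps(snapshot(), indent=4))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;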

&lt;h2&gt;
  
  
  🔒 Security Concerns: SSH &amp;amp; Internal Network Design
&lt;/h2&gt;

&lt;p&gt;Security was central to this approach. Here's how I designed it:&lt;/p&gt;

&lt;h3&gt;
  
  
  🧱 Network Isolation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The backend server is inside a private internal network, with no direct internet exposure.&lt;/li&gt;
&lt;li&gt;Only the frontend (Next.js) has access to the backend APIs; it acts as a gateway.&lt;/li&gt;
&lt;li&gt;The SaaS management platform is also isolated and lives in a separate internal network.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔐 SSH Access Model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Access to monitoring is performed via SSH using public/private key authentication only.&lt;/li&gt;
&lt;li&gt;No passwords. No open ports to the world.&lt;/li&gt;
&lt;li&gt;The monitor script can only be invoked through a secure SSH tunnel:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/forge-monitor user@internal-ip &lt;span class="s1"&gt;'uv run  ~/monitor/monitor.py --json'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🔄 SSH Tunnel (if remote access is required)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To manage from the internet, I occasionally use SSH tunneling:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/forge-monitor &lt;span class="nt"&gt;-L&lt;/span&gt; 9000:localhost:9000 user@gateway-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This maintains full encryption, access control, and keeps internal services hidden from the public internet.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🧠 The golden rule: internal services are never directly exposed — they are reachable only through private, authenticated tunnels.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  ⚙️ How the Monitor Works
&lt;/h2&gt;

&lt;p&gt;The tool checks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU, memory, and disk usage&lt;/li&gt;
&lt;li&gt;Database connection health&lt;/li&gt;
&lt;li&gt;Backend/website availability via HTTP requests or sockets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run it (locally or via SSH):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh user@server &lt;span class="s1"&gt;'uv run ~/monitor/monitor.py --json'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"healthy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"cpu_usage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;11.6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"memory_total"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33325473792&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"memory_available"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10426048512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"memory_used"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20279836672&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"memory_percent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;68.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disk_total"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;980799373312&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disk_used"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;339543687168&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disk_free"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;591358361600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disk_percent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;36.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disk_read_bytes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;21832974848&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"disk_write_bytes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;491035527168&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or use a custom alias:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;checkforge&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ssh user@server '~/monitor/run_monitor.sh'"&lt;/span&gt;
checkforge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🖼️ Visual Output
&lt;/h2&gt;

&lt;p&gt;From the command line:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5w1jhs7ggyqc95l23qe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5w1jhs7ggyqc95l23qe.png" alt=" " width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Integration into the SaaS platform management app:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzdrtpxaerwrl670m3q6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzdrtpxaerwrl670m3q6.png" alt=" " width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧩 Conclusion
&lt;/h2&gt;

&lt;p&gt;This approach ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ No performance hit to the AI backend&lt;/li&gt;
&lt;li&gt;✅ Full security via SSH tunneling and network isolation&lt;/li&gt;
&lt;li&gt;✅ Ease of access for DevOps without needing public endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out the tool here:&lt;br&gt;
&lt;a href="https://github.com/smirfolio/tasksMonitorTool" rel="noopener noreferrer"&gt;🔗 GitHub – tasksMonitorTool&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ll continue sharing how I scale and secure the &lt;a href="http://tasksforge.ai/" rel="noopener noreferrer"&gt;tasksforge.ai&lt;/a&gt; platform — stay tuned!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Breaking Change: Making `tasks_prompts_chain` Agnostic and More Powerful</title>
      <dc:creator>Samir</dc:creator>
      <pubDate>Mon, 07 Apr 2025 04:25:01 +0000</pubDate>
      <link>https://dev.to/smirfolio/breaking-change-making-taskspromptschain-agnostic-and-more-powerful-mjd</link>
      <guid>https://dev.to/smirfolio/breaking-change-making-taskspromptschain-agnostic-and-more-powerful-mjd</guid>
      <description>&lt;p&gt;Welcome, developers and AI enthusiasts! Today, I'm excited to share a major update to the &lt;a href="https://github.com/smirfolio/tasks_prompts_chain/releases/tag/0.1.0" rel="noopener noreferrer"&gt;tasks_prompts_chain&lt;/a&gt; library. This update introduces a breaking change that not only makes the library completely agnostic regarding the LLM SDK you use but also enhances how responses are handled, giving you the flexibility to choose between streaming and batch processing.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why this update is essential:&lt;/strong&gt; Achieving SDK-agnosticism and flexible response handling.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to initialize your chain with multiple LLM configurations.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to define prompts with explicit LLM selection.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to stream responses in real time or wait for complete outputs to access each prompt's result individually.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Preparing for future integrations with &lt;a href="https://tasksforge.ai/" rel="noopener noreferrer"&gt;Tasksforge.ai&lt;/a&gt;.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's dive in!&lt;/p&gt;




&lt;h2&gt;
  
  
  Why the Update? SDK-Agnosticism and Flexible Response Handling
&lt;/h2&gt;

&lt;p&gt;When I first built &lt;code&gt;tasks_prompts_chain&lt;/code&gt;, it was designed to depend solely on &lt;code&gt;openai.AsyncOpenAI&lt;/code&gt;. However, as more projects adopted the library, it became clear that a one-size-fits-all approach wasn’t enough. Projects often have varying needs and may prefer different LLM SDKs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SDK-Agnosticism&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
With this update, &lt;code&gt;tasks_prompts_chain&lt;/code&gt; is now agnostic—supporting not only &lt;code&gt;AsyncOpenAI&lt;/code&gt; but also &lt;code&gt;AsyncCerebras&lt;/code&gt; and &lt;code&gt;AsyncAnthropic&lt;/code&gt;. This means you’re no longer locked into a single LLM SDK. You pass your chosen LLM SDK to &lt;code&gt;TasksPromptsChain()&lt;/code&gt; during initialization, giving you the freedom to select the best tool for your project. And don’t worry—more SDKs will be supported in the future!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexible Response Handling&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
In addition to SDK flexibility, the update introduces a versatile response-handling mechanism. You can now choose between streaming responses in real time or waiting until the entire chain has finished processing, after which you can access each prompt's output individually. This flexibility lets you inspect and debug outputs at your own pace.&lt;/p&gt;


&lt;h3&gt;
  
  
  Step 1: Initializing the Chain with LLM Configurations
&lt;/h3&gt;

&lt;p&gt;The first step is to set up your chain by passing a list of LLM configurations. This centralizes the SDK setup, ensuring your project isn’t forced to stick with one implementation.&lt;/p&gt;

&lt;p&gt;Below is an example configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tasks_prompts_chain&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TasksPromptsChain&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tasks_prompts_chain.llm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AsyncOpenAI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AsyncAnthropic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AsyncCerebras&lt;/span&gt;

&lt;span class="n"&gt;llm_configs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Unique identifier for this LLM
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_class&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AsyncOpenAI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Your chosen LLM SDK class
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_options&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-openai-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Unique identifier for this LLM
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_class&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AsyncAnthropic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Your chosen LLM SDK class
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_options&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude-3-sonnet-20240229&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-anthropic-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;8192&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Unique identifier for this LLM
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_class&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AsyncCerebras&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Your chosen LLM SDK class
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_options&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama-3.3-70b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-cerebras-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;base_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.cerebras.ai/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="c1"&gt;# Initialize your chain with the LLM configurations
&lt;/span&gt;&lt;span class="n"&gt;chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TasksPromptsChain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm_configs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my system Prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;final_result_placeholder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_analysis&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system_apply_to_all_prompts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Defining Prompts with Explicit LLM Selection
&lt;/h3&gt;

&lt;p&gt;Once your chain is initialized, the next step is to define your prompts. Each prompt can now specify which LLM to use by including an llm_id attribute. This explicit routing lets you control the execution flow of your prompt chain.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;prompts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;generate_idea&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Give me a cool project idea about {{ topic }}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MARKDOWN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_placeholder&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elaboration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Use the 'claude' LLM for this prompt
&lt;/span&gt;    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;generate_description&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write a detailed README for: {{ elaboration }}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MARKDOWN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_placeholder&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;doc_readme&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Use the 'gpt' LLM for this task
&lt;/span&gt;    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By including the llm_id in each prompt, you gain granular control over which language model processes each task.&lt;/p&gt;




&lt;h2&gt;
  
  
  Flexible Response Handling: Streaming vs. Batch Processing
&lt;/h2&gt;

&lt;p&gt;One of the most exciting improvements is how the library handles responses. You now have two modes available:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Streaming Response
&lt;/h3&gt;

&lt;p&gt;If you prefer real-time feedback, you can stream responses as they come in. For example, printing each response chunk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_chain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompts&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Batch Response Processing
&lt;/h3&gt;

&lt;p&gt;Alternatively, you might prefer to wait until all responses are received. When the chain emits the special marker &lt;code&gt;&amp;lt;tasks-sys&amp;gt;Done&amp;lt;/tasks-sys&amp;gt;&lt;/code&gt;, you can then access each prompt’s output individually. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_chain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompts&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Optionally, print interim chunks:
&lt;/span&gt;    &lt;span class="c1"&gt;# print(response)
&lt;/span&gt;
    &lt;span class="c1"&gt;# When the final output is ready:
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;tasks-sys&amp;gt;Done&amp;lt;/tasks-sys&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Final Results:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;placeholder&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elaboration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;doc_readme&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;]:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;placeholder&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_result&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;placeholder&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach offers you full control: either enjoy real-time outputs or wait for complete responses to process further.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing for the Future: Integration with Tasksforge.ai
&lt;/h2&gt;

&lt;p&gt;These improvements in tasks_prompts_chain are just the beginning. With the library now SDK-agnostic and featuring flexible response handling, it’s perfectly poised for deeper integration with Tasksforge.ai. Imagine a platform where you can effortlessly swap LLMs, inspect individual step outputs, and build robust AI workflows—all while maintaining complete freedom over your chosen SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Ready to harness the power of a more flexible and powerful prompt chain? Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update your initialization:&lt;/strong&gt; Pass your preferred LLM SDKs via the llm_configs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Define your prompts:&lt;/strong&gt; Use the llm_id attribute to route tasks to your chosen LLM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choose your response mode:&lt;/strong&gt; Stream in real time or process outputs in batch—whichever suits your workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For more details and advanced examples, check out the updated documentation:&lt;br&gt;
👉 &lt;a href="https://github.com/smirfolio/tasks_prompts_chain/blob/main/README.md" rel="noopener noreferrer"&gt;tasks_prompts_chain&lt;/a&gt; on GitHub&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Breaking changes can be a bit daunting, but they’re a vital step in evolving our tools to be more flexible and powerful. With this update, tasks_prompts_chain is no longer tied to a specific LLM SDK and offers you a choice in response handling. Whether you're debugging a complex prompt chain or building scalable AI workflows, these changes are designed to enhance your productivity.&lt;/p&gt;

&lt;p&gt;I’m excited to see how you integrate these improvements into your projects—and I can’t wait for you to experience the full potential when integrated with Tasksforge.ai. Your feedback is invaluable, so please share your experiences, ideas, or any challenges you encounter.&lt;/p&gt;

&lt;p&gt;Happy coding,&lt;br&gt;
Samir&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tasksforge</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>I created a simple, minimalist Prompt Chain library</title>
      <dc:creator>Samir</dc:creator>
      <pubDate>Thu, 13 Feb 2025 02:52:43 +0000</pubDate>
      <link>https://dev.to/smirfolio/i-created-a-simple-minimalist-prompt-chain-library-for-tasksforge-le4</link>
      <guid>https://dev.to/smirfolio/i-created-a-simple-minimalist-prompt-chain-library-for-tasksforge-le4</guid>
      <description>&lt;h2&gt;
  
  
  Solving a Real Challenge in TasksForge.ai
&lt;/h2&gt;

&lt;p&gt;During the development of &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt;, I encountered a key challenge: managing complex interactions between prompts that generate and elaborate projects and their tasks. I needed a seamless way to chain prompts together, ensuring that each output could intelligently guide the next step in the workflow. After exploring available solutions, I realized that existing libraries were either too complex, too rigid, or lacked the flexibility I needed.&lt;/p&gt;

&lt;p&gt;So, I built my own solution: &lt;a href="https://pypi.org/project/tasks-prompts-chain/" rel="noopener noreferrer"&gt;tasks_prompts_chain&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is &lt;a href="https://pypi.org/project/tasks-prompts-chain/" rel="noopener noreferrer"&gt;tasks_prompts_chain&lt;/a&gt;?
&lt;/h2&gt;

&lt;p&gt;tasks_prompts_chain is a minimalist Python library designed for efficient and flexible LLM prompt chaining. Whether you’re working with AI-assisted task automation, chatbots, content generation, or any multi-step AI workflow, this library simplifies the process of structuring sequential prompts in a clean and reusable way.&lt;/p&gt;


&lt;h2&gt;
  
  
  Core Features:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Simple API&lt;/strong&gt;: Define and chain prompts effortlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access to each step’s output&lt;/strong&gt;: Easily access each prompt’s result, whether to save it to a DB or just to debug your prompting workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic Inputs&lt;/strong&gt;: Pass variables dynamically between prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lightweight &amp;amp; Efficient&lt;/strong&gt;: No unnecessary dependencies, keeping it lean and fast.&lt;/p&gt;

&lt;p&gt;You can check out the latest version of the library here: &lt;a href="https://pypi.org/project/tasks-prompts-chain/" rel="noopener noreferrer"&gt;tasks_prompts_chain on PyPI&lt;/a&gt;.&lt;/p&gt;
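
&lt;p&gt;To give a feel for the API, here’s a compact end-to-end sketch. It borrows the constructor and call shapes (&lt;code&gt;execute_chain&lt;/code&gt;, &lt;code&gt;get_result&lt;/code&gt;, output placeholders) from the 0.1.0 release notes above, so treat the exact arguments as illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio

from tasks_prompts_chain import TasksPromptsChain
from tasks_prompts_chain.llm import AsyncOpenAI

llm_configs = [{
    "llm_id": "gpt",
    "llm_class": AsyncOpenAI,
    "model_options": {"model": "gpt-4o", "api_key": "your-openai-api-key"},
}]

chain = TasksPromptsChain(
    llm_configs,
    "You are a concise project assistant.",  # system prompt
    final_result_placeholder="doc_readme",
)

prompts = [
    {
        "name": "generate_idea",
        "prompt": "Give me a cool project idea about home automation",
        "output_format": "MARKDOWN",
        "output_placeholder": "elaboration",
    },
    {
        # The first step's output is injected through its placeholder.
        "name": "generate_description",
        "prompt": "Write a detailed README for: {{ elaboration }}",
        "output_format": "MARKDOWN",
        "output_placeholder": "doc_readme",
    },
]


async def main() -&gt; None:
    # Stream chunks as they arrive, then read back each step's result.
    async for chunk in chain.execute_chain(prompts):
        print(chunk, end="")
    print(chain.get_result("elaboration"))


asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;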

&lt;h2&gt;
  
  
  Why Open Source?
&lt;/h2&gt;

&lt;p&gt;I strongly believe in the power of open-source collaboration. By making &lt;a href="https://github.com/smirfolio/tasks_prompts_chain" rel="noopener noreferrer"&gt;tasks_prompts_chain&lt;/a&gt; publicly available, I hope to contribute to the AI development community while also gaining valuable feedback from other developers who face similar challenges. Open-source projects thrive on shared knowledge, and I’m excited to see how others might build upon and improve this tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Glimpse into TasksForge.ai
&lt;/h2&gt;

&lt;p&gt;This project is more than just a library; it’s part of a bigger vision. TasksForge.ai is a SaaS platform designed to streamline AI-assisted project management and task generation. By leveraging AI-driven workflows and automation, it empowers developers, entrepreneurs, and teams to work smarter and faster.&lt;/p&gt;

&lt;p&gt;🛠 &lt;strong&gt;With &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt;, you can&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Project Insights: To truly understand a project, you need the right insights. This platform helps by providing customizable options to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Highlight the key features of your project.&lt;/li&gt;
&lt;li&gt;Focus on specific aspects, such as performance, scalability, or user experience.&lt;/li&gt;
&lt;li&gt;Offer actionable recommendations tailored to your needs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Export to your favorite project management app: Seamlessly export your elaborated projects to your GitHub Projects account or Atlassian account.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The generated tasks can help you kickstart your idea with confidence and integrate easily into your existing workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tasks_prompts_chain&lt;/strong&gt; is just one of the many building blocks that power this ecosystem, helping ensure smooth, logical, and structured interactions between AI-driven workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;🚀 &lt;strong&gt;A stable release is now available on the official Python Package Index (PyPI)&lt;/strong&gt;. I’m actively working on making the library more LLM SDK-agnostic—while it currently works seamlessly with OpenAI’s SDK, future updates will enhance its compatibility with other providers. If you’re interested in AI-driven task automation or prompt chaining, I’d love for you to try it out and share your feedback!&lt;/p&gt;

&lt;h2&gt;
  
  
  How You Can Get Involved:
&lt;/h2&gt;

&lt;p&gt;✅ Try out tasks_prompts_chain and contribute to its development.&lt;/p&gt;

&lt;p&gt;✅ Test and use &lt;a href="https://www.tasksforge.ai/" rel="noopener noreferrer"&gt;TasksForge.ai&lt;/a&gt; and let me know if the project's generation meets your expectations.&lt;/p&gt;

&lt;p&gt;✅ Share your thoughts—let’s build something great together!&lt;/p&gt;

&lt;p&gt;Stay tuned for more updates, and let’s shape the future of AI-powered workflows! 🚀&lt;/p&gt;

</description>
      <category>sideprojects</category>
      <category>ai</category>
      <category>promptengineering</category>
      <category>tasksforge</category>
    </item>
    <item>
      <title>Simplifying Project Management with AI-Powered Task Generation</title>
      <dc:creator>Samir</dc:creator>
      <pubDate>Sun, 22 Dec 2024 23:44:09 +0000</pubDate>
      <link>https://dev.to/smirfolio/simplifying-project-management-with-ai-powered-task-generation-2l6l</link>
      <guid>https://dev.to/smirfolio/simplifying-project-management-with-ai-powered-task-generation-2l6l</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yllpjh62h4ssbn5iy99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yllpjh62h4ssbn5iy99.png" alt="TasksForge logo" width="800" height="112"&gt;&lt;/a&gt;&lt;br&gt;
Managing projects in the fast-paced world of software development can be challenging. As a developer, I understand the need for tools that help streamline workflows and make planning more manageable. That’s why I created this platform – a simple yet effective solution built with FastAPI and Next.js, using AI to make project management easier and more efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is This Platform?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.tasksforge.ai" rel="noopener noreferrer"&gt;TasksForge&lt;/a&gt; platform is designed to help bridge the gap between a rough idea and a detailed project plan. TasksForge leverages the power of &lt;a href="https://inference-docs.cerebras.ai/introduction" rel="noopener noreferrer"&gt;Cerebras AI&lt;/a&gt; inference platform to deliver cutting-edge capabilities to our users. By integrating &lt;a href="https://inference-docs.cerebras.ai/introduction" rel="noopener noreferrer"&gt;Cerebras&lt;/a&gt; light-speed AI inference, the platform utilizes Llama 3.1 70B to bootstrap the management of your IT project. By leveraging advanced AI models, it takes brief project descriptions and turns them into actionable task lists. Whether planning software design, mapping out web app features, or tackling complex projects, this tool is here to lend a hand. &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of the Platform
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;AI-Powered Task Generation: One of the platform’s main features is its ability to create detailed task lists from short project descriptions. With it, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quickly generate a to-do list for your software design process.&lt;/li&gt;
&lt;li&gt;Plan out features for your next web application.&lt;/li&gt;
&lt;li&gt;Save time and focus on what matters most, while the AI handles the initial planning.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Project Insights: To truly understand a project, you need the right insights. This platform helps by providing customizable options to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Highlight the key features of your project.&lt;/li&gt;
&lt;li&gt;Focus on specific aspects, such as performance, scalability, or user experience.&lt;/li&gt;
&lt;li&gt;Offer actionable recommendations tailored to your needs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Export to GitHub Projects: Seamlessly export your elaborated projects to your GitHub Projects account. The generated tasks can help you kickstart your idea with confidence and integrate easily into your existing workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why I Chose FastAPI and Next.js
&lt;/h2&gt;

&lt;p&gt;For the backend, I opted for FastAPI due to its speed and ability to handle APIs efficiently. On the frontend, Next.js was a natural choice for its interactive and dynamic user interface capabilities. Together, these frameworks provide the stability and performance needed to create a reliable application.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.tasksforge.ai" rel="noopener noreferrer"&gt;TasksForge&lt;/a&gt; platform is built with a focus on reliability and efficiency, leveraging cutting-edge technologies to deliver an exceptional user experience:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend with Next.js&lt;/strong&gt;: Handles user interaction and provides a dynamic, responsive interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend with FastAPI&lt;/strong&gt;: Powers the API endpoints, ensuring fast and secure data processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL Database&lt;/strong&gt;: Stores generated projects and tasks in a structured and easily accessible format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration with Cerebras AI&lt;/strong&gt;: Sends API calls to &lt;a href="https://inference-docs.cerebras.ai/introduction" rel="noopener noreferrer"&gt;Cerebras&lt;/a&gt; to query the Llama 3.1 70B model, enabling precise and powerful task generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub GraphQL API&lt;/strong&gt;: Facilitates seamless export of projects and tasks to your GitHub Projects account, allowing for effortless integration into your development workflow.&lt;/p&gt;
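
&lt;p&gt;As a sketch of what that export looks like at the API level, here’s a hypothetical helper that adds one generated task to a GitHub Project as a draft item via the standard &lt;code&gt;addProjectV2DraftIssue&lt;/code&gt; mutation (token, project ID, and error handling simplified):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import requests

GITHUB_GRAPHQL = "https://api.github.com/graphql"

# Standard GitHub Projects (v2) mutation: add one draft item to a project.
MUTATION = """
mutation($projectId: ID!, $title: String!, $body: String!) {
  addProjectV2DraftIssue(input: {projectId: $projectId, title: $title, body: $body}) {
    projectItem { id }
  }
}
"""


def export_task(token: str, project_id: str, title: str, body: str) -&gt; str:
    """Create a draft item in a GitHub Project and return its id."""
    resp = requests.post(
        GITHUB_GRAPHQL,
        headers={"Authorization": f"Bearer {token}"},
        json={"query": MUTATION,
              "variables": {"projectId": project_id, "title": title, "body": body}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["addProjectV2DraftIssue"]["projectItem"]["id"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;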

&lt;p&gt;&lt;em&gt;Input&lt;/em&gt;: Describe your project in a few words. For example, “A platform with real-time chat, analytics, and user authentication.”&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Processing&lt;/em&gt;: The AI analyzes your input and generates detailed insights and task lists.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Output&lt;/em&gt;: You receive a clear roadmap, including milestones, task dependencies, and estimated timeframes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Export&lt;/em&gt;: Export the project and its tasks directly to your GitHub Projects account for seamless integration into your development workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration with Cerebras AI: Task Generation with Llama 3.1 70B
&lt;/h2&gt;

&lt;p&gt;Our platform, &lt;a href="https://www.tasksforge.ai" rel="noopener noreferrer"&gt;TasksForge&lt;/a&gt;,  integrates seamlessly with &lt;a href="https://inference-docs.cerebras.ai/introduction" rel="noopener noreferrer"&gt;Cerebras AI&lt;/a&gt;, leveraging its powerful hardware to query the Llama 3.1 70B model via API calls. This integration enables highly precise and powerful task generation, helping users refine their project ideas with the assistance of an advanced language model.&lt;/p&gt;

&lt;p&gt;To ensure users can only utilize the platform for generating project insights, we have implemented a structured approach to prompt design. The process begins with a system prompt that securely controls and limits the scope of queries to project-related tasks. This system prompt acts as a safeguard, preventing the model from being used for unintended purposes, and ensures the platform is used solely for generating actionable insights.&lt;/p&gt;

&lt;p&gt;The user's input is integrated into the prompt as part of a user-specific project description entry. Once the project description is provided, a user prompt is generated to specify the desired template for the output. The model will then generate responses following a predefined structure that suits the user's project needs. This template prompting technique not only enhances the relevance of the output but also ensures that responses are aligned with the intended project scope.&lt;/p&gt;

&lt;p&gt;The combination of these techniques, the secure system prompt, the user’s project description, and the template-driven output, results in a dynamic yet controlled interaction with the LLM, enabling precise, predictably formatted, and actionable insights for every project.&lt;/p&gt;
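
&lt;p&gt;A simplified sketch of that prompt layering, with illustrative wording (the real system prompt and output template are more elaborate):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative prompt layering: a scope-limiting system prompt, the user's
# project description, and a template that fixes the output structure.
SYSTEM_PROMPT = (
    "You are a project-planning assistant. Only produce project insights "
    "and task lists; politely refuse any request outside that scope."
)

OUTPUT_TEMPLATE = (
    "Return the plan using exactly this structure:\n"
    "## Milestones\n"
    "## Tasks (with dependencies)\n"
    "## Estimated timeframes"
)


def build_messages(project_description: str) -&gt; list[dict]:
    """Assemble the chat messages sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Project description: {project_description}\n\n{OUTPUT_TEMPLATE}"},
    ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;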

&lt;h2&gt;
  
  
  Benefits for Project Managers, Developers, and Freelancers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For Project Managers&lt;/strong&gt;: Understand project scope, organize tasks, and communicate plans effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Developers&lt;/strong&gt;: Start coding sooner with a clear plan, reducing the chances of miscommunication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Freelancers&lt;/strong&gt;: Present well-documented and organized projects to clients, ensuring clear communication and professional delivery of project plans.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Tool to Support Your Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.tasksforge.ai" rel="noopener noreferrer"&gt;TasksForge&lt;/a&gt; platform isn’t trying to replace your expertise; it’s here to support you. Whether you’re working on a personal project or collaborating with a team, it’s designed to make things simpler and more efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1: Advanced Prompt Engineering for Task Generation&lt;/strong&gt;: Improve prompt sophistication to ensure the generated tasks align with structured software development methodologies. Integrate UML software design principles, particularly sequence diagrams, use cases, and user stories, to visually represent task dependencies and processes; this way we will have well-generated documentation at the beginning of the project.&lt;br&gt;
&lt;strong&gt;2: Integration with Project Management Platforms&lt;/strong&gt;: Enhance the platform by integrating with popular project management tools, beginning with Atlassian. Future integrations with other tools such as Trello or Asana will also be considered to offer flexibility for various project teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interested in Trying It Out?
&lt;/h2&gt;

&lt;p&gt;If you’re curious to see how this platform can fit into your workflow, give it a try at &lt;a href="https://www.tasksforge.ai" rel="noopener noreferrer"&gt;TasksForge&lt;/a&gt;. I’d love to hear your feedback and suggestions for improvement. Together, we can continue refining this tool to make it even more helpful. &lt;/p&gt;

</description>
      <category>sideprojects</category>
      <category>ai</category>
      <category>projectmanagement</category>
    </item>
  </channel>
</rss>
