AI is now widely used to solve a broad range of problems, especially in the tech industry and among software developers. This article explores one specific use case of AI, focusing on a few key aspects while keeping the end user in mind.
In summary, the task organizer is a software tool designed to help users manage tasks over various timeframes. While powerful project/task management solutions like Motion and ClickUp already exist, here we'll demonstrate the capabilities of Still.js by discussing and building a small proof of concept (PoC) covering one small, specific slice of the problem.
AI Agent vs Agentic AI
According to Google's AI Overview, "AI agents are specialized tools designed for specific, well-defined tasks, while Agentic AI represents a broader concept of autonomous, goal-driven systems that can adapt to changing situations, and coordinate actions with minimal human oversight."
From Generative AI to Generative UI
This concept involves generating the UI dynamically based on the user's prompt: different prompts produce different UI components. In our case, we'll handle it with a client-side approach.
How will our agent essentially work?
The user writes a text describing the tasks they'll do, specifying what, how, and when
The content is submitted to the agent/LLM, which generates the task(s)
The UI parses the LLM response and decides which predefined component to render
The user can then ask the agent to mark tasks as completed.
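The parsing-and-rendering step above can be sketched as a small routing function. The JSON shape (`action`, `tasks`, `taskId`) is an assumption of what we'd instruct the LLM to reply with, not the actual Still.js API:

```javascript
// Decide which predefined component the UI should render based on the
// LLM's structured reply. The reply shape is an illustrative assumption:
// { "action": "createTasks", "tasks": [{ "what": "...", "when": "..." }] }
function decideComponent(llmReply) {
  const payload = JSON.parse(llmReply);
  switch (payload.action) {
    case 'createTasks':
      // render a TaskDay group holding the generated tasks
      return { component: 'TaskDay', tasks: payload.tasks };
    case 'completeTask':
      // mark an existing task as done
      return { component: 'Task', taskId: payload.taskId, done: true };
    default:
      // unknown action: fall back to a generic display
      return { component: 'Fallback', raw: payload };
  }
}
```

Asking the model to answer only with structured JSON is what makes this kind of generative UI predictable enough to route to predefined components.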
Different points of the design are addressed here; for the implementation, there is a hands-on YouTube video where a tiny implementation is built from scratch. Below is the design diagram depicting an overview of the solution.
We'll consider essentially three main parts, the AI provider, a custom backend API, and the UI, which we describe as follows:
AI Provider supplies intelligent capabilities
Backend provides a robust and secure integration with the AI, and also serves the Frontend
Frontend handles user input and displays the AI's results.
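The backend's security role can be sketched as a simple proxy: the frontend only ever calls our own API, and the provider key never reaches the browser. The route name, URL, and payload shape below are illustrative assumptions:

```javascript
// Build the outbound request our backend would forward to the AI provider.
// Keeping this on the server means the API key is never shipped to the UI.
function buildProviderRequest(userText, apiKey) {
  return {
    url: 'https://api.example-ai-provider.com/v1/chat/completions', // placeholder URL
    headers: {
      Authorization: `Bearer ${apiKey}`, // stays server-side
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: userText }],
    }),
  };
}
```

The frontend would then call something like `POST /api/agent` on our backend, which in turn uses this request to reach the provider.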
In this use case, Still.js features like runtime form generation and centralized form validation are leveraged to manage task completion. The structure includes three components:
Home (main component),
TaskDay (group of tasks),
Task (individual tasks). Each Task reports back to the Home component as a form, enabling tasks to be marked as completed.
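The Home/TaskDay/Task reporting relationship can be sketched in plain JavaScript. This is an illustrative model of the centralized-validation idea, not the actual Still.js runtime-form API:

```javascript
// Home holds the completion state that individual Task components report into,
// mirroring the idea of a centrally validated, runtime-generated form.
class HomeState {
  constructor(taskIds) {
    // every task starts as not completed
    this.completion = Object.fromEntries(taskIds.map((id) => [id, false]));
  }
  markCompleted(taskId) {
    if (!(taskId in this.completion)) {
      throw new Error(`unknown task: ${taskId}`);
    }
    this.completion[taskId] = true;
  }
  allDone() {
    // central check over every child Task's reported state
    return Object.values(this.completion).every(Boolean);
  }
}
```

In the real app, Still.js generates the form at runtime from the LLM output and validates it centrally; the sketch only shows the direction of the data flow (children report up, the parent validates).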
We'll use Groq's infrastructure for the LLM; however, other AI providers such as Google Gemini, ChatGPT, Copilot, LLaMA, or even offline/on-prem options like Ollama could also be used.
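A call to Groq could look roughly like the sketch below, using its OpenAI-compatible chat-completions endpoint. The model name and system prompt are assumptions, and the `fetchImpl` parameter is an illustrative injection point for testing or swapping providers:

```javascript
// Send the user's prompt to Groq's OpenAI-compatible chat endpoint and
// return the assistant's reply text. Model name is an assumption; check
// the provider's docs for currently available models.
async function askAgent(userText, apiKey, fetchImpl = fetch) {
  const res = await fetchImpl('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'llama-3.1-8b-instant', // assumed model id
      messages: [
        { role: 'system', content: 'Reply only with JSON describing tasks.' },
        { role: 'user', content: userText },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the endpoint is OpenAI-compatible, switching to another provider is mostly a matter of changing the URL, key, and model id.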
Prompts are sent from the Still.js-enabled UI to the AI engine via chat, mainly as text, though voice or audio could also be supported.
In production, long-term memory for LLMs or agents often requires connecting to external sources such as vector databases, highlighting the need for a strong backend. Our agent will have a short-term memory, which will be handled as shown in the design below:
Below is an overview of our agent's final result:
Tool use and workflow management are key in Agentic AI, enabling both agency (thinking) and predictability (acting). For some tasks, a robust backend better supports these capabilities. The agent we'll build has moderate predictability.
In the demo, we're connecting the UI straight to the AI engine to keep the hands-on video tutorial short; however, this is also a valid scenario when using an ephemeral API token.
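The ephemeral-token pattern can be sketched as follows: the backend mints a short-lived token, and the UI checks freshness and refreshes it before calling the provider directly. The token shape, the refresh route, and the lifetime are all illustrative assumptions:

```javascript
// A token is "fresh" while the current time is before its expiry.
function isTokenFresh(token, nowMs = Date.now()) {
  return Boolean(token) && nowMs < token.expiresAtMs;
}

// Return the current token if still valid, otherwise fetch a new one from
// our backend (e.g. GET /api/ephemeral-token, an assumed route).
async function getToken(current, fetchToken, nowMs = Date.now()) {
  if (isTokenFresh(current, nowMs)) return current;
  return fetchToken();
}
```

With this in place, the browser only ever holds a token that expires quickly, so connecting the UI straight to the AI engine doesn't expose a long-lived secret.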
Are you still here? Less talking, more doing: click here and follow the tutorial to build your first AI agent.
See you there 👊🏽