Introduction
As part of the HNG Internship Stage 3 Backend Task, I built an AI-powered task management agent that integrates seamlessly with Telex.im. This wasn't just another CRUD application—it required implementing natural language processing, background job scheduling, and the A2A (Agent-to-Agent) protocol for real-time chat integration.
In this post, I'll walk you through the entire development process, from architecture decisions to deployment challenges, and share the lessons I learned along the way.
Why a Task Tracker Agent?
When presented with several options (code helper, task tracker, or data summarizer), I chose the task tracker for three key reasons:
Practical Value: Every team struggles with task management. Having an AI agent that understands natural language makes task tracking effortless.
Technical Depth: This project allowed me to showcase multiple backend skills—API design, database management, background processing, natural language parsing, and cloud deployment.
User Experience: The difference between typing create task "Deploy to production" due Friday 5pm and filling out a form is significant. Natural language makes the agent feel intelligent and intuitive.
Tech Stack & Architecture
Core Technologies
ASP.NET Core 8.0: For building a robust, high-performance Web API
Entity Framework Core: ORM for database operations with SQLite
Hangfire: Background job processing for automated reminders
Railway: Cloud deployment platform with Docker support
Project Architecture
I structured the project following clean architecture principles:
```
TaskTrackerAgent/
├── Controllers/   # API endpoints (A2A integration)
├── Models/        # Domain entities and DTOs
├── Services/      # Business logic layer
├── Data/          # Database context and configurations
└── Program.cs     # Application startup and configuration
```
This separation of concerns makes the codebase maintainable and testable.
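To make this concrete, here is a rough sketch of how the layers get wired together in Program.cs. The service and context names (ITaskService, INlpService, TaskTrackerDbContext) are illustrative placeholders, not necessarily the exact names in the repository:

```csharp
// Program.cs (simplified) - wiring the layers together
// (ITaskService, INlpService, and TaskTrackerDbContext are placeholder names)
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Data layer: EF Core with SQLite
builder.Services.AddDbContext<TaskTrackerDbContext>(options =>
    options.UseSqlite(builder.Configuration.GetConnectionString("Default")));

// Service layer: business logic and natural language parsing
builder.Services.AddScoped<ITaskService, TaskService>();
builder.Services.AddScoped<INlpService, NlpService>();

var app = builder.Build();
app.MapControllers();
app.Run();
```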
Key Features Implemented
- Natural Language Processing

The most challenging, and most rewarding, part was building the NLP service. Users shouldn't have to remember exact command syntax; the agent should understand them. I implemented regex-based parsing that handles:

Date Recognition:

```csharp
// Understands: today, tomorrow, Monday, Friday, Nov 5, December 25
if (lowerMessage.Contains("tomorrow"))
{
    return now.Date.AddDays(1).AddHours(17); // Default to 5 PM
}

// Day of week parsing (array order matches the DayOfWeek enum, Sunday = 0)
var daysOfWeek = new[] { "sunday", "monday", "tuesday", ... };
foreach (var day in daysOfWeek)
{
    if (lowerMessage.Contains(day))
    {
        var targetDay = (DayOfWeek)Array.IndexOf(daysOfWeek, day);
        var daysUntil = ((int)targetDay - (int)now.DayOfWeek + 7) % 7;
        // ... calculate target date
    }
}
```
Time Parsing:
```csharp
// Handles: 3pm, 5:30pm, 10am
var timeMatch = Regex.Match(message, @"(\d{1,2})(?::(\d{2}))?\s*(am|pm)", RegexOptions.IgnoreCase);
if (timeMatch.Success)
{
    var hour = int.Parse(timeMatch.Groups[1].Value);
    var minutes = timeMatch.Groups[2].Success ? int.Parse(timeMatch.Groups[2].Value) : 0;
    var meridiem = timeMatch.Groups[3].Value.ToLower();

    if (meridiem == "pm" && hour != 12)
        hour += 12;
    else if (meridiem == "am" && hour == 12)
        hour = 0; // midnight

    return targetDate.AddHours(hour).AddMinutes(minutes);
}
```
Command Recognition:
The NLP service parses user intent and extracts parameters:
```csharp
public TaskCommand ParseCommand(string message)
{
    var lowerMessage = message.ToLower().Trim();
    var command = new TaskCommand();

    if (lowerMessage.Contains("create task") ||
        lowerMessage.Contains("add task"))
    {
        command.Type = CommandType.CreateTask;
        command.Title = ExtractTaskTitle(message);
        command.DueDate = ExtractDueDate(message);
        return command;
    }

    // ... more command types

    return command; // nothing matched: Type stays at its default
}
```
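For context, the TaskCommand object the parser fills in can be kept very small. This is a minimal sketch based only on the properties used above; the extra CommandType members are assumptions:

```csharp
// Minimal sketch of the parser's output type; only Type, Title, and DueDate
// appear in the snippets above, the remaining enum members are assumptions
public enum CommandType
{
    Unknown = 0,
    CreateTask,
    ListTasks,
    CompleteTask,
    SetReminder
}

public class TaskCommand
{
    public CommandType Type { get; set; } = CommandType.Unknown;
    public string? Title { get; set; }
    public DateTime? DueDate { get; set; }
}
```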
- A2A Protocol Integration
Telex.im uses the A2A (Agent-to-Agent) protocol for chat integration. I implemented this in the A2AController:
```csharp
[HttpPost("taskAgent")]
public async Task<IActionResult> ProcessMessage([FromBody] A2ARequest request)
{
    try
    {
        var userId = request.User?.Id ?? "anonymous";
        var channelId = request.ChannelId;

        // Parse the natural language command
        var command = _nlpService.ParseCommand(request.Message);

        // Execute the command
        var response = await ExecuteCommand(command, userId, channelId);

        return Ok(new A2AResponse { Text = response });
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Error processing message");
        return Ok(new A2AResponse
        {
            Text = "Sorry, I encountered an error. Please try again."
        });
    }
}
```
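The A2ARequest and A2AResponse DTOs bound by this endpoint can be kept minimal. This is a sketch matching how the controller uses them; the actual Telex payload may carry additional fields:

```csharp
// Minimal request/response DTOs matching the controller's usage
// (field set is an assumption; the actual Telex A2A payload may include more)
public class A2ARequest
{
    public string Message { get; set; } = string.Empty;
    public string? ChannelId { get; set; }
    public A2AUser? User { get; set; }
}

public class A2AUser
{
    public string? Id { get; set; }
}

public class A2AResponse
{
    public string Text { get; set; } = string.Empty;
}
```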
The key insight here: always return 200 OK. Even on errors, you return a friendly message to the user. This prevents breaking the chat experience.

- Database Design
I used Entity Framework Core with SQLite for simplicity and portability:
```csharp
public class TaskItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime? DueDate { get; set; }
    public bool IsCompleted { get; set; }
    public string CreatedBy { get; set; }        // User ID from Telex
    public string? TelexChannelId { get; set; }  // Channel context
}

public class Reminder
{
    public int Id { get; set; }
    public int TaskId { get; set; }
    public TaskItem? Task { get; set; }          // Navigation property used by the reminder job
    public DateTime ReminderTime { get; set; }
    public bool IsSent { get; set; }
    public string UserId { get; set; }
}
```
The database schema is simple but effective. Each task is tied to a user and optionally to a Telex channel, allowing for both personal and team task management.
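A minimal DbContext backing these entities looks roughly like this (TaskTrackerDbContext is a placeholder name used throughout this post's sketches, not necessarily the class name in the repo):

```csharp
using Microsoft.EntityFrameworkCore;

// Minimal EF Core context for the two entities above (class name is a placeholder)
public class TaskTrackerDbContext : DbContext
{
    public TaskTrackerDbContext(DbContextOptions<TaskTrackerDbContext> options)
        : base(options) { }

    public DbSet<TaskItem> Tasks => Set<TaskItem>();
    public DbSet<Reminder> Reminders => Set<Reminder>();
}
```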
- Background Job Processing

For reminders, I used Hangfire to run background jobs:

```csharp
// In Program.cs - configure Hangfire with SQLite storage
builder.Services.AddHangfire(configuration => configuration
    .UseSQLiteStorage(connectionString));
builder.Services.AddHangfireServer();

// Schedule recurring job
RecurringJob.AddOrUpdate("check-reminders",
    () => reminderService.CheckAndSendReminders(),
    Cron.Minutely());
```
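The snippet above assumes a reminderService instance is available at startup; an alternative, shown here as a sketch, is Hangfire's generic overload, which resolves the service from the DI container each time the job fires:

```csharp
// Sketch: the generic overload resolves ReminderService from the DI container
// on every run, so no instance is needed at registration time
RecurringJob.AddOrUpdate<ReminderService>(
    "check-reminders",
    service => service.CheckAndSendReminders(),
    Cron.Minutely());
```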
The ReminderService checks every minute for due reminders:
```csharp
public async Task CheckAndSendReminders()
{
    var pendingReminders = await _taskService.GetPendingRemindersAsync();

    foreach (var reminder in pendingReminders)
    {
        // Send notification back to Telex
        _logger.LogInformation(
            $"Reminder: {reminder.Task?.Title} for user {reminder.UserId}");

        await _taskService.MarkReminderAsSentAsync(reminder.Id);
    }
}
```
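GetPendingRemindersAsync itself is a straightforward EF Core query for unsent reminders whose time has passed. Here's a sketch of what it could look like inside the task service, not the exact implementation:

```csharp
// Sketch of the query behind GetPendingRemindersAsync (inside the task service);
// _context is the injected EF Core DbContext, and Include/ToListAsync need
// using Microsoft.EntityFrameworkCore;
public async Task<List<Reminder>> GetPendingRemindersAsync()
{
    var now = DateTime.UtcNow;

    return await _context.Reminders
        .Include(r => r.Task)                            // load the related task for the log message
        .Where(r => !r.IsSent && r.ReminderTime <= now)
        .ToListAsync();
}
```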
Challenges & Solutions
Challenge 1: Natural Language Ambiguity
Problem: Users phrase things differently. "tomorrow at 3" vs "3pm tomorrow" vs "tomorrow 3pm".
Solution: I created multiple regex patterns and prioritized them. The parser tries specific patterns first (like time extraction), then falls back to defaults:
```csharp
// Extract time if present, otherwise default to 5 PM
var timeMatch = Regex.Match(message, @"(\d{1,2})\s*(am|pm)");
var targetDate = /* calculated date */;

// extractedHour comes from the time-parsing logic shown earlier
return timeMatch.Success
    ? targetDate.AddHours(extractedHour)
    : targetDate.AddHours(17); // Default 5 PM
```
Challenge 2: Docker Deployment Configuration
Problem: Railway uses dynamic ports, and ASP.NET Core needs to bind to the correct one.
Solution: Configure the app to read from Railway's PORT environment variable:
```dockerfile
# In Dockerfile
ENV ASPNETCORE_URLS=http://+:8080
```

```csharp
// In Program.cs
var port = Environment.GetEnvironmentVariable("PORT") ?? "5000";
app.Run($"http://0.0.0.0:{port}");
```
Challenge 3: SQLite in Docker
Problem: SQLite database needs write permissions in the container.
Solution: Ensure the database file is created in a writable directory and configure EF Core to create it automatically:
```csharp
using (var scope = app.Services.CreateScope())
{
    // TaskTrackerDbContext is a placeholder name for the app's EF Core context
    var dbContext = scope.ServiceProvider.GetRequiredService<TaskTrackerDbContext>();
    dbContext.Database.EnsureCreated();
}
```
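The "writable directory" half of the fix isn't shown above; one way to handle it is to build the SQLite connection string from a path the container user can write to, with an environment-variable override. The DB_PATH name and /app/data default here are illustrative:

```csharp
// Sketch: build the SQLite connection string from a writable path inside the container
// (the DB_PATH variable name and /app/data default are illustrative)
var dbPath = Environment.GetEnvironmentVariable("DB_PATH") ?? "/app/data/tasks.db";
Directory.CreateDirectory(Path.GetDirectoryName(dbPath)!);

builder.Services.AddDbContext<TaskTrackerDbContext>(options =>
    options.UseSqlite($"Data Source={dbPath}"));
```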
Challenge 4: Error Handling in Chat Context
Problem: Traditional API error responses (404, 500) break the chat experience.
Solution: Always return 200 OK with user-friendly error messages:
```csharp
catch (Exception ex)
{
    _logger.LogError(ex, "Error processing message");
    return Ok(new A2AResponse
    {
        Text = "Sorry, I encountered an error. Please try again."
    });
}
```
This maintains conversational flow even when things go wrong.
Deployment Process
Step 1: Docker Configuration
I created a multi-stage Dockerfile for optimized builds:
```dockerfile
# Build stage
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o /app/publish

# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "TaskTrackerAgent.dll"]
```
This reduces the final image size by 60% compared to a single-stage build.
Step 2: Railway Deployment
Railway made deployment surprisingly simple:
1. Push code to GitHub
2. Connect repository to Railway
3. Railway auto-detects Dockerfile
4. Click deploy

Railway automatically:
- Builds the Docker image
- Deploys to their infrastructure
- Provides a public URL
- Sets up SSL/TLS