MindsEye Hunting Engine — AI-Built, Human-Refined, and Production-Ready Submission for the Xano AI-Powered Backend Challenge

Xano AI-Powered Backend Challenge: Public API Submission

By Peace Thabiwa, SAGEWORKS AI

Overview

The MindsEye Hunting Engine is a production-grade backend system designed to analyze distributed system events, detect failures, group related occurrences, run investigations (hunts), and provide a simple public API for external developers.
This submission falls under the Production-Ready Public API challenge.

The goal of the MVP is simple:
Prove that AI can generate a complete backend foundation, and that a human developer can transform it into a real, reliable, and scalable system.

Public API Endpoint (Live)

Debug Counts Endpoint:

curl -X 'GET' \
'https://x8ki-letl-twmt.n7.xano.io/api:Mx6Nh7jm/debug_mindseye_counts' \
-H 'Content-Type: application/json'

Example Response:

{
  "result1": {
    "source_count": 16,
    "stream_count": 36,
    "events_count": 36,
    "hunts_count": 1,
    "hunt_run_count": 4,
    "event_annotation_count": 49
  },
  "debug_summary": 4
}

This endpoint confirms database health, dataset completeness, hunt readiness, event density, and annotation activity.
It also confirms that records are correctly linked across all tables.
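
For external developers who prefer to call the endpoint from code, here is a minimal TypeScript sketch. The field names mirror the example response above; the health checks themselves are illustrative assumptions, not part of the API contract.

// Minimal health-check sketch for the public debug endpoint (Node 18+ / Deno).
// Field names mirror the example response above; the checks are illustrative.

const ENDPOINT =
  "https://x8ki-letl-twmt.n7.xano.io/api:Mx6Nh7jm/debug_mindseye_counts";

interface DebugCounts {
  source_count: number;
  stream_count: number;
  events_count: number;
  hunts_count: number;
  hunt_run_count: number;
  event_annotation_count: number;
}

interface DebugResponse {
  result1: DebugCounts;
  debug_summary: number;
}

async function checkBackendHealth(): Promise<DebugCounts> {
  const res = await fetch(ENDPOINT, {
    headers: { "Content-Type": "application/json" },
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);

  const body = (await res.json()) as DebugResponse;
  const counts = body.result1;

  // A zero events or hunts count suggests the seed dataset is incomplete.
  if (counts.events_count === 0 || counts.hunts_count === 0) {
    console.warn("Backend reachable, but dataset looks incomplete:", counts);
  } else {
    console.log("Backend healthy:", counts);
  }
  return counts;
}

checkBackendHealth().catch(console.error);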

Screenshots and Diagrams


Screenshot 1 — API Response

Screenshot 2 — Function Stack

Screenshot 3 — Workflow Canvas Diagram

Screenshot 4 — Database Schema

Backend Architecture
Data Model Summary

  1. source

Represents origins of system events. Includes key, name, kind, environment.

  2. stream

Logical channels tied to sources. Includes key, name, source reference, description.

  3. events

Core log dataset. Contains timestamps, severity, JSON payload, source and stream relationships.

  4. hunts

Definitions of investigative queries. Includes time windows, labels, linked event sets, and result metadata.

  5. hunt_run

Execution history for hunts. Tracks run status, matched event counts, and detailed JSON summaries.

  6. event_annotation

Human or AI tagging system for events. Adds note-taking, tagging, and metadata enrichment.
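
For readers who think in types, the same model can be sketched as TypeScript interfaces. Only the fields named in the summary above are taken from the submission; identifiers such as id, created_at, and the concrete example values are assumptions added for illustration.

// Sketch of the six tables as TypeScript interfaces.
// Only fields named in the summary above are grounded in the submission;
// id, created_at, and concrete value examples are illustrative assumptions.

interface Source {            // table: source
  id: number;
  key: string;
  name: string;
  kind: string;               // e.g. "service", "worker"
  environment: string;        // e.g. "prod", "staging"
}

interface Stream {            // table: stream
  id: number;
  key: string;
  name: string;
  source_id: number;          // reference to source
  description: string;
}

interface EventRecord {       // table: events
  id: number;
  created_at: string;         // timestamp
  severity: string;           // e.g. "info" | "warn" | "error"
  payload: Record<string, unknown>; // JSON payload
  source_id: number;
  stream_id: number;
}

interface Hunt {              // table: hunts
  id: number;
  label: string;
  window_start: string;       // investigation time window
  window_end: string;
  event_ids: number[];        // linked event set
  result_meta: Record<string, unknown>;
}

interface HuntRun {           // table: hunt_run
  id: number;
  hunt_id: number;
  status: string;             // run status
  matched_event_count: number;
  summary: Record<string, unknown>; // detailed JSON summary
}

interface EventAnnotation {   // table: event_annotation
  id: number;
  event_id: number;
  note: string;
  tags: string[];
  metadata: Record<string, unknown>;
}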

AI Prompts Used During Development

These prompts formed the foundation of the backend and document the AI-assisted development process.

Prompt A — Database Schema Generation

A detailed prompt produced the full relational schema, including indexes and reference fields.

Prompt B — Seed Data Generation

The AI created a large synthetic dataset (sources, streams, events, hunts, annotations) aligned with the schema.

Prompt C — API Workflow for Debug Counts

This prompt produced the initial version of the debug endpoint, later refined by hand.
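
In plain terms, the refined endpoint counts the records in each table and assembles them into a single response. A rough TypeScript equivalent of that workflow is shown below; countRecords is a hypothetical stand-in for Xano's per-table query-and-count steps, and the meaning of debug_summary is an assumption.

// Rough equivalent of the debug-counts workflow: count each table,
// then assemble one response object shaped like the example above.

// Hypothetical stand-in for Xano's per-table "query and count" step.
async function countRecords(table: string): Promise<number> {
  void table;
  return 0; // placeholder; the real counting happens inside the Xano function stack
}

async function buildDebugCounts() {
  const [
    source_count,
    stream_count,
    events_count,
    hunts_count,
    hunt_run_count,
    event_annotation_count,
  ] = await Promise.all(
    ["source", "stream", "events", "hunts", "hunt_run", "event_annotation"].map(
      countRecords,
    ),
  );

  return {
    result1: {
      source_count,
      stream_count,
      events_count,
      hunts_count,
      hunt_run_count,
      event_annotation_count,
    },
    // In the example response debug_summary equals the hunt_run count;
    // treating it that way here is an assumption.
    debug_summary: hunt_run_count,
  };
}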

These prompts are included to demonstrate the complete human-in-the-loop pipeline.

Human Refinements After AI Generation

AI delivered the initial structure. Human refinement turned it into a working system.

Improvements added:

Corrected foreign key mismatches between tables

Added missing indexes for performance

Cleaned invalid field types (e.g., timestamps vs integers)

Ensured events correctly reference source_id and stream_id

Repaired hunt linkage arrays

Added proper time window validation logic (see the sketch after this list)

Built a structured debug response for public consumption

Ensured no function silently returned empty data

Verified all relational mapping with live runs
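
As referenced in the list above, the time-window validation added during refinement can be sketched roughly as follows. The window_start/window_end field names and the specific rules are illustrative assumptions, not a copy of the production logic.

// Minimal time-window validation sketch for a hunt definition.
// Field names and the specific rules are illustrative assumptions.

interface HuntWindow {
  window_start: string; // ISO-8601 timestamp
  window_end: string;
}

function validateHuntWindow(hunt: HuntWindow): string[] {
  const errors: string[] = [];
  const start = Date.parse(hunt.window_start);
  const end = Date.parse(hunt.window_end);

  if (Number.isNaN(start)) errors.push("window_start is not a valid timestamp");
  if (Number.isNaN(end)) errors.push("window_end is not a valid timestamp");
  if (!Number.isNaN(start) && !Number.isNaN(end) && start >= end) {
    errors.push("window_start must be earlier than window_end");
  }
  return errors;
}

// Example: an inverted window is rejected.
console.log(
  validateHuntWindow({
    window_start: "2024-05-02T00:00:00Z",
    window_end: "2024-05-01T00:00:00Z",
  }),
); // ["window_start must be earlier than window_end"]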

This is the hybrid model Xano intended: AI sparks, human completes.

API Usage for External Developers

Developers can use this API to:

Validate backend health

Retrieve counts across all subsystems

Confirm relational integrity

Build dashboards around event volumes

Extend the hunting engine with specialized endpoints

The design encourages future expansion.

Experience Using Xano

Xano served as a strong foundation for mixing AI generation with human engineering.
The XanoScript extension provided a fast way to generate structure, while the visual function stack made it easy to refine the system without the usual friction of backend plumbing.

The debugging tools were especially helpful when diagnosing issues like empty responses or misaligned joins.
Overall, Xano made the process feel like having an AI-powered junior engineer paired with a senior human operator.

Future Scaling of the MindsEye Hunting Engine

The system is intentionally built for growth.
Future versions can include:

Real-Time Ingestion

Accept streaming data via webhooks or event bus integrations.

Hunt Templates

Reusable investigations such as “error bursts,” “spike detection,” or “severity clustering.”
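
As an illustration of what such a template could look like, here is a hedged sketch of an "error burst" check that buckets error-severity events by minute and flags busy buckets. The field names and thresholds are assumptions, not part of the current API.

// Illustrative "error burst" hunt template: bucket error-severity events
// by minute and flag buckets above a threshold. Thresholds are assumptions.

interface BurstEvent {
  created_at: string; // ISO-8601 timestamp
  severity: string;
}

function findErrorBursts(
  events: BurstEvent[],
  bucketMs = 60_000,
  threshold = 10,
): number[] {
  const buckets = new Map<number, number>();
  for (const e of events) {
    if (e.severity !== "error") continue;
    const bucket = Math.floor(Date.parse(e.created_at) / bucketMs) * bucketMs;
    buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
  }
  // Return the start times (ms since epoch) of buckets exceeding the threshold.
  return [...buckets.entries()]
    .filter(([, count]) => count >= threshold)
    .map(([start]) => start)
    .sort((a, b) => a - b);
}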

ML-Based Classification

Vector embeddings for anomaly grouping, automatic labeling, and event clustering.

Multi-Tenant Support

Allow multiple developers or clients to host their own isolated hunts.

Frontend Dashboard

Timeline visualizations, heatmaps, hunt analytics, and annotation tools.

Auto-Healing

Automated remediation logic triggered by hunt outcomes.

API Documentation Portal

Full developer-facing documentation with usage examples.

These additions position the MindsEye system as a lightweight observability engine suitable for small teams, student projects, and AI-powered systems.
