Over more than two decades of working with infrastructure monitoring systems, I kept running into the same problem.
Every environment runs multiple tools.
Nagios for infrastructure.
Wazuh for SIEM alerts.
Firewall logs from FortiGate.
IDS alerts from Suricata.
Automation pipelines with n8n.
Each system produces logs and events in its own format.
And connecting them is always painful.
Teams usually solve this by writing:
- custom scripts
- parsers
- ad-hoc integrations
- fragile pipelines
Over time, these integrations become difficult to maintain.
So I started thinking about a different approach.
What if AI could generate the connectors automatically?
That idea eventually became Orbit-Core.
The Observability Fragmentation Problem
Most modern infrastructure stacks include multiple monitoring tools.
A typical setup might include:
- infrastructure monitoring
- security monitoring
- network telemetry
- application metrics
- automation platforms
The challenge is that each of these tools emits events with completely different schemas.
Example:
Firewall event
```json
{
  "srcip": "10.0.0.4",
  "dstip": "8.8.8.8",
  "action": "deny",
  "severity": "high"
}
```
SIEM alert
```json
{
  "agent": "server01",
  "rule_id": "5712",
  "level": 10,
  "description": "Possible brute force"
}
```
IDS alert
```json
{
  "event_type": "alert",
  "signature": "ET SCAN Nmap",
  "src_ip": "192.168.1.10"
}
```
Each format is different.
To correlate events, you must first normalize the data.
That is where most integrations fail.
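To make the payoff concrete: once events share one schema, correlation becomes trivial. Here is a minimal sketch (the field names anticipate the canonical model described below, and the helper name is my own) that groups already-normalized events from different tools by source IP:

```python
from collections import defaultdict

# Two events from different tools, already normalized to one schema.
events = [
    {"source": "firewall", "source_ip": "10.0.0.4", "severity": "high"},
    {"source": "suricata", "source_ip": "10.0.0.4", "severity": "medium"},
]

def correlate_by_source_ip(events):
    """Group canonical events by their source_ip field."""
    groups = defaultdict(list)
    for event in events:
        groups[event["source_ip"]].append(event["source"])
    return dict(groups)

print(correlate_by_source_ip(events))
# {'10.0.0.4': ['firewall', 'suricata']}
```

With raw, un-normalized payloads (`srcip` vs `src_ip`), this three-line grouping would already need per-tool special cases.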
The Idea: A Universal Connector Layer
Instead of building a specific parser for every tool, I designed a system that works differently.
Orbit-Core introduces a Connector Specification (Connector DSL).
A connector describes how a source event maps into a canonical model.
Example:
```json
{
  "source": "suricata",
  "event_type": "alert",
  "mappings": {
    "source_ip": "src_ip",
    "destination_ip": "dest_ip",
    "severity": "alert.severity"
  }
}
```
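Applying a spec like this is mostly dictionary plumbing. In the sketch below (function names are my own illustration, not the project's API), each mapping reads canonical field → source path, and dotted paths such as `alert.severity` descend into nested objects:

```python
def resolve_path(event, path):
    """Follow a dotted path like 'alert.severity' into a nested dict."""
    value = event
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value

def apply_connector(spec, raw_event):
    """Map a raw event into the canonical model using a connector spec."""
    canonical = {"source": spec["source"], "event_type": spec["event_type"]}
    for canonical_field, source_path in spec["mappings"].items():
        canonical[canonical_field] = resolve_path(raw_event, source_path)
    return canonical

spec = {
    "source": "suricata",
    "event_type": "alert",
    "mappings": {
        "source_ip": "src_ip",
        "destination_ip": "dest_ip",
        "severity": "alert.severity",
    },
}
raw = {"src_ip": "192.168.1.10", "dest_ip": "8.8.8.8", "alert": {"severity": 2}}
print(apply_connector(spec, raw))
# {'source': 'suricata', 'event_type': 'alert', 'source_ip': '192.168.1.10',
#  'destination_ip': '8.8.8.8', 'severity': 2}
```

Missing fields come back as `None` rather than raising, so one malformed event cannot stall the pipeline.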
Once normalized, all events share the same schema.
This allows:
- cross-tool correlation
- unified dashboards
- standardized analytics
But writing connectors manually is still tedious.
So I introduced AI to generate them automatically.
Using AI to Generate Connectors
The key insight was simple:
If you provide a sample JSON payload, an LLM can infer:
- field mappings
- event type
- severity translation
- metadata extraction
So the process becomes:
1️⃣ Paste a sample event
2️⃣ AI analyzes the structure
3️⃣ AI generates the connector spec
Example input:
```json
{
  "timestamp": "2026-02-10T12:00:00",
  "src_ip": "10.1.1.4",
  "dst_ip": "8.8.8.8",
  "severity": "medium"
}
```
AI output:
```json
{
  "connector": "custom-firewall",
  "event_type": "network",
  "mappings": {
    "source_ip": "src_ip",
    "destination_ip": "dst_ip",
    "severity": "severity"
  }
}
```
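LLM output is not guaranteed to be well-formed, so a generated spec should be checked before it is loaded. A minimal sketch of that gate, keyed to the fields shown in the output above (the check itself is my own illustration):

```python
REQUIRED_KEYS = {"connector", "event_type", "mappings"}

def validate_spec(spec):
    """Return a list of problems; an empty list means the spec is usable."""
    problems = []
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    mappings = spec.get("mappings")
    if not isinstance(mappings, dict) or not mappings:
        problems.append("mappings must be a non-empty object")
    elif not all(isinstance(v, str) for v in mappings.values()):
        problems.append("every mapping value must be a source field path")
    return problems

generated = {
    "connector": "custom-firewall",
    "event_type": "network",
    "mappings": {
        "source_ip": "src_ip",
        "destination_ip": "dst_ip",
        "severity": "severity",
    },
}
assert validate_spec(generated) == []
```

Only specs that pass this gate get registered; anything else is sent back for regeneration.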
In seconds, a connector is ready.
No manual integration work.
Architecture of the System
Orbit-Core is built around three main components.
1. Event Ingestion Layer
Responsible for receiving events from:
- APIs
- webhooks
- message queues
- log streams
Events arrive in raw format.
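Whatever the transport, the ingestion layer does the same job: decode the payload and wrap it in an envelope that records where it came from. A minimal sketch (the envelope fields are my own choice, not the project's wire format):

```python
import json
import time

def ingest(raw_bytes, source, transport):
    """Decode a raw payload and wrap it in an ingestion envelope.

    `source` and `transport` (e.g. 'webhook', 'queue') are supplied by
    whichever listener received the event.
    """
    try:
        payload = json.loads(raw_bytes)
    except json.JSONDecodeError:
        # Keep undecodable payloads instead of dropping them silently.
        payload = {"_raw": raw_bytes.decode("utf-8", errors="replace")}
    return {
        "source": source,
        "transport": transport,
        "received_at": time.time(),
        "payload": payload,
    }

envelope = ingest(b'{"srcip": "10.0.0.4", "action": "deny"}', "fortigate", "webhook")
```

Keeping undecodable events (under a `_raw` key) means nothing is lost before the connector engine even sees it.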
2. Connector Engine
The connector engine transforms raw events using the connector specification.
Steps:
1️⃣ identify source
2️⃣ apply mapping rules
3️⃣ normalize event structure
4️⃣ publish canonical event
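The four steps above can be sketched as a small dispatch loop. In this illustration (the class shape and the registry keyed by source name are my assumptions; dotted-path resolution is omitted for brevity), each numbered step is marked in a comment:

```python
class ConnectorEngine:
    """Look up a connector by source and emit a canonical event."""

    def __init__(self):
        self.registry = {}   # source name -> connector spec
        self.timeline = []   # published canonical events

    def register(self, spec):
        self.registry[spec["source"]] = spec

    def process(self, source, raw_event):
        spec = self.registry.get(source)            # 1. identify source
        if spec is None:
            raise KeyError(f"no connector registered for {source!r}")
        canonical = {
            "source": source,
            "event_type": spec["event_type"],
        }
        for field, path in spec["mappings"].items():
            canonical[field] = raw_event.get(path)  # 2./3. map and normalize
        self.timeline.append(canonical)             # 4. publish canonical event
        return canonical

engine = ConnectorEngine()
engine.register({"source": "suricata", "event_type": "alert",
                 "mappings": {"source_ip": "src_ip"}})
event = engine.process("suricata", {"src_ip": "192.168.1.10"})
```

In a real deployment the `timeline` list would be a message bus or data store, but the flow is the same.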
This creates a standardized event timeline.
3. AI Connector Generator
The AI generator analyzes example payloads and produces connector specifications automatically.
It can:
- infer field mappings
- classify event types
- normalize severity levels
- generate documentation
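At its core, the generator is a prompt around a sample payload plus strict parsing of the reply. A minimal sketch, assuming the model is asked to return a bare JSON object (the prompt wording and the `call_llm` helper are hypothetical; the actual model call is deployment-specific):

```python
import json

PROMPT_TEMPLATE = """You are generating an Orbit-Core connector specification.
Given this sample event, return ONLY a JSON object with the keys
"connector", "event_type", and "mappings" (canonical field -> source field).

Sample event:
{sample}
"""

def build_prompt(sample_event):
    """Render the generation prompt for one example payload."""
    return PROMPT_TEMPLATE.format(sample=json.dumps(sample_event, indent=2))

def parse_spec(llm_reply):
    """Parse the model's reply, which should be a bare JSON object."""
    return json.loads(llm_reply)

prompt = build_prompt({"src_ip": "10.1.1.4", "severity": "medium"})
# The call itself depends on your provider:
# reply = call_llm(prompt)   # hypothetical helper
# spec = parse_spec(reply)
```

Replies that fail `parse_spec` (or the validation gate shown earlier) are simply regenerated.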
The output is a ready-to-use connector.
Why This Approach Is Different
Most observability tools focus on collecting data.
Orbit-Core focuses on understanding it.
Instead of forcing every integration to be built manually, the system lets AI handle the connector generation.
This dramatically reduces integration effort.
Adding a new system can take minutes instead of days.
Real Example
Recently I used the AI generator to create a connector for Suricata IDS.
Process:
1️⃣ captured a JSON alert sample
2️⃣ pasted it into the generator
3️⃣ AI produced the connector spec
Result:
- parsed 400K+ events
- normalized alert severity
- mapped network attributes automatically
No custom parser was required.
Why I Built It as Open Source
I strongly believe observability infrastructure should be open.
Monitoring ecosystems evolve constantly, and new tools appear every year.
By making Orbit-Core open source:
- anyone can create connectors
- integrations can be shared
- the ecosystem grows organically
The goal is to create a universal observability integration layer.
What's Next
The roadmap includes:
- AI-assisted connector validation
- automatic schema discovery
- event correlation engine
- real-time anomaly detection
The vision is simple:
A system where any monitoring tool can connect instantly.
Try It
If you're interested in observability, DevOps, or security monitoring, check out the project:
GitHub
https://github.com/rmfaria/orbit-core
I'm always interested in feedback from the community.
What monitoring systems would you like to see integrated next?