Background
In the last months of 2025 I was deeply impressed by graphs: graph databases, knowledge graphs, graph algorithms, Neo4j, GraphRAG. These technologies open up another valuable way of understanding reality through the lens of the relations between entities. We can visualize relations, train ML models, find patterns, make better predictions, and understand knowledge domains better.
Under this data-science influence I was also learning the DevOps side in an intense five-weekend bootcamp, which taught me to set up, run, monitor, and move software production deployments.
Idea and project overview
The idea of using knowledge graphs and embeddings to match a buyer persona's traits to product use cases, and to match all of that to Google/Meta ad configs to reach the right people, came to me much earlier, and over a couple of months the final idea matured in my head. I started working on Graphmotivo in late November 2025 and devoted much of my free time to it throughout December and January. By coincidence, AI programming tools had just made a significant leap in quality, which allowed me to build the project in such a short time: 1.5 months.
Graphmotivo is a marketing intelligence platform that uses expert AI agentic workflows and knowledge graphs, allowing marketers and business owners to generate buyer personas, use case story journeys, ad targeting inspirations, and explorable identity graphs of simulated buyer personas.
Development Process
My primary tool was Cursor 2.1.49 with Composer for researching and planning the work, sometimes Auto mode for simpler tasks, and Sonnet 4.5 / Opus 4.5 for the hardest parts and for debugging, which often took 2-4 hours to find root causes and make fixes. In total, around 500 million tokens were used.
The development process was iterative: I started with a design doc and an architecture plan, then built every service step by step, starting with the heart of the system: the Neo4j graph database plus agentic workflows that fill it with data based on user prompts. I tested several agentic workflow frameworks: Google ADK (Agent Development Kit), which I dropped because of its constraint of at most 10 steps per workflow, and LangGraph, which I dropped because it was too hard to debug: too many important details of what runs and how were hidden under abstraction. So I ended up building a custom Python workflow of agents that think up personas matching the business description and research those personas' traits: what they do, what problems they have, which websites they visit. Another set of agents extracts nodes and relations from that research and translates them into Cypher queries following a database schema I designed during the planning phase. The Cypher queries are executed and the graphs are created. Further agents take the research agents' data and generate ad targeting configs, a user journey story, and images for the scenario, and return data from Neo4j that can then be used for web-based graph visualization. These workflows run in a separate container and are controlled through an API.
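The node/relation-to-Cypher step can be sketched roughly like this; a minimal illustration with hypothetical field names and schema, not Graphmotivo's actual code. Note that labels and relationship types cannot be Cypher parameters, so they are interpolated from the (schema-validated) extraction, while values go in as parameters:

```python
# Hypothetical sketch: turn extracted nodes/relations into parameterized
# Cypher MERGE statements for a fixed schema.

def node_to_cypher(node: dict) -> tuple[str, dict]:
    """Build a MERGE for one extracted node, keyed on its name."""
    # Label comes from the controlled schema; values are parameters.
    query = f"MERGE (n:{node['label']} {{name: $name}}) SET n += $props"
    return query, {"name": node["name"], "props": node.get("props", {})}

def relation_to_cypher(rel: dict) -> tuple[str, dict]:
    """Build a MERGE for one relation between two already-merged nodes."""
    query = (
        "MATCH (a {name: $src}), (b {name: $dst}) "
        f"MERGE (a)-[:{rel['type']}]->(b)"
    )
    return query, {"src": rel["source"], "dst": rel["target"]}

# Example: a persona node and one of its traits.
q, params = node_to_cypher(
    {"label": "Persona", "name": "Indie Maker", "props": {"age_range": "25-40"}}
)
```

Each `(query, params)` pair would then be handed to the Neo4j driver for execution; using `MERGE` rather than `CREATE` keeps repeated agent runs from duplicating nodes.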
Once the scenario generation process was working in local Docker containers (achieved after 50+ failed five-minute agentic workflow runs during development), I finally had JSON files that could be loaded into the demo UI. The frontend was built in a separate container; for the backend I used Supabase, an open source Firebase alternative that makes backend setup much faster, as it comes with a Postgres database, storage, and authorization out of the box. Then came payment integration with PayPal and sandbox tests, and when everything worked locally, I moved to the next step: production deployment.
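Given how many workflow runs failed before producing usable JSON, a cheap sanity check before a file reaches the demo UI pays off. A minimal sketch, assuming each generated file holds `nodes` and `edges` lists (all field names here are hypothetical, not the project's real schema):

```python
import json

# Keys we assume every node needs before the demo UI can render it.
REQUIRED_NODE_KEYS = {"id", "label", "name"}

def validate_scenario(raw: str) -> list[str]:
    """Return a list of problems found in a generated scenario JSON string."""
    problems = []
    data = json.loads(raw)
    for i, node in enumerate(data.get("nodes", [])):
        missing = REQUIRED_NODE_KEYS - node.keys()
        if missing:
            problems.append(f"node {i} missing {sorted(missing)}")
    # Every edge must point at nodes that actually exist.
    ids = {n["id"] for n in data.get("nodes", []) if "id" in n}
    for j, edge in enumerate(data.get("edges", [])):
        if edge.get("source") not in ids or edge.get("target") not in ids:
            problems.append(f"edge {j} references unknown node")
    return problems
```

A file with an empty problem list is safe to load; anything else goes back into the generation loop instead of breaking the UI.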
Building the production environment followed a central plan in a .md file, preceded by a research and planning phase. The production steps were:
- Pre-deployment preparations (env, code verifications, security hardening)
- VPS, DNS setups, server hardening, firewall, user permissions
- Infrastructure config with Docker compose
- Reverse proxy setup, SSL
- Docker Compose deployment, database migrations, RLS policies, network security hardening
- GCP integrations: Google OAuth authorization and Pub/Sub automation for a spend safety cap: once Gemini API spend crosses a threshold, the billing account is detached from the project
- CI/CD with GitHub Actions
- Testing and debugging the layout and agentic workflows (another 50+ runs failed during this phase before it worked), testing auth, and moving payments from sandbox to production
- Grafana + Loki containers for observability
- Ansible playbooks for future reproducibility
Results
In mid-January 2026, everything was working as intended, and the platform was hosted at: https://graphmotivo.dstepanian-tech.ovh/
It offers three demo persona purchase story explorations and a flexible token-based payment system that lets users request their own custom persona and user journey presentation with graph explorations.
There are improvements that could still be made, e.g. UX optimization and graph database deduplication (e.g. META vs Meta Ads), but for this stage it's good enough. It was fun to build, though it required tight focus and patience to work in Cursor; sometimes it still feels like working with ultra-fast but clueless temps. What also helped along the way: a basic technical understanding of how LLMs work, statistics, AI-augmented programming experience, and best practices for guiding coding models correctly toward the goal, making sure they know what's needed, and using them to identify root causes of errors together.



Top comments (1)
Really impressive build, especially getting agent workflows + Neo4j working reliably — I know how frustrating those repeated failed runs can be. I faced similar issues when connecting AI workflows with structured data, and debugging the “why” takes most of the time, not the coding itself. One suggestion from my experience: keep very strict schema validation before writing to the graph — it saves a lot of cleanup later. Curious to see how this performs with real user data over time.