DEV Community

[Comment from a deleted post]
ujja

That’s a fair point.
For the demo dataset (~50 people and related lifecycle records), operations are mostly event-driven through MCP, so updates happen incrementally rather than as large bulk writes.
Provisioning happens through the seeder, while ongoing lifecycle actions are handled through MCP-triggered automations. If this were pushed to much larger hiring volumes, batching or a small backend layer would likely sit alongside Notion to handle heavier workloads.
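To make the batching idea concrete, here is a minimal sketch of what a small backend layer alongside Notion could look like. Everything here is hypothetical (`BatchWriter`, the `flush` callback, the batch size) — it is not part of EchoHR, just an illustration of collecting incremental lifecycle events and flushing them in batches instead of issuing one write per event:

```python
from typing import Callable

class BatchWriter:
    """Collects lifecycle updates and flushes them in fixed-size batches,
    so a backend layer avoids one-write-per-event against Notion.
    Hypothetical sketch — the flush callback would wrap the actual
    Notion API call in a real deployment."""

    def __init__(self, flush: Callable[[list], None], batch_size: int = 25):
        self.flush = flush            # e.g. a function that writes to Notion
        self.batch_size = batch_size
        self.pending = []

    def add(self, update: dict) -> None:
        # Queue an incremental update; flush once a full batch accumulates.
        self.pending.append(update)
        if len(self.pending) >= self.batch_size:
            self.drain()

    def drain(self) -> None:
        # Flush whatever is pending (also called at shutdown).
        if self.pending:
            self.flush(self.pending)
            self.pending = []

# Usage: seven events with a batch size of 3 yield batches of 3, 3, and 1.
batches = []
writer = BatchWriter(flush=batches.append, batch_size=3)
for i in range(7):
    writer.add({"person": i, "action": "onboard"})
writer.drain()  # flush the remainder
print([len(b) for b in batches])  # → [3, 3, 1]
```

At small demo scale this is unnecessary, which is the point of the comment — it only starts to pay off once hiring volume makes per-event writes expensive.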

ujja

Just to add one more point: the intention isn’t really to use Notion as a high-scale database.
Dedicated SQL and NoSQL databases already handle large-scale data workloads far better. EchoHR is about leveraging Notion for what it does well, i.e. acting as a collaborative operational layer and orchestration surface.
With MCP, it becomes a place where humans, agents, and automations can interact with the same lifecycle state, while heavier processing or integrations can live outside the workspace.
That separation is what makes the model interesting to experiment with.