I’m currently exploring ways to build a fully automated, self-hosted setup where I can describe workflows in natural language and have them created in n8n via MCP (Model Context Protocol). I know there are great Claude integrations out there, but I’d like to avoid Claude and focus on other options – like GPT (ChatGPT Plus), Gemini CLI, or even OpenAI Codex.
My goals:
    • Generate n8n workflows via natural language (e.g. “Every morning at 7, get all new AI tools → save to Airtable → send email → publish blog post”) – see the JSON sketch just after this list
    • Keep the stack low-cost or free (max. 20 CHF/month)
    • Have it running 24/7 and accessible from anywhere (not just on local Wi-Fi)
    • Use tools like ChatGPT Plus, Gemini CLI, and the SuperAssistant MCP (already tested)
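To make the first goal concrete: whatever stack I end up with, the LLM ultimately has to produce (or patch) n8n workflow JSON. A very rough, hand-trimmed sketch of what that example might compile down to is below. The node types exist, but the parameter shapes and typeVersion values here are approximations from memory rather than the exact schema, and credentials still have to be set up in n8n itself.

```json
{
  "name": "Morning AI tools digest",
  "nodes": [
    {
      "name": "Every morning at 7",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1,
      "position": [0, 0],
      "parameters": { "rule": { "interval": [{ "triggerAtHour": 7 }] } }
    },
    {
      "name": "Save to Airtable",
      "type": "n8n-nodes-base.airtable",
      "typeVersion": 1,
      "position": [220, 0],
      "parameters": {}
    }
  ],
  "connections": {
    "Every morning at 7": {
      "main": [[{ "node": "Save to Airtable", "type": "main", "index": 0 }]]
    }
  }
}
```

Part of what I hope the MCP servers listed further down solve is exactly this: giving the model the real node schemas so it doesn't have to guess this structure.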
I’m still unsure about the infrastructure:
    • Oracle Cloud Free Tier, a cheap VPS, or maybe just a dedicated laptop? (a rough Docker Compose sketch for the VPS route follows this list)
    • Has anyone tried this with Hostinger or other shared hosting services?
    • What are the gotchas when running n8n and an MCP server together?
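For what it’s worth, the rough shape I have in mind for the VPS (or Oracle Free Tier) option is the usual Docker setup, kept alive with a restart policy. This is only a sketch under assumptions: n8n.example.com is a placeholder domain, the reverse proxy / TLS part (Caddy, Traefik, etc.) is left out, and the env vars should be double-checked against the current n8n docs.

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n      # official n8n image
    restart: unless-stopped             # keep it running 24/7 across crashes and reboots
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=n8n.example.com               # placeholder domain
      - WEBHOOK_URL=https://n8n.example.com/   # so trigger/webhook URLs are reachable from anywhere
      - GENERIC_TIMEZONE=Europe/Zurich         # so the "every morning at 7" schedule fires at the right hour
    volumes:
      - n8n_data:/home/node/.n8n               # persist workflows and credentials
volumes:
  n8n_data:
```

Whether the MCP server should then live on the same box or run locally next to the LLM client is exactly one of the parts I’m unsure about.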
I’m also comparing GitHub projects like:
    • czlonkowski/n8n-mcp
    • salacoste/mcp-n8n-workflow-builder
    • Others that let you generate or update workflows directly via an LLM (a rough client-side config sketch follows this list)
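From what I’ve seen so far, these all get wired up the same way on the client side: an entry in whatever mcpServers config the LLM tool reads (SuperAssistant, Gemini CLI, etc.), pointing at the n8n instance and an API key created in the n8n UI. A rough sketch is below; the command and the exact env var names differ per project, so treat them as placeholders and check each repo’s README.

```json
{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": ["n8n-mcp"],
      "env": {
        "N8N_API_URL": "https://n8n.example.com",
        "N8N_API_KEY": "<API key from the n8n settings>"
      }
    }
  }
}
```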
If you’ve built something similar, or have strong opinions about the best way to structure such a setup, I’d love to hear your thoughts:
    • What tools are you using?
    • Where do you host it?
    • What’s your favorite combo of LLM + n8n + MCP?
Any advice or shared setups would be incredibly helpful. 🙏

    