I've been working with n8n on and off for a while, but only recently took the time to build the kind of workflows that actually solve meaningful problems.
I now have it running in my homelab, integrated with internal systems, OpenAI, and a self-hosted Ollama instance. The experience has been solid across the board.
What stands out about n8n
- Fully open source and easy to self-host. Docker, DB, SSL — all straightforward. No vendor lock-in, no surprise pricing tiers.
- API-first architecture with excellent control flow and webhook handling. You can wire it into anything that speaks HTTP.
- Dedicated nodes for both OpenAI and Ollama. No need for custom HTTP calls when mixing cloud and local AI.
- Well-designed interface that makes iteration fast and reliable. You feel like you're building, not fighting the tool.
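To illustrate how low the self-hosting barrier is: a single container is enough to get started. A minimal sketch using the official image (the volume name is illustrative; a production setup would add a database and SSL termination in front):

```shell
# Run n8n from the official image, persisting data in a named volume.
# 5678 is n8n's default web/API port.
docker volume create n8n_data
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

From there it's reachable at http://localhost:5678, and reverse-proxying it for SSL is a standard exercise.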
What I'm using it for right now
- Enriching internal data by scanning the web for specific brand signals
- Automatically sorting and rewriting technical specifications into human-readable content
- Choosing between cloud and local LLMs based on context and data sensitivity
That last point is what makes the combination particularly powerful.

Why local + cloud matters
The ability to mix local and cloud-based AI tools within a single workflow — with full control over where data goes and how sensitive it is — is the actual unlock.
If the input is internal customer data, route it to local Ollama. If it's public-facing content where you want GPT-4 quality, route to OpenAI. Same workflow, conditional branching, no manual decision per item.
That's the kind of thing that's awkward with hosted-only platforms, and impossible with most legacy automation tools.
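The branching decision itself is simple. Here's a standalone Python sketch of the idea, not actual n8n node code: the `sensitive` flag and field names are assumptions for illustration, and the Ollama URL is its usual self-hosted default. In n8n this logic would live in an IF node or a Code node.

```python
# Illustrative routing sketch: choose a local or cloud model per item.
# The "sensitive"/"text" fields are assumed, not n8n's actual data model.

OLLAMA_URL = "http://localhost:11434"  # typical local Ollama default
OPENAI_MODEL = "gpt-4"

def choose_route(item: dict) -> str:
    """Return which backend should process this item."""
    if item.get("sensitive"):
        return "ollama"   # internal data stays on the local instance
    return "openai"       # public-facing content goes to the cloud model

items = [
    {"text": "internal customer record", "sensitive": True},
    {"text": "public product blurb", "sensitive": False},
]
print([choose_route(i) for i in items])  # ['ollama', 'openai']
```

The point is that this decision runs once per item inside the workflow, so no one has to eyeball each record and decide where it's allowed to go.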
Shoutout
To the n8n team for building one of the most flexible and developer-friendly automation platforms I've worked with.
If you're automating workflows or experimenting with AI tooling, n8n is leading on pretty much every dimension I care about. I'd be interested to hear what others are using it for.