n8n users self-host workflows to keep sensitive data under control. However, when adding AI nodes, that data gets sent to OpenAI or Anthropic. Privatemode lets you use cloud-grade models in n8n while keeping all workflow data encrypted end-to-end.
Introduction
n8n's powerful AI nodes (AI Agent, Basic LLM Chain, Sentiment Analysis, Summarization Chain) rely on cloud LLM providers like OpenAI. Self-hosted n8n users face a dilemma: use powerful cloud models and lose data control, or stick with limited local models.
Privatemode provides an OpenAI-compatible API backed by state-of-the-art models running inside confidential computing environments. Point n8n's OpenAI Chat Model node at the Privatemode proxy and your workflow data stays encrypted, even during inference.
Your workflow data is encrypted before it leaves your machine, processed inside a hardware-enforced enclave, and never stored or used for training. This is enforced by confidential computing hardware, not just policy.
Benefits
Integrating Privatemode into n8n brings cloud-AI capabilities to your automation workflows while keeping your data confidential. Every prompt, response, and intermediate result is protected by end-to-end encryption and confidential computing. Privatemode never learns from your data.
Since Privatemode adheres to the OpenAI API specification, it works with n8n's OpenAI Chat Model node and every AI node that uses it: AI Agent, Basic LLM Chain, Sentiment Analysis, Summarization Chain, Text Classifier, and Q&A Chain.
Just update the Base URL in your OpenAI Chat Model credentials to point to the Privatemode proxy. No workflow changes, no new nodes, no plugins needed. Existing workflows gain encryption instantly.
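To illustrate what this change amounts to outside of n8n, the following sketch sends a standard OpenAI-style chat completion through a locally running Privatemode proxy. The localhost address and the model name "latest" are assumptions for illustration; substitute your own proxy address and model.

```shell
# The only difference from a stock OpenAI call is the base URL,
# which points at the local Privatemode proxy (assumed port 8080).
BASE_URL="http://localhost:8080/v1"

# Standard OpenAI chat-completions payload; the model name is a placeholder.
PAYLOAD='{"model": "latest", "messages": [{"role": "user", "content": "Hello"}]}'

# The proxy encrypts this request end-to-end before forwarding it.
curl -s "$BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "proxy not reachable on $BASE_URL"
```

This is exactly the request shape n8n's OpenAI Chat Model node produces, which is why swapping the Base URL is the only change required.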
How to get started
If you don't have a Privatemode API key yet, you can generate one for free here.
Then start the Privatemode proxy locally with Docker:

docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>

The proxy verifies the integrity of the Privatemode service using confidential-computing-based remote attestation. It also encrypts all data before sending it and decrypts the data it receives.
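Once the container is running, you can sanity-check the setup with a plain HTTP request before configuring n8n. A minimal sketch, assuming the default port mapping from the command above; the /v1/models endpoint is part of the OpenAI API surface that Privatemode mirrors:

```shell
# Address of the local Privatemode proxy (port 8080 as mapped above)
BASE_URL="http://localhost:8080/v1"

# List the models available through the proxy; prints a notice if the
# container is not up yet instead of failing hard.
curl -s "$BASE_URL/models" || echo "proxy not reachable on $BASE_URL"
```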

Follow the official n8n tutorial to set up an AI agent. Since Privatemode adheres to the OpenAI interface specification, you can use the standard OpenAI Chat Model node as outlined in the tutorial. However, you’ll need to reconfigure the credentials used in Step 5 of the tutorial.
In addition to the AI Agent node, the OpenAI Chat Model can be used in several other powerful AI nodes, including:

- Basic LLM Chain
- Sentiment Analysis
- Summarization Chain
- Text Classifier
- Q&A Chain

To ensure that all data is sent through the Privatemode proxy, update the credentials for your OpenAI Chat Model node: change the Base URL field to the address of your proxy, e.g. http://localhost:8080/v1 for the local Docker setup above.

Set the parameters of the OpenAI Chat Model node to use the correct credentials and select the model you want to use.
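In summary, the credential fields boil down to the following (a sketch; the placeholder API key is an assumption based on the real key already being configured on the proxy via --apiKey — adjust if your proxy setup differs):

```
Base URL: http://<proxy-host>:8080/v1   (the Privatemode proxy started above)
API Key:  any non-empty placeholder    (the real key is set on the proxy via --apiKey)
```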
You’ve successfully integrated Confidential AI into your workflow.
FAQ
Do I need to change my existing workflows?
No structural changes are needed. You reconfigure the OpenAI Chat Model node credentials to point to the Privatemode proxy by changing the Base URL field. All AI nodes that use this model, including AI Agent, Basic LLM Chain, and Summarization Chain, gain end-to-end encryption automatically.
Related integrations

PrivateGPT: Use Privatemode's encrypted AI API with PrivateGPT to chat with your documents without exposing sensitive files to any cloud provider.

GPT4All: Access frontier cloud AI from GPT4All's desktop interface while keeping all conversations encrypted end-to-end.


Want to look for yourself?