Confidential AI workflows with Privatemode and n8n

n8n users self-host workflows to keep sensitive data under control. However, when adding AI nodes, that data gets sent to OpenAI or Anthropic. Privatemode lets you use cloud-grade models in n8n while keeping all workflow data encrypted end-to-end.


Introduction

Add cloud-based AI to n8n workflows without exposing your data

Screenshot of n8n with Privatemode

AI-powered workflows need cloud models

n8n's powerful AI nodes (AI Agent, Basic LLM Chain, Sentiment Analysis, Summarization Chain) rely on cloud LLM providers like OpenAI. Self-hosted n8n users face a dilemma: use powerful cloud models and lose data control, or stick with limited local models.

Privatemode bridges the gap

Privatemode provides an OpenAI-compatible API backed by state-of-the-art models running inside confidential computing environments. Point n8n's OpenAI Chat Model node at the Privatemode proxy and your workflow data stays encrypted, even during inference.
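Because the proxy speaks the standard OpenAI API, any OpenAI-style client can talk to it. As a minimal sketch (the proxy address and model name are assumptions; check your deployment), a chat completion request for the local proxy can be built like this:

```python
import json

# Assumed local proxy address; adjust to wherever your proxy listens.
PROXY_BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model, messages, base_url=PROXY_BASE_URL):
    """Return (url, headers, body) for a standard /chat/completions call."""
    url = f"{base_url}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # Authentication is handled by the proxy, so a placeholder token suffices.
        "Authorization": "Bearer placeholder",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

# Example: the same payload shape the OpenAI Chat Model node sends.
url, headers, body = build_chat_request(
    "latest", [{"role": "user", "content": "Hello"}]
)
```

Sending this request through the proxy is what gives you the encryption: the proxy encrypts the body before it leaves your machine and decrypts the response it receives.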

Encrypted at every step

Your workflow data is encrypted before it leaves your machine, processed inside a hardware-enforced enclave, and never stored or used for training. This is enforced by confidential computing hardware, not just policy.

Benefits

Why use Privatemode AI with n8n?

End-to-end encrypted AI workflows

Integrating Privatemode into n8n brings cloud-AI capabilities to your automation workflows while keeping your data confidential. Every prompt, response, and intermediate result is protected by end-to-end encryption and confidential computing. Privatemode never learns from your data.

Works with all n8n AI nodes

Since Privatemode adheres to the OpenAI API specification, it works with n8n's OpenAI Chat Model node and every AI node that uses it: AI Agent, Basic LLM Chain, Sentiment Analysis, Summarization Chain, Text Classifier, and Q&A Chain.

Simple credential change

Just update the Base URL in your OpenAI Chat Model credentials to point to the Privatemode proxy. No workflow changes, no new nodes, no plugins needed. Existing workflows gain encryption instantly.

How to get started

Set up confidential agents in n8n in just a few steps


Get your API key

If you don't have a Privatemode API key yet, you can generate one for free here.

docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>

Run the Privatemode proxy

The proxy verifies the integrity of the Privatemode service using confidential computing-based remote attestation. The proxy also encrypts all data before sending and decrypts data it receives.
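Before pointing n8n at the proxy, it can help to confirm it is reachable. A small sketch, assuming the proxy listens on localhost:8080 as in the command above and exposes the standard OpenAI `/v1/models` listing route:

```python
import json
import urllib.request

def list_models(base_url="http://localhost:8080/v1"):
    """Query the proxy's model listing (standard OpenAI-style route)."""
    req = urllib.request.Request(f"{base_url}/models")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# Once the proxy container is running, calling list_models() should return
# an OpenAI-style listing, e.g. {"object": "list", "data": [...]}.
```

If the call succeeds, the proxy has already attested the Privatemode service and is ready to relay encrypted traffic from n8n.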

Screenshot of OpenAI chat model configuration.

Set up an AI agent in n8n

Follow the official n8n tutorial to set up an AI agent. Since Privatemode adheres to the OpenAI interface specification, you can use the standard OpenAI Chat Model node as outlined in the tutorial. However, you’ll need to reconfigure the credentials used in Step 5 of the tutorial.

In addition to the AI Agent node, the OpenAI Chat Model can be used in several other powerful AI nodes, including Basic LLM Chain, Sentiment Analysis, Summarization Chain, Text Classifier, and Q&A Chain.

Credentials window in n8n

Adjust credentials

To ensure that all data is sent through the Privatemode proxy, set the following credentials for your OpenAI Chat Model node:

  • API Key: Set this to a placeholder value, such as ‘placeholder’. Authentication is handled by the proxy.
  • Base URL: Set this to the /v1 route of your Privatemode proxy, for example, "http://localhost:8080/v1".
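The two settings above are the only things that change when switching from OpenAI to Privatemode. An illustrative comparison (the key values are placeholders, not real credentials):

```python
# Illustrative only: the same OpenAI-style credentials before and after
# switching to the Privatemode proxy. Only the base URL and the (now
# unused) API key change; workflow logic and node wiring stay untouched.
openai_direct = {
    "base_url": "https://api.openai.com/v1",
    "api_key": "<your-openai-key>",
}
privatemode_proxy = {
    "base_url": "http://localhost:8080/v1",
    "api_key": "placeholder",  # authentication is handled by the proxy
}

changed = {k for k in openai_direct if openai_direct[k] != privatemode_proxy[k]}
```

Everything else in the node configuration carries over unchanged, which is why existing workflows gain encryption without restructuring.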
Screenshot of OpenAI chat model configuration.

Set the parameters of the node

Set the parameters of the OpenAI Chat Model node to use the correct credentials and select the model you want to use.


Done!

You’ve successfully integrated Confidential AI into your workflow.

FAQ

Frequently asked questions about using Privatemode with n8n

Do I need to change my existing workflows to use Privatemode?

No structural changes are needed. You reconfigure the OpenAI Chat Model node credentials to point to the Privatemode proxy by changing the Base URL field. All AI nodes that use this model, including AI Agent, Basic LLM Chain, and Summarization Chain, gain end-to-end encryption automatically.

Integrations

View more

Explore other Privatemode integrations

Integration

PrivateGPT

Use Privatemode's encrypted AI API with PrivateGPT to chat with your documents without exposing sensitive files to any cloud provider.

Read guide
PrivateGPT logo

Integration

GPT4All

Access frontier cloud AI from GPT4All's desktop interface while keeping all conversations encrypted end-to-end.

Read guide
logo of Nomic, provider of GPT4All

Talk to an expert

Have questions about Privatemode? Let's talk!

Want to see for yourself?