Integrating Privatemode AI with n8n

With Privatemode, you can bring state-of-the-art cloud-based AI to your n8n workflows while keeping full control over your data.


Use AI in your workflows without compromising your data privacy

Many users run n8n locally to keep control over their data. These users face a dilemma when they want to add AI to their workflows: use powerful cloud-based AI and give up privacy, or use limited self-hosted AI and retain it.

With Privatemode, users no longer have to choose. Based on confidential computing, Privatemode gives n8n users the strength of cloud-based AI while keeping their data always encrypted and protected—even during processing.

The benefits of using Privatemode AI in n8n

AI nodes with end-to-end encryption

Integrating Privatemode into n8n brings powerful AI capabilities to your workflows while keeping your data fully confidential through end-to-end encryption and confidential computing. Plus, Privatemode is designed to never learn from your data.

State-of-the-art model selection

With Privatemode, you can choose from state-of-the-art LLMs to power your workflows.

Fast & simple integration

You can integrate Privatemode into your workflow in just minutes. The setup is straightforward and fully compatible with all native AI nodes in n8n.

Set up Confidential AI in 5 easy steps

1

Get your API key

If you don’t have a Privatemode API key yet, you can get a free trial key here.

2

Run the Privatemode proxy

docker run -p 8080:8080 \
  ghcr.io/edgelesssys/continuum/continuum-proxy:latest \
  --apiKey <your-api-key>

The proxy verifies the integrity of the Privatemode service using confidential-computing-based remote attestation. It also encrypts all data before sending it and decrypts the responses it receives.
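Once the proxy is running, you can sanity-check it with a short script. The sketch below assumes the proxy from the command above is listening on `localhost:8080` and serves the OpenAI-compatible `/v1/models` route (an assumption based on Privatemode following the OpenAI interface specification); the helper names are ours.

```python
# Minimal sketch: query the local Privatemode proxy's OpenAI-compatible
# model listing. Assumes the proxy is listening on port 8080 (the port
# mapped in the docker command above) and exposes /v1/models.
import json
import urllib.request


def models_url(base: str = "http://localhost:8080") -> str:
    # Build the model-listing endpoint from the proxy's base address.
    return base.rstrip("/") + "/v1/models"


def list_models(base: str = "http://localhost:8080") -> list:
    # Fetch and decode the model list; raises if the proxy is unreachable.
    with urllib.request.urlopen(models_url(base), timeout=10) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]
```

Calling `list_models()` against a live proxy should return the model identifiers you can later select in n8n.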

3

Set up an AI agent in n8n

Screenshot: OpenAI Chat Model parameters

Follow the official n8n tutorial to set up an AI agent. Since Privatemode AI adheres to the OpenAI interface specification, you can use the standard OpenAI Chat Model node as outlined in the tutorial. However, you'll need to reconfigure the credentials used in Step 5 of the tutorial. In addition to the AI Agent node, the OpenAI Chat Model can be used in several other powerful AI nodes.

4

Set the Base URL in the credentials

Adjust the credentials for the OpenAI Chat Model node. This ensures that data is sent not to OpenAI but only to your Privatemode proxy, which establishes an end-to-end encrypted channel to our confidential AI service.

  1. API Key: Set this to a placeholder value, such as "placeholder". Authentication is handled by the proxy.  
  2. Base URL: Set this to the `/v1` route of your Privatemode proxy.
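For intuition, the two credential settings above amount to a standard OpenAI chat-completions call aimed at the proxy's `/v1` route with a dummy key. The sketch below builds such a request without sending it; the function name and model name are illustrative, not part of the n8n setup.

```python
# Sketch of the OpenAI-style request n8n sends once the credentials
# point at the Privatemode proxy. The base URL and "placeholder" key
# mirror the credential settings above; the model name is made up.
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    # Assemble a POST to <base_url>/chat/completions, exactly as an
    # OpenAI-compatible client would.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # The proxy ignores this value; real auth happens via --apiKey.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("http://localhost:8080/v1", "placeholder",
                         "example-model", "Hello")
```

Note that if n8n itself runs in a Docker container, `localhost` inside that container will not reach a proxy running on the host; use the hostname appropriate to your container setup instead.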
Screenshot: n8n custom credentials for Privatemode
5

Set the parameters of the node

Set the parameters of the OpenAI Chat Model node to use the correct credentials and select the model you want to use:

Screenshot: form to select your credentials and model parameters
6

Done!

You've successfully integrated Confidential AI into your workflow.