GPT4All is built around a simple promise: your conversations stay on your device. Privatemode extends that promise to the cloud, letting you access frontier models without sending your data to any provider.

Introduction
GPT4All users choose local AI because they don't want data leaving their device. But local models are constrained by your hardware's memory and compute. Privatemode lets you access state-of-the-art cloud models while maintaining the same data control GPT4All users expect.
Every prompt and response is encrypted end-to-end before leaving your machine. The Privatemode proxy handles encryption and remote attestation locally, so the AI provider never sees your conversations in plaintext.
GPT4All supports remote model providers natively. Add Privatemode as a custom OpenAI-compatible endpoint in settings, and your familiar chat interface works exactly the same, now powered by frontier cloud models with full encryption.
Benefits
Integrating Privatemode into GPT4All brings cloud-AI capabilities to your desktop chat while keeping your conversations confidential. Every message is protected with end-to-end encryption and processed inside confidential-computing environments. Privatemode never learns from your data.
Local models are capped by your device's memory and GPU. With Privatemode, you can access state-of-the-art LLMs that far exceed what local hardware can run, while keeping the same privacy guarantees.
Run the Privatemode proxy via Docker, add it as a remote provider in GPT4All's settings, and start chatting. Your local models stay available alongside Privatemode for offline use.
How to get started
If you don't have a Privatemode API key yet, you can generate one for free here. Then start the Privatemode proxy with Docker:

docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>
The proxy verifies the integrity of the Privatemode service through confidential-computing-based remote attestation. It also encrypts all data before sending it and decrypts the responses it receives.
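Because the proxy handles attestation and encryption transparently, the client side is a plain OpenAI-compatible API call against localhost. Here is a minimal sketch using only Python's standard library; it assumes the proxy is listening on port 8080 as started above, and `<model-name>` is a placeholder for a model Privatemode actually serves:

```python
import json
from urllib import request

# Local Privatemode proxy started via the Docker command above.
PROXY_URL = "http://localhost:8080/v1"

def build_chat_request(prompt: str, model: str = "<model-name>") -> request.Request:
    """Build an OpenAI-compatible chat-completions request for the proxy.

    The proxy attests the Privatemode service and encrypts the payload;
    from the client's point of view this is an ordinary OpenAI-style call.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{PROXY_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(prompt: str) -> str:
    """Send the prompt through the proxy and return the model's reply."""
    with request.urlopen(build_chat_request(prompt)) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

With the proxy running, `chat("Hello!")` returns the decrypted model response. GPT4All's remote-provider integration speaks this same protocol, which is why the setup below works without any plugin.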
Download GPT4All from the official website: https://www.nomic.ai/gpt4all and open the application on your computer.
FAQ
How much do I need to change in my GPT4All setup?
Minimal changes. GPT4All natively supports remote OpenAI-compatible providers: add Privatemode as a remote model endpoint in the settings, and the chat interface works as usual. Your local models remain available for offline use alongside Privatemode.
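In GPT4All's settings, a custom OpenAI-compatible provider roughly needs the following values. Exact field names vary between GPT4All versions, so treat this as an illustrative sketch rather than a verbatim walkthrough; the model name must match one served by Privatemode:

```
API base URL:  http://localhost:8080/v1
API key:       any placeholder value (your real key is already configured in the proxy)
Model name:    <model served by Privatemode>
```

Since the proxy holds your Privatemode API key and performs encryption, GPT4All itself never handles the key or any plaintext traffic to the provider.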