PrivateGPT lets you chat with your own documents using a local RAG pipeline, but the AI model powering it can still send your data to a third-party cloud. Privatemode closes that gap by providing a confidential, encrypted AI backend that PrivateGPT connects to instead.
Introduction
PrivateGPT's document RAG pipeline runs locally, but the AI model answering your questions can still forward your document chunks to a third-party API. For users handling legal contracts, medical records, or internal business data, that is an unacceptable risk.
Privatemode acts as an OpenAI-compatible backend that PrivateGPT connects to. All inference runs inside a confidential computing environment: your document chunks are encrypted before leaving your machine and are never visible to any cloud or service provider.
Local models give you privacy but lag behind in capability. Privatemode gives you access to powerful, state-of-the-art LLMs running under confidential computing, so you no longer have to choose between a capable model and one that keeps your documents private.
Benefits
When PrivateGPT sends a document chunk to Privatemode, it passes through the local Privatemode proxy, which encrypts it before it leaves your machine. The model processes it inside a confidential computing enclave, and Privatemode is designed to never retain or learn from your data.
PrivateGPT supports custom OpenAI-compatible backends through its openailike LLM mode. Switching to Privatemode requires only a settings YAML profile: set api_base to the Privatemode proxy URL, supply your API key, and choose a model.
The Privatemode proxy performs remote attestation at startup, cryptographically verifying that the service's backend environment is genuine and unmodified before any data is sent. You can verify the AI endpoint instead of merely trusting it.
How to get started
If you don't have a Privatemode API key yet, you can generate one for free on the Privatemode website.
docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>
The proxy runs locally and handles two things: it uses remote attestation to cryptographically verify the Privatemode enclave is genuine, and it encrypts all data before it leaves your machine. Start it with your API key using Docker or the native binary.
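Once the proxy is up, you can sanity-check it before wiring up PrivateGPT. The sketch below (stdlib only) queries the OpenAI-compatible /v1/models route through the local proxy; the proxy URL matches the docker command above, and the assumption that the proxy exposes /v1/models and injects the API key you passed at startup is ours, not a guarantee from the Privatemode docs.

```python
import json
import urllib.request


def build_models_request(base_url: str = "http://localhost:8080/v1") -> urllib.request.Request:
    """Build a GET request listing the models the proxy exposes (OpenAI-compatible /models)."""
    # No Authorization header: we assume the proxy attaches the apiKey given at startup.
    return urllib.request.Request(base_url.rstrip("/") + "/models")


if __name__ == "__main__":
    # Requires the proxy container from the docker command above to be running.
    with urllib.request.urlopen(build_models_request()) as resp:
        for model in json.loads(resp.read())["data"]:
            print(model["id"])
```

If this prints a model list, the proxy is reachable and attested, and PrivateGPT can use the same base URL.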
git clone https://github.com/zylon-ai/private-gpt
cd private-gpt
Clone the official PrivateGPT repository from GitHub and change into the project directory.
# macOS:
brew install pyenv
# Windows
Invoke-WebRequest -UseBasicParsing -Uri https://pyenv.run | Invoke-Expression
To install pyenv on macOS, use Homebrew to download and configure the Python version manager. On Windows, run the PowerShell installation command for pyenv-win.
pyenv install 3.11
pyenv local 3.11
Install Python 3.11 using pyenv. PrivateGPT requires exactly Python 3.11; other versions are not supported.
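Because the 3.11 requirement is strict, it can save time to confirm which interpreter is active before installing dependencies. A minimal stdlib check (the helper name is ours):

```python
import sys


def is_supported(version_info=sys.version_info) -> bool:
    """PrivateGPT pins Python 3.11; reject any other minor version."""
    return tuple(version_info[:2]) == (3, 11)


if __name__ == "__main__":
    if not is_supported():
        raise SystemExit(f"Python 3.11 required, found {sys.version.split()[0]}")
    print("Python version OK")
```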
# Linux, macOS, Windows(WSL)
curl -sSL https://install.python-poetry.org | python3 -
# Windows (Powershell)
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | py -
Install Poetry to manage PrivateGPT's Python dependencies. The official installer is available at install.python-poetry.org and can be run directly or downloaded and executed locally.
# macOS (Using Homebrew)
brew install make
# Windows (Using Chocolatey)
choco install make
PrivateGPT uses Makefile targets to run setup and launch commands. Install make for your operating system before continuing.
cat > settings-privatemode.yaml <<'YAML'
server:
  env_name: ${APP_ENV:privatemode}
llm:
  mode: openailike
embedding:
  mode: openai
  ingest_mode: simple
openai:
  api_base: http://localhost:8080/v1
  api_key: your-api-key
  model: openai/gpt-oss-120b
  embedding_model: text-embedding-3-small
YAML
Create a settings-privatemode.yaml file in the project root. Set the LLM mode to openailike, point api_base at the Privatemode proxy (http://localhost:8080/v1), add your API key, and choose a model such as gpt-oss-120b. Consult the official PrivateGPT documentation for the full list of supported configuration keys.
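A typo in the profile (a missing key or a wrong mode) only surfaces once PrivateGPT starts. As a quick guard you can scan the file for the settings this guide relies on before launching. This is a rough substring check, not a YAML parser, and the list of required snippets is our assumption based on the profile above:

```python
REQUIRED_SNIPPETS = (
    "mode: openailike",                     # LLM served through an OpenAI-compatible API
    "api_base: http://localhost:8080/v1",   # the local Privatemode proxy
    "api_key:",
    "model:",
)


def missing_settings(settings_text: str) -> list:
    """Return the required snippets not found in the profile text."""
    return [s for s in REQUIRED_SNIPPETS if s not in settings_text]


if __name__ == "__main__":
    with open("settings-privatemode.yaml") as f:
        problems = missing_settings(f.read())
    if problems:
        raise SystemExit(f"settings-privatemode.yaml is missing: {problems}")
    print("profile looks complete")
```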
poetry install --extras "ui llms-openai-like embeddings-openai vector-stores-qdrant"
Install the PrivateGPT modules needed for an OpenAI-compatible backend: the Gradio UI, the openailike LLM provider, OpenAI embeddings, and the Qdrant vector store for document indexing.
export OPENAI_API_KEY="your-api-key"
export PGPT_PROFILES="privatemode"
make run
Set the PGPT_PROFILES environment variable to privatemode, export your API key, then launch PrivateGPT with make run. PrivateGPT will load your Privatemode settings profile and connect to the local proxy.
Open http://localhost:8001 in your browser. You can now upload documents, ask questions about them, and get answers from a state-of-the-art model, with every document chunk encrypted end-to-end through Privatemode.
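Besides the browser UI, PrivateGPT also serves an HTTP API on the same port, so the setup can be scripted. Assuming the OpenAI-style chat route from the PrivateGPT API reference (/v1/chat/completions with a use_context flag that grounds answers in your ingested documents), a query looks roughly like this:

```python
import json
import urllib.request

PGPT_URL = "http://localhost:8001/v1/chat/completions"  # PrivateGPT's own API


def build_rag_request(question: str) -> urllib.request.Request:
    """Build a request asking PrivateGPT a question about the ingested documents."""
    body = json.dumps({
        "messages": [{"role": "user", "content": question}],
        "use_context": True,      # answer from ingested documents (RAG)
        "include_sources": True,  # return which document chunks were used
    }).encode("utf-8")
    return urllib.request.Request(
        PGPT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    # Requires PrivateGPT (make run) to be up with documents already ingested.
    with urllib.request.urlopen(build_rag_request("Summarize the uploaded document.")) as resp:
        answer = json.loads(resp.read())
        print(answer["choices"][0]["message"]["content"])
```

Every such request still flows through the Privatemode proxy on the LLM side, so scripted queries get the same end-to-end encryption as the UI.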
FAQ
Do I need to modify PrivateGPT's code to use Privatemode?
No. PrivateGPT supports custom OpenAI-compatible backends through its openailike LLM mode. Switching to Privatemode requires only a new settings YAML profile: point api_base at the Privatemode proxy and set your API key. Your existing document ingestion pipeline, RAG configuration, and UI remain unchanged.