Tabby is a self-hosted AI coding assistant, but local models can't match cloud performance. Privatemode gives Tabby access to state-of-the-art cloud models while keeping your code fully encrypted.

Introduction
Tabby users self-host to keep code on-premises, but local models can't match the quality of cloud LLMs. Adding a cloud backend means sending your source code to third-party providers, undermining the privacy benefits of self-hosting.
Privatemode bridges the gap. It provides an OpenAI-compatible endpoint backed by state-of-the-art models running inside confidential computing enclaves, so Tabby gets cloud-grade AI without exposing your code.
Your code context is encrypted before it leaves your machine, processed inside a hardware-enforced enclave, and never stored or used for training. This is enforced by confidential computing hardware, not just policy.
Benefits
Integrating Privatemode into Tabby brings cloud-AI capabilities to your self-hosted coding assistant while keeping your code confidential through end-to-end encryption and confidential computing. Privatemode is designed to never learn from your data.
With Privatemode, you can choose from state-of-the-art LLMs to power Tabby's chat and code completion, all running confidentially.
Tabby natively supports OpenAI-compatible HTTP model backends via config.toml. Point it at the Privatemode proxy and both chat and completion features work instantly.
How to get started
If you don't have a Privatemode API key yet, you can generate one for free here. Then start the Privatemode proxy locally with Docker:

docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>
The proxy verifies the integrity of the Privatemode service using confidential computing-based remote attestation. The proxy also encrypts all data before sending and decrypts data it receives.
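Before wiring up Tabby, you can confirm the proxy is reachable. The commands below are a sketch assuming the proxy runs on localhost:8080 and exposes the standard OpenAI-compatible routes; the model name mirrors the one used in the Tabby config later in this guide:

```shell
# List the models available through the Privatemode proxy
curl http://localhost:8080/v1/models

# Send a minimal chat completion through the encrypted channel
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder-30b-a3b", "messages": [{"role": "user", "content": "Say hello"}]}'
```

If both calls return JSON, attestation succeeded and the proxy is ready to serve Tabby.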
To install Tabby on macOS using Homebrew, run the command below. It downloads and sets up the Tabby CLI so you can run and manage the Tabby server.

brew install tabbyml/tabby/tabby
Create the configuration directory and generate the config file that tells Tabby how to connect to Privatemode:

mkdir -p ~/.tabby
cat > ~/.tabby/config.toml << 'EOF'
[model.chat.http]
kind = "openai/chat"
model_name = "qwen3-coder-30b-a3b"
api_endpoint = "http://localhost:8080/v1"
api_key = "dummy"
request_timeout_secs = 120
max_input_tokens = 32768

[model.embedding.http]
kind = "openai/embedding"
model_name = "text-embedding-3-small"
api_endpoint = "http://localhost:8080/v1"
api_key = "dummy"

[server]
host = "0.0.0.0"
port = 8081
completion_timeout = 30
EOF
To start Tabby with the configuration you just created, launch the Tabby server:

tabby serve --port 8081
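Once the server is running, you can check that it responds. This is a sketch assuming Tabby's default health endpoint and the port configured above; consult your Tabby version's API docs if the path differs:

```shell
# Query Tabby's health endpoint (Tabby listens on port 8081 as configured above)
curl http://localhost:8081/v1/health

# Or open the web UI in a browser: http://localhost:8081
```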
You've successfully set up Tabby with Privatemode.
FAQ
How much of my Tabby setup do I need to change?

Minimal changes. Tabby natively supports OpenAI-compatible HTTP model backends. You add a few lines to your config.toml pointing the chat model to the Privatemode proxy, and all requests are automatically encrypted and routed through confidential infrastructure.