OpenCode is an open-source AI coding agent for the terminal. With Privatemode, every prompt and code snippet is encrypted end-to-end through confidential computing, keeping your codebase fully confidential.
Introduction
OpenCode is an open-source agentic AI coding tool that runs in your terminal. It connects to cloud LLM providers to power code generation, editing, and reasoning across your project. For teams working with proprietary code or regulated applications, sending source code to a cloud provider means losing control of sensitive IP.
Privatemode provides an OpenAI-compatible API backed by state-of-the-art models running inside confidential computing environments. Configure OpenCode with a custom provider pointing to the Privatemode proxy, and your source code, prompts, and AI responses are encrypted end-to-end. Not even Privatemode can see your data.
The Privatemode proxy runs locally on your machine and encrypts all data before it leaves. On the server side, inference runs inside hardware-isolated confidential computing environments (AMD SEV / Intel TDX). The proxy verifies server integrity through remote attestation before every session. Your code is never stored and never used for model training.
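Because the proxy exposes a standard OpenAI-compatible endpoint locally, OpenCode talks to it exactly as it would to any cloud provider, while attestation and encryption happen transparently. As a sketch, a chat-completions request flowing through the proxy (sent to something like POST http://localhost:8080/v1/chat/completions — the port and model name are assumptions; check the Privatemode docs for your setup) looks like this:

```json
{
  "model": "gpt-oss-120b",
  "messages": [
    { "role": "system", "content": "You are a coding assistant." },
    { "role": "user", "content": "Refactor this function to be thread-safe." }
  ]
}
```

The proxy encrypts this payload before it leaves your machine and decrypts the response locally, so the wire format stays fully OpenAI-compatible from OpenCode's point of view.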
Benefits
Every prompt you type and every file OpenCode reads is encrypted by the local Privatemode proxy before leaving your machine. Responses are decrypted only on your device. The models run inside confidential computing environments protected by AMD SEV and Intel TDX hardware. Your source code, architecture decisions, and business logic remain confidential throughout the entire session.
OpenCode is open source and connects to Privatemode through the standard OpenAI-compatible API. Configure a custom provider in your opencode.json file and start coding with gpt-oss-120b, a 120-billion-parameter model with 128k context and strong code generation capabilities.
You only need to add the Privatemode proxy as an AI provider in your opencode.json. Your existing OpenCode setup gains end-to-end encryption in minutes.
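As a sketch, a custom provider entry in opencode.json could look like the following. The base URL assumes the proxy's default local port, and the field names follow OpenCode's custom-provider schema; verify both against the current OpenCode and Privatemode documentation:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "privatemode": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Privatemode",
      "options": {
        "baseURL": "http://localhost:8080/v1"
      },
      "models": {
        "gpt-oss-120b": {
          "name": "gpt-oss-120b"
        }
      }
    }
  }
}
```

With this in place, OpenCode routes every request through the local proxy instead of a cloud provider's public endpoint.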
Follow the step-by-step guide in the documentation to configure OpenCode with Privatemode as your inference provider.
FAQ
Which models are available?
Privatemode currently offers gpt-oss-120b, a 120-billion-parameter model with 128k context and strong code generation capabilities. It runs inside confidential computing environments with full encryption. Configure it as your model in opencode.json with the provider set to privatemode.
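For example, selecting it as the default model in opencode.json might look like this (assuming you have registered a custom provider named privatemode; the provider/model format should be checked against the OpenCode docs):

```json
{
  "model": "privatemode/gpt-oss-120b"
}
```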