ChatCLI is extensively configurable through environment variables. Create a `.env` file in the project root or in your `HOME` directory.

## Priority Order

1. **Command-line flags**: e.g. `--provider`, `--model` (highest priority)
2. **System environment variables**: e.g. `export LLM_PROVIDER=OPENAI`
3. **Variables in the `.env` file**: e.g. `LLM_PROVIDER=OPENAI`
4. **Default values**: ChatCLI internal defaults (lowest priority)
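
For example, a command-line flag always wins over an environment variable, which in turn wins over the `.env` file (the session below is illustrative):

```shell
# .env in the project root
LLM_PROVIDER=OPENAI

# The shell environment overrides the .env value...
export LLM_PROVIDER=CLAUDEAI

# ...and a flag overrides both: this session uses GOOGLEAI
chatcli --provider GOOGLEAI
```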

## General Configuration

| Variable | Description | Default |
| --- | --- | --- |
| `ENV` | Sets the environment. With `dev`, logs are shown in the terminal and saved to the app log file; with `prod`, they go only to the log file. Valid values: `dev`, `prod`. | `dev` |
| `LLM_PROVIDER` | Sets the default AI provider. Valid values: `OPENAI`, `CLAUDEAI`, `GOOGLEAI`, `XAI`, `OLLAMA`, `STACKSPOT`, `COPILOT`. | `OPENAI` |
| `CHATCLI_LANG` | Sets the interface language. Values: `pt-BR`, `en-US`. If unset, ChatCLI attempts to detect the system language. | `en-US` |
| `LOG_LEVEL` | Log level. Options: `debug`, `info`, `warn`, `error`. | `info` |
| `LOG_FILE` | Path to the log file. | `$HOME/.chatcli/app.log` |
| `LOG_MAX_SIZE` | Maximum log file size before rotation. Accepts values such as `100MB` or `50KB`. | `100MB` |
| `HISTORY_MAX_SIZE` | Maximum history file (`.chatcli_history`) size before rotation. | `100MB` |
| `HISTORY_FILE` | Custom path for the history file (supports `~`). By default the history is created in the directory where `chatcli` was executed. | `.chatcli_history` |
| `CHATCLI_DOTENV` | Custom path to your `.env` file. | `.env` |
| `CHATCLI_IGNORE` | Path to an ignore file (e.g., `.chatignore`). When set, it takes priority over the project and global ignore files. | `""` |
| `CHATCLI_CODER_UI` | `/coder` mode UI style: `full` or `minimal`. | `full` |
| `CHATCLI_CODER_BANNER` | Display the `/coder` quick cheat sheet when entering the session (`true`/`false`). | `true` |
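
As a concrete starting point, a `.env` covering the general settings above might look like this (all values are examples, not recommendations):

```shell
ENV=dev
LLM_PROVIDER=OPENAI
CHATCLI_LANG=en-US
LOG_LEVEL=info
LOG_FILE=$HOME/.chatcli/app.log
LOG_MAX_SIZE=100MB
HISTORY_MAX_SIZE=100MB
CHATCLI_CODER_UI=full
CHATCLI_CODER_BANNER=true
```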

## OAuth Authentication

In addition to traditional API keys, ChatCLI supports OAuth authentication for OpenAI, Anthropic, and GitHub Copilot. With OAuth, you can use your existing plan (ChatGPT Plus, Codex, Claude Pro, GitHub Copilot) without generating API keys.

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_AUTH_DIR` | Directory where OAuth credentials are stored. | `~/.chatcli/` |
| `CHATCLI_OPENAI_CLIENT_ID` | Overrides the OpenAI OAuth client ID. | (internal) |
| `CHATCLI_COPILOT_CLIENT_ID` | Overrides the GitHub Copilot OAuth client ID. | (internal) |

Credentials are stored with AES-256-GCM encryption in `~/.chatcli/auth-profiles.json`. The encryption key is generated automatically and saved in `~/.chatcli/.auth-key` (permission `0600`).

Use `/auth login openai-codex`, `/auth login anthropic`, or `/auth login github-copilot` in interactive mode to start the OAuth flow. See the full OAuth documentation for more details.
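
If you want to confirm where credentials landed after an OAuth login, a quick check (paths taken from the description above) is:

```shell
# Encrypted credential store and its key file
ls -l ~/.chatcli/auth-profiles.json ~/.chatcli/.auth-key
# The key file should show -rw------- (mode 0600)
```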

## Provider Configuration

### OpenAI

| Variable | Description | Required? |
| --- | --- | --- |
| `OPENAI_API_KEY` | Your secret OpenAI API key. Alternative: use `/auth login openai-codex` for OAuth. | Yes* |
| `OPENAI_MODEL` | The model to use. E.g.: `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`. | No |
| `OPENAI_ASSISTANT_MODEL` | The model to use specifically for the Assistants API. | No |
| `OPENAI_USE_RESPONSES` | Set to `true` to use the `v1/responses` API instead of `v1/chat/completions`. | No |
| `OPENAI_MAX_TOKENS` | Maximum tokens to use in the session (model-dependent). | No |
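
A typical OpenAI block in `.env` could then be sketched as follows (the key and model values are placeholders):

```shell
OPENAI_API_KEY=sk-your-key-here   # omit if you authenticate via /auth login openai-codex
OPENAI_MODEL=gpt-4o-mini
OPENAI_USE_RESPONSES=true          # opt in to the v1/responses API
OPENAI_MAX_TOKENS=4096             # example value; limits depend on the model
```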

### Anthropic (Claude)

| Variable | Description | Required? |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Your secret Anthropic API key. Alternative: use `/auth login anthropic` for OAuth. | Yes* |
| `ANTHROPIC_MODEL` | The model to use. E.g.: `claude-sonnet-4-5`, `claude-opus-4-6`, `claude-sonnet-4`. | No |
| `ANTHROPIC_API_VERSION` | The Anthropic API version sent in request headers. | No |
| `ANTHROPIC_MAX_TOKENS` | Maximum tokens to use in the session (model-dependent). | No |

### Google (Gemini)

| Variable | Description | Required? |
| --- | --- | --- |
| `GOOGLEAI_API_KEY` | Your Google AI Studio API key. | Yes |
| `GOOGLEAI_MODEL` | The model to use. E.g.: `gemini-2.5-pro`, `gemini-2.5-flash`. | No |
| `GOOGLEAI_MAX_TOKENS` | Maximum tokens to use in the session (model-dependent). | No |

### xAI (Grok)

| Variable | Description | Required? |
| --- | --- | --- |
| `XAI_API_KEY` | Your secret xAI API key. | Yes |
| `XAI_MODEL` | The model to use. E.g.: `grok-4-fast`, `grok-3`. | No |
| `XAI_MAX_TOKENS` | Maximum tokens to use in the session (model-dependent). | No |

### Ollama (Local)

| Variable | Description | Required? |
| --- | --- | --- |
| `OLLAMA_ENABLED` | Set to `true` to enable the Ollama provider. | Yes |
| `OLLAMA_BASE_URL` | Base URL of your local Ollama server. | No |
| `OLLAMA_MODEL` | Name of the local model to use (e.g., `llama3`, `codellama`). | No |
| `OLLAMA_FILTER_THINKING` | Filters intermediate reasoning out of responses (e.g., for models like Qwen3). Defaults to `true`. | No |
| `OLLAMA_MAX_TOKENS` | Maximum tokens for the Ollama provider. | No |
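
Since Ollama is opt-in, a local setup needs at least the enable switch. A minimal sketch, assuming Ollama's usual local address:

```shell
OLLAMA_ENABLED=true
OLLAMA_BASE_URL=http://localhost:11434   # Ollama's conventional default port
OLLAMA_MODEL=llama3
OLLAMA_FILTER_THINKING=true
```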

### StackSpot

| Variable | Description | Required? |
| --- | --- | --- |
| `CLIENT_ID` | StackSpot client ID credential. | Yes |
| `CLIENT_KEY` | StackSpot client key credential. | Yes |
| `STACKSPOT_REALM` | Your organization's realm (tenant) on StackSpot. | Yes |
| `STACKSPOT_AGENT_ID` | The ID of the specific agent to use. | Yes |

### GitHub Copilot

| Variable | Description | Required? |
| --- | --- | --- |
| `GITHUB_COPILOT_TOKEN` | GitHub Copilot OAuth token. Alternative: use `/auth login github-copilot` for the Device Flow. | Yes* |
| `COPILOT_MODEL` | The model to use. E.g.: `gpt-4o`, `claude-sonnet-4`, `gemini-2.0-flash`. | No |
| `COPILOT_MAX_TOKENS` | Maximum tokens for the response. | No |
| `COPILOT_API_BASE_URL` | Copilot API base URL (for enterprise environments). | No |

\* For OpenAI, Anthropic, and GitHub Copilot, the API key is required only if you are not using OAuth authentication (`/auth login`). Both methods can coexist.

## Agent Mode Configuration

| Variable | Description |
| --- | --- |
| `CHATCLI_AGENT_ALLOW_SUDO` | Set to `true` to allow the agent to suggest and execute commands with `sudo`. Use with extreme caution. |
| `CHATCLI_AGENT_DENYLIST` | List of regex patterns (separated by `;`) that block additional commands in agent mode. |
| `CHATCLI_AGENT_CMD_TIMEOUT` | Timeout for a single command executed by the agent (default: `10m`, maximum: `1h`). |
| `CHATCLI_AGENT_PLUGIN_MAX_TURNS` | Maximum number of agent turns in `/agent` and `/coder` mode (default: `50`, maximum: `200`). |
| `CHATCLI_AGENT_PLUGIN_TIMEOUT` | Total agent plugin timeout (default: `15m`). |
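
The denylist takes regex patterns separated by `;`, so a hardened agent setup might be sketched as follows (the patterns are illustrative, not a complete blocklist):

```shell
CHATCLI_AGENT_ALLOW_SUDO=false
# Block destructive patterns in addition to the built-in rules
CHATCLI_AGENT_DENYLIST='rm\s+-rf\s+/;mkfs.*;dd\s+if='
CHATCLI_AGENT_CMD_TIMEOUT=10m
CHATCLI_AGENT_PLUGIN_MAX_TURNS=50
```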

## Multi-Agent (Parallel Orchestration)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_AGENT_PARALLEL_MODE` | Enables multi-agent mode with parallel orchestration: the orchestrator LLM dispatches specialist agents in parallel. | `false` |
| `CHATCLI_AGENT_MAX_WORKERS` | Maximum number of workers (goroutines) executing agents simultaneously. | `4` |
| `CHATCLI_AGENT_WORKER_MAX_TURNS` | Maximum turns in each worker agent's mini ReAct loop. | `10` |
| `CHATCLI_AGENT_WORKER_TIMEOUT` | Timeout per individual worker agent. Accepts Go durations (e.g., `30s`, `2m`, `10m`). | `5m` |

For complete details on the multi-agent system, see the Multi-Agent Orchestration documentation.

## Server Mode Configuration (`chatcli server`)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_SERVER_PORT` | gRPC server port. | `50051` |
| `CHATCLI_SERVER_TOKEN` | Server authentication token. Empty = no authentication. | `""` |
| `CHATCLI_SERVER_TLS_CERT` | Path to the server TLS certificate. | `""` |
| `CHATCLI_SERVER_TLS_KEY` | Path to the server TLS key. | `""` |
| `CHATCLI_GRPC_REFLECTION` | Enables gRPC reflection for debugging. Keep disabled in production. | `false` |
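
Putting the server variables together, a TLS-enabled deployment could be configured roughly like this (the token and certificate paths are placeholders):

```shell
CHATCLI_SERVER_PORT=50051
CHATCLI_SERVER_TOKEN=some-long-random-token   # empty disables authentication
CHATCLI_SERVER_TLS_CERT=/etc/chatcli/server.crt
CHATCLI_SERVER_TLS_KEY=/etc/chatcli/server.key
CHATCLI_GRPC_REFLECTION=false                 # keep disabled in production
```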

## Provider Fallback

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_FALLBACK_PROVIDERS` | Comma-separated list of providers for automatic failover. E.g.: `OPENAI,CLAUDEAI,GOOGLEAI`. | `""` |
| `CHATCLI_FALLBACK_MODEL_<PROVIDER>` | Model to use for a specific provider in the chain. E.g.: `CHATCLI_FALLBACK_MODEL_CLAUDEAI=claude-sonnet-4-20250514`. | (provider's default model) |
| `CHATCLI_FALLBACK_MAX_RETRIES` | Retries per provider before advancing to the next one in the chain. | `2` |
| `CHATCLI_FALLBACK_COOLDOWN_BASE` | Base cooldown duration after a provider failure. | `30s` |
| `CHATCLI_FALLBACK_COOLDOWN_MAX` | Maximum cooldown duration (exponential backoff). | `5m` |

For complete details, see the Provider Fallback documentation.
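
A fallback chain combining these variables might be sketched as follows; note how the `CHATCLI_FALLBACK_MODEL_<PROVIDER>` suffix matches a provider name from the chain:

```shell
CHATCLI_FALLBACK_PROVIDERS=OPENAI,CLAUDEAI,GOOGLEAI
CHATCLI_FALLBACK_MODEL_CLAUDEAI=claude-sonnet-4-20250514
CHATCLI_FALLBACK_MAX_RETRIES=2
CHATCLI_FALLBACK_COOLDOWN_BASE=30s
CHATCLI_FALLBACK_COOLDOWN_MAX=5m
```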

## MCP (Model Context Protocol)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_MCP_ENABLED` | Enables the MCP server manager. | `false` |
| `CHATCLI_MCP_CONFIG` | Path to the MCP server configuration JSON file. | `~/.chatcli/mcp_servers.json` |

For complete details, see the MCP documentation.

## Bootstrap and Memory

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_BOOTSTRAP_ENABLED` | Enables loading bootstrap files (`SOUL.md`, `USER.md`, etc.) into the system prompt. | `false` |
| `CHATCLI_BOOTSTRAP_DIR` | Directory containing the bootstrap files. | `~/.chatcli/bootstrap/` |
| `CHATCLI_MEMORY_ENABLED` | Enables the persistent memory system (`MEMORY.md` + daily notes). | `false` |
| `CHATCLI_SAFETY_ENABLED` | Enables configurable safety rules in the agent shell. | `false` |

For complete details, see the Bootstrap and Memory documentation.

## Skill Registry (Multi-Registry)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_REGISTRY_URLS` | Additional registry URLs, separated by commas. Each URL is added as an enabled custom registry. | `""` |
| `CHATCLI_REGISTRY_DISABLE` | Registry names to disable, separated by commas. E.g.: `clawhub,chatcli`. | `""` |
| `CHATCLI_SKILL_INSTALL_DIR` | Directory where skills installed via a registry are saved. | `~/.chatcli/skills` |

The registry system is configured via the `~/.chatcli/registries.yaml` file, which is created automatically with the default registries `chatcli` and `clawhub`. The variables above act as overrides.

For complete details, see the Skill Registry documentation.

## Security and Control

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_DISABLE_VERSION_CHECK` | Disables the automatic version check on startup. Useful for air-gapped environments or CI/CD. | `false` |
| `CHATCLI_GRPC_REFLECTION` | Enables gRPC server reflection (exposes the service schema). | `false` |

For complete details on security, see the Security and Hardening documentation.

## Remote Client Configuration (`chatcli connect`)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_REMOTE_ADDR` | Remote server address (`host:port`). | `""` |
| `CHATCLI_REMOTE_TOKEN` | Authentication token used to connect to the server. | `""` |
| `CHATCLI_CLIENT_API_KEY` | Your own API key/OAuth token, sent to the server. | `""` |

## K8s Watcher Configuration (`chatcli watch` / `chatcli server --watch-*`)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_WATCH_DEPLOYMENT` | Name of the Kubernetes deployment to monitor. | `""` |
| `CHATCLI_WATCH_NAMESPACE` | Deployment namespace. | `default` |
| `CHATCLI_WATCH_INTERVAL` | Interval between data collections. Accepts Go durations (e.g., `10s`, `1m`). | `30s` |
| `CHATCLI_WATCH_WINDOW` | Time window of data kept in memory. | `2h` |
| `CHATCLI_WATCH_MAX_LOG_LINES` | Maximum number of log lines collected per pod. | `100` |
| `CHATCLI_KUBECONFIG` | Path to the kubeconfig file (optional; uses the default if not set). | auto-detected |
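
Tying the watcher variables together, monitoring a single deployment could look like this (the deployment and namespace names are placeholders):

```shell
CHATCLI_WATCH_DEPLOYMENT=my-api
CHATCLI_WATCH_NAMESPACE=default
CHATCLI_WATCH_INTERVAL=30s
CHATCLI_WATCH_WINDOW=2h
CHATCLI_WATCH_MAX_LOG_LINES=100
# CHATCLI_KUBECONFIG=/path/to/kubeconfig   # optional; auto-detected otherwise
```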