.env file in the project root or in your HOME directory.
## Priority Order

## General Configuration
| Variable | Description | Default |
|---|---|---|
| `ENV` | Sets the environment. If `dev`, logs are shown in the terminal and saved to the app log; if `prod`, only to the log file. Valid values: `dev`, `prod`. | `dev` |
| `LLM_PROVIDER` | Sets the default AI provider. Valid values: `OPENAI`, `CLAUDEAI`, `GOOGLEAI`, `XAI`, `OLLAMA`, `STACKSPOT`, `COPILOT`. | `OPENAI` |
| `CHATCLI_LANG` | Sets the interface language. Values: `pt-BR`, `en-US`. If not set, the system language is detected. | `en-US` |
| `LOG_LEVEL` | Log level. Options: `debug`, `info`, `warn`, `error`. | `info` |
| `LOG_FILE` | Path to the log file. | `$HOME/.chatcli/app.log` |
| `LOG_MAX_SIZE` | Maximum log file size before rotation. Accepts values such as `100MB`, `50KB`. | `100MB` |
| `HISTORY_MAX_SIZE` | Maximum history file (`.chatcli_history`) size before rotation. | `100MB` |
| `HISTORY_FILE` | Custom path for the history file (supports `~`; by default the history is created in the directory where `chatcli` was executed). | `.chatcli_history` |
| `CHATCLI_DOTENV` | Custom path for your `.env` file. | `.env` |
| `CHATCLI_IGNORE` | Path to an ignore file (e.g., `.chatignore`). When set, it takes priority over the project/global ignore files. | `""` |
| `CHATCLI_CODER_UI` | `/coder` mode UI style: `full` (default) or `minimal`. | `full` |
| `CHATCLI_CODER_BANNER` | Display the `/coder` quick cheat sheet when entering the session (`true`/`false`). | `true` |
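As an illustration, a minimal `.env` combining several of the general settings above might look like this (all values are examples, not recommendations):

```env
# General ChatCLI settings (example values)
ENV=dev
LLM_PROVIDER=CLAUDEAI
CHATCLI_LANG=en-US
LOG_LEVEL=debug
LOG_MAX_SIZE=50MB
CHATCLI_CODER_UI=minimal
```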
## OAuth Authentication

In addition to traditional API keys, ChatCLI supports OAuth authentication for OpenAI, Anthropic, and GitHub Copilot. With OAuth, you can use your existing plan (ChatGPT Plus, Codex, Claude Pro, GitHub Copilot) without generating API keys.

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_AUTH_DIR` | Directory where OAuth credentials are stored. | `~/.chatcli/` |
| `CHATCLI_OPENAI_CLIENT_ID` | Overrides the OpenAI OAuth client ID. | (internal) |
| `CHATCLI_COPILOT_CLIENT_ID` | Overrides the GitHub Copilot OAuth client ID. | (internal) |

OAuth credentials are stored encrypted in `~/.chatcli/auth-profiles.json`. The encryption key is automatically generated and saved in `~/.chatcli/.auth-key` (permission `0600`).

Use `/auth login openai-codex`, `/auth login anthropic`, or `/auth login github-copilot` in interactive mode to start the OAuth flow. See the full OAuth documentation for more details.
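If you keep dotfiles outside your home directory, the credential store can be relocated; a small sketch, assuming `CHATCLI_AUTH_DIR` behaves as described above (the path is an example):

```env
# Relocate OAuth credential storage (example path)
CHATCLI_AUTH_DIR=~/.config/chatcli/auth
```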
## Provider Configuration

### OpenAI

| Variable | Description | Required? |
|---|---|---|
| `OPENAI_API_KEY` | Your secret OpenAI API key. Alternative: use `/auth login openai-codex` for OAuth. | Yes* |
| `OPENAI_MODEL` | The model to use. E.g.: `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`. | No |
| `OPENAI_ASSISTANT_MODEL` | The model to use specifically for the Assistants API. | No |
| `OPENAI_USE_RESPONSES` | Set to `true` to use the `v1/responses` API instead of `v1/chat/completions`. | No |
| `OPENAI_MAX_TOKENS` | Maximum tokens for the session (depends on the model). | No |
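A sketch of an OpenAI configuration using an API key (the key value is a placeholder; the model choice is just an example):

```env
# OpenAI via API key (placeholder key; replace with your own)
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
OPENAI_USE_RESPONSES=true
```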
### Anthropic (Claude)

| Variable | Description | Required? |
|---|---|---|
| `ANTHROPIC_API_KEY` | Your secret Anthropic API key. Alternative: use `/auth login anthropic` for OAuth. | Yes* |
| `ANTHROPIC_MODEL` | The model to use. E.g.: `claude-sonnet-4-5`, `claude-opus-4-6`, `claude-sonnet-4`. | No |
| `ANTHROPIC_API_VERSION` | The Anthropic API version to send in headers. | No |
| `ANTHROPIC_MAX_TOKENS` | Maximum tokens for the session (depends on the model). | No |
### Google (Gemini)

| Variable | Description | Required? |
|---|---|---|
| `GOOGLEAI_API_KEY` | Your Google AI Studio API key. | Yes |
| `GOOGLEAI_MODEL` | The model to use. E.g.: `gemini-2.5-pro`, `gemini-2.5-flash`. | No |
| `GOOGLEAI_MAX_TOKENS` | Maximum tokens for the session (depends on the model). | No |
### xAI (Grok)

| Variable | Description | Required? |
|---|---|---|
| `XAI_API_KEY` | Your secret xAI API key. | Yes |
| `XAI_MODEL` | The model to use. E.g.: `grok-4-fast`, `grok-3`. | No |
| `XAI_MAX_TOKENS` | Maximum tokens for the session (depends on the model). | No |
### Ollama (Local)

| Variable | Description | Required? |
|---|---|---|
| `OLLAMA_ENABLED` | Set to `true` to enable the Ollama provider. | Yes |
| `OLLAMA_BASE_URL` | Base URL of your local Ollama server. | No |
| `OLLAMA_MODEL` | Name of the local model to use (e.g., `llama3`, `codellama`). | No |
| `OLLAMA_FILTER_THINKING` | Filters intermediate reasoning ("thinking") output from responses of models such as Qwen3 and llama3. Default: `true`. | No |
| `OLLAMA_MAX_TOKENS` | Maximum tokens for the Ollama provider. | No |
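A sketch of a local setup, assuming Ollama is listening on its default port (`11434`); the model name is an example:

```env
# Local Ollama provider (assumes Ollama's default port)
OLLAMA_ENABLED=true
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3
OLLAMA_FILTER_THINKING=true
```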
### StackSpot

| Variable | Description | Required? |
|---|---|---|
| `CLIENT_ID` | StackSpot client ID credential. | Yes |
| `CLIENT_KEY` | StackSpot client key credential. | Yes |
| `STACKSPOT_REALM` | Your organization's realm (tenant) on StackSpot. | Yes |
| `STACKSPOT_AGENT_ID` | The ID of the specific agent to use. | Yes |
### GitHub Copilot

| Variable | Description | Required? |
|---|---|---|
| `GITHUB_COPILOT_TOKEN` | GitHub Copilot OAuth token. Alternative: use `/auth login github-copilot` for the Device Flow. | Yes* |
| `COPILOT_MODEL` | The model to use. E.g.: `gpt-4o`, `claude-sonnet-4`, `gemini-2.0-flash`. | No |
| `COPILOT_MAX_TOKENS` | Maximum tokens for the response. | No |
| `COPILOT_API_BASE_URL` | Copilot API base URL (for enterprise environments). | No |

\* For OpenAI, Anthropic, and GitHub Copilot, the API key is required only if you are not using OAuth authentication (`/auth login`). Both methods can coexist.
## Agent Mode Configuration

| Variable | Description |
|---|---|
| `CHATCLI_AGENT_ALLOW_SUDO` | Set to `true` to allow the agent to suggest and execute commands with `sudo`. Use with extreme caution. |
| `CHATCLI_AGENT_DENYLIST` | List of regex patterns (separated by `;`) to block additional commands in agent mode. |
| `CHATCLI_AGENT_CMD_TIMEOUT` | Timeout for a single command execution by the agent (default: `10m`, maximum: `1h`). |
| `CHATCLI_AGENT_PLUGIN_MAX_TURNS` | Maximum number of agent turns in `/agent` and `/coder` mode (default: 50, maximum: 200). |
| `CHATCLI_AGENT_PLUGIN_TIMEOUT` | Total agent plugin timeout (default: `15m`). |
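For example, the denylist accepts multiple regex patterns joined by `;`. The patterns below are illustrative only, not an exhaustive safety policy:

```env
# Block additional destructive commands in agent mode (illustrative patterns)
CHATCLI_AGENT_DENYLIST=rm\s+-rf\s+/;dd\s+if=;mkfs\.
CHATCLI_AGENT_CMD_TIMEOUT=5m
```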
## Multi-Agent (Parallel Orchestration)

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_AGENT_PARALLEL_MODE` | Enables multi-agent mode with parallel orchestration. The orchestrator LLM dispatches specialist agents in parallel. | `false` |
| `CHATCLI_AGENT_MAX_WORKERS` | Maximum number of workers (goroutines) executing agents simultaneously. | `4` |
| `CHATCLI_AGENT_WORKER_MAX_TURNS` | Maximum turns in each worker agent's mini ReAct loop. | `10` |
| `CHATCLI_AGENT_WORKER_TIMEOUT` | Timeout per individual worker agent. Accepts Go durations (e.g., `30s`, `2m`, `10m`). | `5m` |
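A sketch of a tuned parallel setup; the worker count and timeout are example values, not recommendations:

```env
# Parallel orchestration (example tuning)
CHATCLI_AGENT_PARALLEL_MODE=true
CHATCLI_AGENT_MAX_WORKERS=8
CHATCLI_AGENT_WORKER_TIMEOUT=2m
```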
For complete details on the multi-agent system, see the Multi-Agent Orchestration documentation.
## Server Mode Configuration (`chatcli server`)

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_SERVER_PORT` | gRPC server port. | `50051` |
| `CHATCLI_SERVER_TOKEN` | Server authentication token. Empty = no authentication. | `""` |
| `CHATCLI_SERVER_TLS_CERT` | Path to the server TLS certificate. | `""` |
| `CHATCLI_SERVER_TLS_KEY` | Path to the server TLS key. | `""` |
| `CHATCLI_GRPC_REFLECTION` | Enables gRPC reflection for debugging. Keep disabled in production. | `false` |
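A sketch of a hardened server configuration; the token and certificate paths are placeholders:

```env
# gRPC server with token auth and TLS (placeholder token and paths)
CHATCLI_SERVER_PORT=50051
CHATCLI_SERVER_TOKEN=change-me
CHATCLI_SERVER_TLS_CERT=/etc/chatcli/tls/server.crt
CHATCLI_SERVER_TLS_KEY=/etc/chatcli/tls/server.key
```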
## Provider Fallback

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_FALLBACK_PROVIDERS` | Comma-separated list of providers for automatic failover. E.g.: `OPENAI,CLAUDEAI,GOOGLEAI`. | `""` |
| `CHATCLI_FALLBACK_MODEL_<PROVIDER>` | Specific model per provider in the chain. E.g.: `CHATCLI_FALLBACK_MODEL_CLAUDEAI=claude-sonnet-4-20250514`. | (default model) |
| `CHATCLI_FALLBACK_MAX_RETRIES` | Retries per provider before advancing to the next in the chain. | `2` |
| `CHATCLI_FALLBACK_COOLDOWN_BASE` | Base cooldown duration after a provider failure. | `30s` |
| `CHATCLI_FALLBACK_COOLDOWN_MAX` | Maximum cooldown duration (exponential backoff). | `5m` |
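Putting the fallback variables together, a chain that fails over from OpenAI to Claude to Gemini might look like this (the per-provider model is the example given above):

```env
# Fail over OPENAI -> CLAUDEAI -> GOOGLEAI (example chain)
CHATCLI_FALLBACK_PROVIDERS=OPENAI,CLAUDEAI,GOOGLEAI
CHATCLI_FALLBACK_MODEL_CLAUDEAI=claude-sonnet-4-20250514
CHATCLI_FALLBACK_MAX_RETRIES=2
CHATCLI_FALLBACK_COOLDOWN_BASE=30s
```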
For complete details, see the Provider Fallback documentation.
## MCP (Model Context Protocol)

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_MCP_ENABLED` | Enables the MCP server manager. | `false` |
| `CHATCLI_MCP_CONFIG` | Path to the MCP server configuration JSON file. | `~/.chatcli/mcp_servers.json` |
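The exact schema of `mcp_servers.json` is defined in the MCP documentation; as an assumption-laden sketch in the de facto MCP client format (a server name mapped to a launch command), it might look like:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```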
For complete details, see the MCP documentation.
## Bootstrap and Memory

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_BOOTSTRAP_ENABLED` | Enables loading bootstrap files (`SOUL.md`, `USER.md`, etc.) into the system prompt. | `false` |
| `CHATCLI_BOOTSTRAP_DIR` | Directory containing bootstrap files. | `~/.chatcli/bootstrap/` |
| `CHATCLI_MEMORY_ENABLED` | Enables the persistent memory system (`MEMORY.md` + daily notes). | `false` |
| `CHATCLI_SAFETY_ENABLED` | Enables configurable safety rules in the agent shell. | `false` |
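Since all of these features default to off, enabling bootstrap context and persistent memory together is a two-line change; a minimal sketch:

```env
# Enable bootstrap context and persistent memory (example)
CHATCLI_BOOTSTRAP_ENABLED=true
CHATCLI_MEMORY_ENABLED=true
```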
For complete details, see the Bootstrap and Memory documentation.
## Skill Registry (Multi-Registry)

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_REGISTRY_URLS` | Additional registry URLs, separated by commas. Each URL is added as an enabled custom registry. | `""` |
| `CHATCLI_REGISTRY_DISABLE` | Registry names to disable, separated by commas. E.g.: `clawhub,chatcli`. | `""` |
| `CHATCLI_SKILL_INSTALL_DIR` | Directory where skills installed via registry are saved. | `~/.chatcli/skills` |

Registries are configured in the `~/.chatcli/registries.yaml` file (automatically created with the default registries `chatcli` and `clawhub`). The environment variables above serve as overrides.
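For example, adding a custom registry while disabling a default one could look like this (the registry URL is a hypothetical example):

```env
# Add a custom registry and disable a default one (example URL)
CHATCLI_REGISTRY_URLS=https://registry.example.com/skills.json
CHATCLI_REGISTRY_DISABLE=clawhub
```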
For complete details, see the Skill Registry documentation.
## Security and Control

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_DISABLE_VERSION_CHECK` | Disables the automatic version check on startup. Useful for air-gapped environments or CI/CD. | `false` |
| `CHATCLI_GRPC_REFLECTION` | Enables gRPC server reflection (exposes the service schema). | `false` |
For complete details on security, see the Security and Hardening documentation.
## Remote Client Configuration (`chatcli connect`)

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_REMOTE_ADDR` | Remote server address (`host:port`). | `""` |
| `CHATCLI_REMOTE_TOKEN` | Authentication token to connect to the server. | `""` |
| `CHATCLI_CLIENT_API_KEY` | Your own API key/OAuth token, sent to the server. | `""` |
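A sketch of a client-side configuration; the address and token are placeholders and must match what the server was started with:

```env
# Connect to a remote ChatCLI server (placeholder address and token)
CHATCLI_REMOTE_ADDR=chatcli.internal:50051
CHATCLI_REMOTE_TOKEN=change-me
```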
## K8s Watcher Configuration (`chatcli watch` / `chatcli server --watch-*`)

| Variable | Description | Default |
|---|---|---|
| `CHATCLI_WATCH_DEPLOYMENT` | Name of the Kubernetes deployment to monitor. | `""` |
| `CHATCLI_WATCH_NAMESPACE` | Deployment namespace. | `default` |
| `CHATCLI_WATCH_INTERVAL` | Interval between data collections. Accepts Go durations (e.g., `10s`, `1m`). | `30s` |
| `CHATCLI_WATCH_WINDOW` | Time window of data kept in memory. | `2h` |
| `CHATCLI_WATCH_MAX_LOG_LINES` | Maximum number of log lines collected per pod. | `100` |
| `CHATCLI_KUBECONFIG` | Path to kubeconfig (optional; uses the default if not set). | Auto-detected |
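A sketch of a watcher configuration; the deployment name and namespace are hypothetical:

```env
# Watch a deployment (hypothetical name/namespace)
CHATCLI_WATCH_DEPLOYMENT=payments-api
CHATCLI_WATCH_NAMESPACE=production
CHATCLI_WATCH_INTERVAL=15s
CHATCLI_WATCH_WINDOW=1h
```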