This page lists all environment variables that ChatCLI recognizes. Configure them in your .env file or via export in the shell.
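Both styles are equivalent; for example, to select a provider and raise the log level:

```shell
# Option 1: in your .env file (read at startup)
LLM_PROVIDER=OPENAI
LOG_LEVEL=debug

# Option 2: exported in the shell before launching chatcli
export LLM_PROVIDER=OPENAI
export LOG_LEVEL=debug
```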
## General

| Variable | Description | Default |
|---|---|---|
| LLM_PROVIDER | Active provider: OPENAI, CLAUDEAI, GOOGLEAI, XAI, COPILOT, STACKSPOT, OLLAMA | Auto-detected |
| CHATCLI_DOTENV | Custom path for the .env file | .env |
| CHATCLI_LANG | Force interface language (pt-BR, en) | Auto-detected |
| CHATCLI_IGNORE | Path to .chatignore rules file | Auto-detected |
| LOG_LEVEL | Log level: debug, info, warn, error | info |
| LOG_FILE | Log file path | ~/.chatcli/app.log |
| LOG_MAX_SIZE | Maximum log size before rotation | 100MB |
| ENV | Display mode: dev (terminal + file), prod (file only) | dev |
| MAX_RETRIES | Maximum retries for API calls | 5 |
| INITIAL_BACKOFF | Initial time between retries (seconds) | 3 |
| HISTORY_FILE | Path to history file (supports ~) | .chatcli_history |
| HISTORY_MAX_SIZE | Maximum history size before rotation | 100MB |
## LLM Providers

### OpenAI

| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | API key | — |
| OPENAI_MODEL | Model to use | gpt-4o-mini |
| OPENAI_ASSISTANT_MODEL | Model for assistant (agent mode) | gpt-4o-mini |
| OPENAI_MAX_TOKENS | Response token limit | 60000 |
| OPENAI_USE_RESPONSES | Use Responses API (e.g., for gpt-5) | false |
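A minimal OpenAI setup only needs the API key; everything else falls back to the defaults shown above:

```shell
LLM_PROVIDER=OPENAI
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx

# Optional overrides (documented defaults shown)
OPENAI_MODEL=gpt-4o-mini
OPENAI_MAX_TOKENS=60000
```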
### Anthropic (Claude)

| Variable | Description | Default |
|---|---|---|
| ANTHROPIC_API_KEY | API key | — |
| ANTHROPIC_MODEL | Model to use | claude-sonnet-4-5 |
| ANTHROPIC_MAX_TOKENS | Response token limit | 20000 |
| ANTHROPIC_API_VERSION | API version | 2023-06-01 |
### Google AI (Gemini)

| Variable | Description | Default |
|---|---|---|
| GOOGLEAI_API_KEY | API key | — |
| GOOGLEAI_MODEL | Model to use | gemini-2.5-flash |
| GOOGLEAI_MAX_TOKENS | Response token limit | 50000 |
### xAI (Grok)

| Variable | Description | Default |
|---|---|---|
| XAI_API_KEY | API key | — |
| XAI_MODEL | Model to use | grok-4-latest |
| XAI_MAX_TOKENS | Response token limit | 50000 |
### Ollama (Local Models)

| Variable | Description | Default |
|---|---|---|
| OLLAMA_ENABLED | Enable Ollama API (required) | false |
| OLLAMA_BASE_URL | Ollama server URL | http://localhost:11434 |
| OLLAMA_MODEL | Model to use | — |
| OLLAMA_MAX_TOKENS | Response token limit | 5000 |
| OLLAMA_FILTER_THINKING | Filter intermediate reasoning from models like Qwen3 | true |

For Agent mode to work well with some Ollama models that "think out loud" (Qwen3, Llama3…), keep OLLAMA_FILTER_THINKING=true.
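Putting the table together, a local Ollama setup might look like this (the model name is an example; use any model you have pulled locally):

```shell
LLM_PROVIDER=OLLAMA
OLLAMA_ENABLED=true
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen3            # example; substitute any locally pulled model
OLLAMA_FILTER_THINKING=true   # recommended for "thinking" models in Agent mode
```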
### GitHub Copilot

| Variable | Description | Default |
|---|---|---|
| GITHUB_COPILOT_TOKEN | Authentication token | — |
| COPILOT_MODEL | Model to use | — |
| COPILOT_MAX_TOKENS | Response token limit | — |
| COPILOT_API_BASE_URL | API base URL | — |
| CHATCLI_COPILOT_CLIENT_ID | Custom client ID | — |
### StackSpot

| Variable | Description | Default |
|---|---|---|
| CLIENT_ID | Client ID | — |
| CLIENT_KEY | Client Key | — |
| STACKSPOT_REALM | Realm/Tenant | — |
| STACKSPOT_AGENT_ID | Agent ID | — |
## Agent Mode

| Variable | Description | Default |
|---|---|---|
| CHATCLI_AGENT_CMD_TIMEOUT | Timeout per executed command (Go duration: 30s, 2m, 10m) | 10m |
| CHATCLI_AGENT_DENYLIST | Extra regex patterns to block commands (separated by ;) | — |
| CHATCLI_AGENT_ALLOW_SUDO | Allow sudo without automatic blocking | false |
| CHATCLI_AGENT_PLUGIN_MAX_TURNS | Maximum agent turns | 50 |
| CHATCLI_AGENT_PLUGIN_TIMEOUT | Total agent plugin timeout | 15m |
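For instance, to tighten the agent's guardrails, the denylist takes plain regex patterns joined with `;`. The patterns below are illustrative, not a built-in rule set:

```shell
CHATCLI_AGENT_CMD_TIMEOUT=2m
# Example patterns blocking destructive commands (quoted so ';' survives shell sourcing)
CHATCLI_AGENT_DENYLIST='rm\s+-rf\s+/;dd\s+if=;mkfs\.'
CHATCLI_AGENT_ALLOW_SUDO=false
```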
## Multi-Agent (Parallel Orchestration)

| Variable | Description | Default |
|---|---|---|
| CHATCLI_AGENT_PARALLEL_MODE | Enable parallel multi-agent orchestration | true |
| CHATCLI_AGENT_MAX_WORKERS | Maximum simultaneous workers (goroutines) | 4 |
| CHATCLI_AGENT_WORKER_MAX_TURNS | Maximum turns per worker | 10 |
| CHATCLI_AGENT_WORKER_TIMEOUT | Timeout per individual worker | 5m |
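These limits bound the total work: at most CHATCLI_AGENT_MAX_WORKERS concurrent workers, each capped by its own turn count and wall-clock timeout. A conservative setup for modest hardware might be:

```shell
CHATCLI_AGENT_PARALLEL_MODE=true
CHATCLI_AGENT_MAX_WORKERS=2        # fewer concurrent goroutines on a small machine
CHATCLI_AGENT_WORKER_MAX_TURNS=10
CHATCLI_AGENT_WORKER_TIMEOUT=3m
```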
## Coder Mode

| Variable | Description | Default |
|---|---|---|
| CHATCLI_CODER_UI | UI style: full or minimal | full |
| CHATCLI_CODER_BANNER | Show command cheat sheet | true |
## Provider Fallback

| Variable | Description | Default |
|---|---|---|
| CHATCLI_FALLBACK_PROVIDERS | Comma-separated list of providers | — |
| `CHATCLI_FALLBACK_MODEL_<PROVIDER>` | Specific model per provider in the chain | — |
| CHATCLI_FALLBACK_MAX_RETRIES | Retries per provider before advancing | 2 |
| CHATCLI_FALLBACK_COOLDOWN_BASE | Base cooldown after failure | 30s |
| CHATCLI_FALLBACK_COOLDOWN_MAX | Maximum cooldown (exponential backoff) | 5m |
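The `<PROVIDER>` placeholder is replaced by a provider name from the chain. For example, a CLAUDEAI → OPENAI chain with a per-provider model override (model name illustrative):

```shell
CHATCLI_FALLBACK_PROVIDERS=CLAUDEAI,OPENAI
CHATCLI_FALLBACK_MODEL_OPENAI=gpt-4o-mini   # applies only when the chain falls back to OpenAI
CHATCLI_FALLBACK_MAX_RETRIES=2
CHATCLI_FALLBACK_COOLDOWN_BASE=30s
```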
## MCP (Model Context Protocol)

| Variable | Description | Default |
|---|---|---|
| CHATCLI_MCP_ENABLED | Enable MCP manager | false |
| CHATCLI_MCP_CONFIG | Path to MCP configuration JSON | ~/.chatcli/mcp_servers.json |
## Bootstrap and Memory

| Variable | Description | Default |
|---|---|---|
| CHATCLI_BOOTSTRAP_ENABLED | Enable bootstrap file loading | false |
| CHATCLI_BOOTSTRAP_DIR | Bootstrap files directory | — |
| CHATCLI_MEMORY_ENABLED | Enable persistent memory system | false |
## Metrics and Observability

| Variable | Description | Default |
|---|---|---|
| CHATCLI_METRICS_PORT | HTTP port to export Prometheus metrics (0 = disabled) | 9090 |
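With the default port, you can inspect the exported metrics locally; the `/metrics` path is the Prometheus convention and an assumption here, not something this page specifies:

```shell
CHATCLI_METRICS_PORT=9090

# While ChatCLI is running (assumes the conventional /metrics endpoint):
# curl http://localhost:9090/metrics
```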
## Security

| Variable | Description | Default |
|---|---|---|
| CHATCLI_SAFETY_ENABLED | Enable configurable safety rules | false |
| CHATCLI_GRPC_REFLECTION | Enable gRPC reflection on server (use only in dev) | false |
| CHATCLI_DISABLE_VERSION_CHECK | Disable automatic version check | false |
| CHATCLI_LATEST_VERSION_URL | Custom URL for version check | GitHub API |
## OAuth

| Variable | Description | Default |
|---|---|---|
| CHATCLI_OPENAI_CLIENT_ID | Override OpenAI OAuth client ID | — |
## Remote Server

| Variable | Description | Default |
|---|---|---|
| CHATCLI_SERVER_PORT | gRPC server port | 50051 |
| CHATCLI_SERVER_TOKEN | Authentication token | — |
| CHATCLI_SERVER_TLS_CERT | TLS certificate path | — |
| CHATCLI_SERVER_TLS_KEY | TLS key path | — |
## Remote Client

| Variable | Description | Default |
|---|---|---|
| CHATCLI_REMOTE_ADDR | Remote server address | — |
| CHATCLI_REMOTE_TOKEN | Authentication token | — |
| CHATCLI_CLIENT_API_KEY | Your API key (sent to server) | — |
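The server and client variables pair up: the token set on the server must match the one the client sends. A hypothetical two-machine setup (all values and paths are examples):

```shell
# On the server machine
CHATCLI_SERVER_PORT=50051
CHATCLI_SERVER_TOKEN=my-shared-secret         # example value
CHATCLI_SERVER_TLS_CERT=/etc/chatcli/tls.crt  # example path
CHATCLI_SERVER_TLS_KEY=/etc/chatcli/tls.key   # example path

# On the client machine
CHATCLI_REMOTE_ADDR=server.example.com:50051  # example address
CHATCLI_REMOTE_TOKEN=my-shared-secret         # must match the server token
```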
## K8s Watcher

| Variable | Description | Default |
|---|---|---|
| CHATCLI_WATCH_DEPLOYMENT | Single deployment (legacy) | — |
| CHATCLI_WATCH_NAMESPACE | Deployment namespace | default |
| CHATCLI_WATCH_INTERVAL | Collection interval | 30s |
| CHATCLI_WATCH_WINDOW | Observation window | 2h |
| CHATCLI_WATCH_MAX_LOG_LINES | Maximum log lines per pod | 100 |
| CHATCLI_WATCH_CONFIG | Path to multi-target YAML config | — |
| CHATCLI_KUBECONFIG | Kubeconfig path | Auto-detected |
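A single-deployment watch (the legacy mode) can be configured entirely from these variables; the deployment name below is hypothetical:

```shell
CHATCLI_WATCH_DEPLOYMENT=my-api   # example deployment name
CHATCLI_WATCH_NAMESPACE=default
CHATCLI_WATCH_INTERVAL=30s
CHATCLI_WATCH_WINDOW=2h
CHATCLI_WATCH_MAX_LOG_LINES=100
```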
## Complete .env Example

```shell
# General
LOG_LEVEL=info
CHATCLI_LANG=pt-BR
ENV=prod
LLM_PROVIDER=CLAUDEAI

# Primary provider
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxxxxxx
ANTHROPIC_MODEL=claude-sonnet-4-5
ANTHROPIC_MAX_TOKENS=20000

# Fallback
CHATCLI_FALLBACK_PROVIDERS=CLAUDEAI,OPENAI,GOOGLEAI
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx
GOOGLEAI_API_KEY=AIzaxxxxxxxxxxxxxxxxxxxxxxxx

# Agent
CHATCLI_AGENT_CMD_TIMEOUT=2m
CHATCLI_AGENT_ALLOW_SUDO=false

# Multi-Agent
CHATCLI_AGENT_PARALLEL_MODE=true
CHATCLI_AGENT_MAX_WORKERS=4

# Bootstrap and Memory
CHATCLI_BOOTSTRAP_ENABLED=true
CHATCLI_MEMORY_ENABLED=true
```