This page lists all environment variables that ChatCLI recognizes. Configure them in your `.env` file or via `export` in the shell.

General

| Variable | Description | Default |
| --- | --- | --- |
| `LLM_PROVIDER` | Active provider: `OPENAI`, `CLAUDEAI`, `GOOGLEAI`, `XAI`, `COPILOT`, `STACKSPOT`, `OLLAMA` | Auto-detected |
| `CHATCLI_DOTENV` | Custom path for the `.env` file | `.env` |
| `CHATCLI_LANG` | Force interface language (`pt-BR`, `en`) | Auto-detected |
| `CHATCLI_IGNORE` | Path to `.chatignore` rules file | Auto-detected |
| `LOG_LEVEL` | Log level: `debug`, `info`, `warn`, `error` | `info` |
| `LOG_FILE` | Log file path | `~/.chatcli/app.log` |
| `LOG_MAX_SIZE` | Maximum log size before rotation | `100MB` |
| `ENV` | Display mode: `dev` (terminal + file), `prod` (file only) | `dev` |
| `MAX_RETRIES` | Maximum retries for API calls | `5` |
| `INITIAL_BACKOFF` | Initial time between retries (seconds) | `3` |
| `HISTORY_FILE` | Path to history file (supports `~`) | `.chatcli_history` |
| `HISTORY_MAX_SIZE` | Maximum history size before rotation | `100MB` |
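Any of these variables can be set per session with `export` instead of persisting them in `.env`. A minimal sketch (values and paths are illustrative):

```shell
# Session-only overrides; these take effect for the current shell only
export LOG_LEVEL=debug
export LOG_FILE="$HOME/.chatcli/debug.log"   # illustrative path
export MAX_RETRIES=3

# Point ChatCLI at a non-default .env file (illustrative path)
export CHATCLI_DOTENV="$HOME/projects/myapp/.env"
```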

LLM Providers

OpenAI

| Variable | Description | Default |
| --- | --- | --- |
| `OPENAI_API_KEY` | API key | |
| `OPENAI_MODEL` | Model to use | `gpt-4o-mini` |
| `OPENAI_ASSISTANT_MODEL` | Model for assistant (Agent mode) | `gpt-4o-mini` |
| `OPENAI_MAX_TOKENS` | Response token limit | `60000` |
| `OPENAI_USE_RESPONSES` | Use the Responses API (e.g., for gpt-5) | `false` |

Anthropic (Claude)

| Variable | Description | Default |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | API key | |
| `ANTHROPIC_MODEL` | Model to use | `claude-sonnet-4-5` |
| `ANTHROPIC_MAX_TOKENS` | Response token limit | `20000` |
| `ANTHROPIC_API_VERSION` | API version | `2023-06-01` |

Google AI (Gemini)

| Variable | Description | Default |
| --- | --- | --- |
| `GOOGLEAI_API_KEY` | API key | |
| `GOOGLEAI_MODEL` | Model to use | `gemini-2.5-flash` |
| `GOOGLEAI_MAX_TOKENS` | Response token limit | `50000` |

xAI (Grok)

| Variable | Description | Default |
| --- | --- | --- |
| `XAI_API_KEY` | API key | |
| `XAI_MODEL` | Model to use | `grok-4-latest` |
| `XAI_MAX_TOKENS` | Response token limit | `50000` |

Ollama (Local Models)

| Variable | Description | Default |
| --- | --- | --- |
| `OLLAMA_ENABLED` | Enable the Ollama API (required) | `false` |
| `OLLAMA_BASE_URL` | Ollama server URL | `http://localhost:11434` |
| `OLLAMA_MODEL` | Model to use | |
| `OLLAMA_MAX_TOKENS` | Response token limit | `5000` |
| `OLLAMA_FILTER_THINKING` | Filter intermediate reasoning from models like Qwen3 | `true` |

For Agent mode to work well with Ollama models that "think out loud" (e.g., Qwen3, Llama3), keep `OLLAMA_FILTER_THINKING=true`.
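A minimal local setup might look like the sketch below; the model name is illustrative, so use whichever model you have pulled into Ollama:

```shell
# Minimal Ollama configuration; OLLAMA_ENABLED=true is required
LLM_PROVIDER=OLLAMA
OLLAMA_ENABLED=true
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen3:8b              # illustrative model name
OLLAMA_FILTER_THINKING=true        # recommended for "thinking" models in Agent mode
```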

GitHub Copilot

| Variable | Description | Default |
| --- | --- | --- |
| `GITHUB_COPILOT_TOKEN` | Authentication token | |
| `COPILOT_MODEL` | Model to use | |
| `COPILOT_MAX_TOKENS` | Response token limit | |
| `COPILOT_API_BASE_URL` | API base URL | |
| `CHATCLI_COPILOT_CLIENT_ID` | Custom client ID | |

StackSpot

| Variable | Description | Default |
| --- | --- | --- |
| `CLIENT_ID` | Client ID | |
| `CLIENT_KEY` | Client key | |
| `STACKSPOT_REALM` | Realm/tenant | |
| `STACKSPOT_AGENT_ID` | Agent ID | |

Agent Mode

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_AGENT_CMD_TIMEOUT` | Timeout per executed command (Go duration: `30s`, `2m`, `10m`) | `10m` |
| `CHATCLI_AGENT_DENYLIST` | Extra regex patterns to block commands (separated by `;`) | |
| `CHATCLI_AGENT_ALLOW_SUDO` | Allow `sudo` without automatic blocking | `false` |
| `CHATCLI_AGENT_PLUGIN_MAX_TURNS` | Maximum agent turns | `50` |
| `CHATCLI_AGENT_PLUGIN_TIMEOUT` | Total agent plugin timeout | `15m` |
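A sketch of tightened agent limits; the deny patterns and timeout values are illustrative, not recommendations:

```shell
# Shorter per-command and overall timeouts (Go duration syntax)
CHATCLI_AGENT_CMD_TIMEOUT=2m
CHATCLI_AGENT_PLUGIN_TIMEOUT=10m

# Extra deny patterns: regexes separated by ';' (illustrative patterns)
CHATCLI_AGENT_DENYLIST=rm\s+-rf\s+/;mkfs\..*

# Keep sudo blocked (the default)
CHATCLI_AGENT_ALLOW_SUDO=false
```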

Multi-Agent (Parallel Orchestration)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_AGENT_PARALLEL_MODE` | Enable parallel multi-agent orchestration | `true` |
| `CHATCLI_AGENT_MAX_WORKERS` | Maximum simultaneous workers (goroutines) | `4` |
| `CHATCLI_AGENT_WORKER_MAX_TURNS` | Maximum turns per worker | `10` |
| `CHATCLI_AGENT_WORKER_TIMEOUT` | Timeout per individual worker | `5m` |

Coder Mode

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_CODER_UI` | UI style: `full` or `minimal` | `full` |
| `CHATCLI_CODER_BANNER` | Show command cheat sheet | `true` |

Provider Fallback

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_FALLBACK_PROVIDERS` | Comma-separated list of providers | |
| `CHATCLI_FALLBACK_MODEL_<PROVIDER>` | Specific model per provider in the chain | |
| `CHATCLI_FALLBACK_MAX_RETRIES` | Retries per provider before advancing | `2` |
| `CHATCLI_FALLBACK_COOLDOWN_BASE` | Base cooldown after failure | `30s` |
| `CHATCLI_FALLBACK_COOLDOWN_MAX` | Maximum cooldown (exponential backoff) | `5m` |
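A sketch of a fallback chain with per-provider model overrides; the model names simply reuse the defaults documented above and are illustrative:

```shell
# Try CLAUDEAI first, then OPENAI, then GOOGLEAI
CHATCLI_FALLBACK_PROVIDERS=CLAUDEAI,OPENAI,GOOGLEAI

# Per-provider model overrides for the chain (illustrative model names)
CHATCLI_FALLBACK_MODEL_OPENAI=gpt-4o-mini
CHATCLI_FALLBACK_MODEL_GOOGLEAI=gemini-2.5-flash

# Retry twice per provider before advancing; back off between failures
CHATCLI_FALLBACK_MAX_RETRIES=2
CHATCLI_FALLBACK_COOLDOWN_BASE=30s
```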

MCP (Model Context Protocol)

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_MCP_ENABLED` | Enable the MCP manager | `false` |
| `CHATCLI_MCP_CONFIG` | Path to MCP configuration JSON | `~/.chatcli/mcp_servers.json` |

Bootstrap and Memory

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_BOOTSTRAP_ENABLED` | Enable bootstrap file loading | `false` |
| `CHATCLI_BOOTSTRAP_DIR` | Bootstrap files directory | |
| `CHATCLI_MEMORY_ENABLED` | Enable the persistent memory system | `false` |

Metrics and Observability

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_METRICS_PORT` | HTTP port for exporting Prometheus metrics (`0` = disabled) | `9090` |

Security

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_SAFETY_ENABLED` | Enable configurable safety rules | `false` |
| `CHATCLI_GRPC_REFLECTION` | Enable gRPC reflection on the server (use only in dev) | `false` |
| `CHATCLI_DISABLE_VERSION_CHECK` | Disable the automatic version check | `false` |
| `CHATCLI_LATEST_VERSION_URL` | Custom URL for the version check | GitHub API |

OAuth

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_OPENAI_CLIENT_ID` | Override the OpenAI OAuth client ID | |

Remote Server

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_SERVER_PORT` | gRPC server port | `50051` |
| `CHATCLI_SERVER_TOKEN` | Authentication token | |
| `CHATCLI_SERVER_TLS_CERT` | TLS certificate path | |
| `CHATCLI_SERVER_TLS_KEY` | TLS key path | |

Remote Client

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_REMOTE_ADDR` | Remote server address | |
| `CHATCLI_REMOTE_TOKEN` | Authentication token | |
| `CHATCLI_CLIENT_API_KEY` | Your API key (sent to the server) | |
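The server and client variables pair up as in the sketch below; the token, hostname, and certificate paths are all illustrative placeholders:

```shell
# On the server machine (illustrative token and TLS paths)
CHATCLI_SERVER_PORT=50051
CHATCLI_SERVER_TOKEN=change-me
CHATCLI_SERVER_TLS_CERT=/etc/chatcli/tls/server.crt
CHATCLI_SERVER_TLS_KEY=/etc/chatcli/tls/server.key

# On the client machine (illustrative address; token must match the server's)
CHATCLI_REMOTE_ADDR=server.example.com:50051
CHATCLI_REMOTE_TOKEN=change-me
```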

K8s Watcher

| Variable | Description | Default |
| --- | --- | --- |
| `CHATCLI_WATCH_DEPLOYMENT` | Single deployment (legacy) | |
| `CHATCLI_WATCH_NAMESPACE` | Deployment namespace | `default` |
| `CHATCLI_WATCH_INTERVAL` | Collection interval | `30s` |
| `CHATCLI_WATCH_WINDOW` | Observation window | `2h` |
| `CHATCLI_WATCH_MAX_LOG_LINES` | Maximum log lines per pod | `100` |
| `CHATCLI_WATCH_CONFIG` | Path to multi-target YAML config | |
| `CHATCLI_KUBECONFIG` | Kubeconfig path | Auto-detected |
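A sketch of watching a single deployment (the legacy single-target form); the deployment and namespace names are illustrative:

```shell
# Watch one deployment's pods (illustrative names)
CHATCLI_WATCH_DEPLOYMENT=my-api
CHATCLI_WATCH_NAMESPACE=production

# Collect every minute over a 2-hour observation window
CHATCLI_WATCH_INTERVAL=1m
CHATCLI_WATCH_WINDOW=2h
```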

Complete .env Example

```shell
# General
LOG_LEVEL=info
CHATCLI_LANG=pt-BR
ENV=prod
LLM_PROVIDER=CLAUDEAI

# Primary provider
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxxxxxx
ANTHROPIC_MODEL=claude-sonnet-4-5
ANTHROPIC_MAX_TOKENS=20000

# Fallback
CHATCLI_FALLBACK_PROVIDERS=CLAUDEAI,OPENAI,GOOGLEAI
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx
GOOGLEAI_API_KEY=AIzaxxxxxxxxxxxxxxxxxxxxxxxx

# Agent
CHATCLI_AGENT_CMD_TIMEOUT=2m
CHATCLI_AGENT_ALLOW_SUDO=false

# Multi-Agent
CHATCLI_AGENT_PARALLEL_MODE=true
CHATCLI_AGENT_MAX_WORKERS=4

# Bootstrap and Memory
CHATCLI_BOOTSTRAP_ENABLED=true
CHATCLI_MEMORY_ENABLED=true
```