Found a problem? This page lists the most common errors and how to fix them.
For more detailed diagnostics, enable debug logging: LOG_LEVEL=debug chatcli

Common Problems

Command not found

Symptoms: bash: chatcli: command not found or zsh: command not found: chatcli.
Cause: The Go binary directory is not in your system’s PATH.
Solution:
  1. Open your shell configuration file (~/.bashrc, ~/.zshrc, etc.)
  2. Add the following to the end of the file:
export PATH=$PATH:$(go env GOPATH)/bin
  3. Restart your terminal or run source on the file you edited (e.g. source ~/.zshrc)
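The steps above can be verified with a short shell snippet (a sketch: it falls back to the default ~/go directory when the go toolchain itself is not yet on PATH):

```shell
# Compute the Go binary directory and report whether it is on PATH.
GOBIN="$(go env GOPATH 2>/dev/null || echo "$HOME/go")/bin"
case ":$PATH:" in
  *":$GOBIN:"*) echo "on PATH: $GOBIN" ;;
  *)            echo "missing from PATH, add: export PATH=\$PATH:$GOBIN" ;;
esac
```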
ChatCLI exits immediately

Symptoms: ChatCLI terminates immediately after starting.
Cause: No API key has been configured in the .env file.
Solution:
  1. Create or open the .env file in the directory where you run chatcli (or in your HOME)
  2. Add credentials for at least one provider:
LLM_PROVIDER=OPENAI
OPENAI_API_KEY="sk-your-secret-key-here"
  3. Save the file and run chatcli again
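Before relaunching, you can confirm that the .env actually contains a key (a sketch; it matches any variable ending in _API_KEY, following the naming pattern used on this page):

```shell
# Look for any *_API_KEY entry in the local .env file.
if grep -qE '^[A-Z_]+_API_KEY=' .env 2>/dev/null; then
  echo "API key found in .env"
else
  echo "no API key in .env (or file missing)"
fi
```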
Configuration changes are ignored

Symptoms: You changed LLM_PROVIDER or another value, but ChatCLI uses the old configuration.
Cause: ChatCLI loads its configuration once, at startup.
Solution: Use the /reload command in interactive mode to reload the variables instantly:
/reload
@file cannot find the file

Symptoms: Error “file does not exist” or “path not found”.
Solution:
  • Relative paths — resolved from the directory where you ran chatcli. E.g.: @file ./project/src/main.go
  • Home (~) — works as a shortcut: @file ~/documents/notes.txt
  • Permissions — check with ls -l that you have read permission on the file
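The three checks above can be combined into one quick test (a sketch; the path is the example from this section):

```shell
# Report whether a file exists and is readable before passing it to @file.
f="$HOME/documents/notes.txt"   # example path from this section
if [ -r "$f" ]; then
  echo "readable: $f"
elif [ -e "$f" ]; then
  echo "exists but not readable (check ls -l): $f"
else
  echo "does not exist: $f"
fi
```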
Agent Mode waits instead of executing

Symptoms: The AI presents an action plan but just waits without executing.
Cause: This is the expected behavior! Agent Mode requires explicit approval before running commands.
Solution:
  • Type the command number (e.g., 1) to execute it individually
  • Type a to execute all commands in sequence
  • Use --agent-auto-exec in one-shot mode for automatic execution of safe commands
Ollama provider not available

Symptoms: Even with OLLAMA_ENABLED=true, the provider does not appear as available.
Solution:
  1. Ollama Server — confirm it is running: ollama serve
  2. Model downloaded — check with ollama list. If empty, download one: ollama pull llama3
  3. Base URL — if not at the default address, set it in the .env:
OLLAMA_BASE_URL="http://localhost:11434"
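A quick reachability probe covering the first and third checks (a sketch: /api/tags is Ollama's model-listing endpoint, and the port is the default shown above):

```shell
# Probe the Ollama server; a successful response from /api/tags means it is up.
URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -sf "$URL/api/tags" >/dev/null 2>&1; then
  echo "ollama reachable at $URL"
else
  echo "ollama not reachable at $URL (is 'ollama serve' running?)"
fi
```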
Responses are cut off

Symptoms: The AI response stops in the middle of a sentence.
Cause: The response token limit was reached.
Solution: Increase MAX_TOKENS for your provider in the .env:
OPENAI_MAX_TOKENS=60000
ANTHROPIC_MAX_TOKENS=20000
GOOGLEAI_MAX_TOKENS=50000
Timeouts or hangs

Symptoms: Timeout error, or the application hangs waiting for a response.
Solution:
  • Use --timeout to set a limit: chatcli -p "question" --timeout 60s
  • Configure MAX_RETRIES and INITIAL_BACKOFF in the .env for automatic retries
  • Consider configuring Provider Fallback for redundancy
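The retry settings mentioned above might look like this in the .env (MAX_RETRIES and INITIAL_BACKOFF are the variable names from this page; the values are only illustrative):

```shell
MAX_RETRIES=3
INITIAL_BACKOFF=2s
```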

Still having problems?

Debug Logs

Run with LOG_LEVEL=debug and check ~/.chatcli/app.log for full error details.
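A quick way to pull the relevant lines out of that log (a sketch; the log path is the one given above):

```shell
# Print the last error-level lines from the ChatCLI debug log, if present.
LOGFILE="$HOME/.chatcli/app.log"
if [ -f "$LOGFILE" ]; then
  grep -i 'error' "$LOGFILE" | tail -n 20
else
  echo "no log file at $LOGFILE (run with LOG_LEVEL=debug first)"
fi
```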

Open an Issue

Include: version (chatcli --version), OS, Go version, steps to reproduce, and relevant logs.