Common Problems
chatcli command not found

Symptoms: `bash: chatcli: command not found` or `zsh: command not found: chatcli`.

Cause: The Go binary directory is not in your system's PATH.

Solution:
- Open your shell configuration file (`~/.bashrc`, `~/.zshrc`, etc.)
- Add the Go binary directory to your `PATH` at the end of the file
- Restart your terminal or run `source ~/.zshrc`
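The line to append depends on where Go installs binaries; assuming the default `GOPATH` of `~/go`, it is typically:

```shell
# Append the Go binary directory to PATH (GOPATH defaults to ~/go)
export PATH="$PATH:$HOME/go/bin"
```

If you use a custom `GOPATH`, run `go env GOPATH` to find the right directory.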
Error "No LLM provider is configured"

Symptoms: ChatCLI terminates immediately after starting.

Cause: No API key has been configured in the `.env` file.

Solution:
- Create or open the `.env` file in the directory where you run `chatcli` (or in your `$HOME`)
- Add credentials for at least one provider
- Save and run `chatcli` again
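A minimal `.env` sketch, assuming OpenAI as the provider. `LLM_PROVIDER` is the variable used elsewhere in this guide; the key variable name shown here (`OPENAI_API_KEY`) is an assumption and should be checked against your provider's documentation:

```env
# Select the provider and supply its API key (values illustrative)
LLM_PROVIDER=OPENAI
OPENAI_API_KEY=sk-your-key-here
```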
Changes to .env have no effect

Symptoms: You changed `LLM_PROVIDER` or another value, but ChatCLI keeps using the old configuration.

Cause: ChatCLI loads its configuration only at startup.

Solution: Use the `/reload` command within interactive mode to reload the variables instantly.
@file cannot find a file/directory
Symptoms: Error "file does not exist" or "path not found".

Solution:
- Relative paths are resolved from the directory where you ran `chatcli`, e.g. `@file ./project/src/main.go`
- Home (`~`) works as a shortcut: `@file ~/documents/notes.txt`
- Permissions: check with `ls -l` that you have read permission on the file
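To rule out path and permission issues quickly, a small shell check can report both cases at once (the path below is the illustrative one from the example):

```shell
# Report whether a file can be read by the current user
check_readable() {
  if [ -r "$1" ]; then
    echo "readable: $1"
  else
    echo "missing or unreadable: $1"
  fi
}

check_readable ./project/src/main.go
```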
Agent Mode does not execute anything

Symptoms: The AI presents an action plan but waits without executing it.

Cause: This is the expected behavior! Agent Mode requires explicit approval.

Solution:
- Type a command's number (e.g., `1`) to execute it individually
- Type `a` to execute all commands in sequence
- Use `--agent-auto-exec` in one-shot mode for automatic execution of safe commands
Ollama provider is not detected

Symptoms: Even with `OLLAMA_ENABLED=true`, the provider does not appear as available.

Solution:
- Ollama server: confirm it is running with `ollama serve`
- Model downloaded: check with `ollama list`; if the list is empty, download one with `ollama pull llama3`
- Base URL: if the server is not at the default address, set it in the `.env`
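If Ollama listens on a non-default host or port, point ChatCLI at it in the `.env`. The variable name `OLLAMA_BASE_URL` is an assumption here and should be verified against the configuration reference; `http://localhost:11434` is Ollama's default address:

```env
# Point ChatCLI at the Ollama server (base URL variable name assumed)
OLLAMA_ENABLED=true
OLLAMA_BASE_URL=http://localhost:11434
```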
Truncated or incomplete responses

Symptoms: The AI's response stops in the middle of a sentence.

Cause: The response token limit was reached.

Solution: Increase the `MAX_TOKENS` value for your provider in the `.env`.
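For example, in the `.env` (the exact variable may be provider-prefixed, so check your provider's section; the value is illustrative):

```env
# Raise the response token limit (value illustrative)
MAX_TOKENS=8192
```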
API call timeout
Symptoms: Timeout error, or the application hangs waiting for a response.

Solution:
- Use `--timeout` to set a limit: `chatcli -p "question" --timeout 60s`
- Configure `MAX_RETRIES` and `INITIAL_BACKOFF` in the `.env` for automatic retries
- Consider configuring Provider Fallback for redundancy
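A sketch of the retry settings in the `.env`. `MAX_RETRIES` and `INITIAL_BACKOFF` are the variables named above; the values and the duration format are illustrative:

```env
# Retry transient API failures with backoff (values illustrative)
MAX_RETRIES=3
INITIAL_BACKOFF=2s
```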
Still having problems?

Debug Logs: run with `LOG_LEVEL=debug` and check `~/.chatcli/app.log` for full error details.

Open an Issue: include your version (`chatcli --version`), OS, Go version, steps to reproduce, and relevant logs.