AI Providers Guide
Complete guide to configuring and using different AI providers for intelligent commit message generation.
Supported AI Providers
Claude (Anthropic)
Available Methods: CLI, API
Setup: see Anthropic Claude under Setup Instructions below
Popular Models:
claude-3-5-sonnet-20241022
claude-3-5-haiku-20241022
claude-3-opus-20240229
OpenAI
Available Methods: CLI, API
Setup: see OpenAI GPT under Setup Instructions below
Popular Models:
gpt-4o
gpt-4o-mini
gpt-4-turbo
Google Gemini
Available Methods: API
Setup: see Google Gemini under Setup Instructions below
Popular Models:
gemini-1.5-pro
gemini-1.5-flash
gemini-1.0-pro
System/Local
Available Methods: system AI tools, local models (Ollama), custom scripts
Setup: none required (see System AI under Setup Instructions below)
Popular Models:
ollama/llama3.1
ollama/codellama
custom-script
Setup Instructions
Anthropic Claude
Method 1: CLI (Recommended)
Method 2: API
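A minimal sketch of both methods, assuming Anthropic's standard tooling — the `claude` binary name and the `ANTHROPIC_API_KEY` variable are conventions, not something this tool guarantees:

```bash
# Method 1 (CLI): verify the provider CLI is installed and authenticated
command -v claude || echo "claude CLI not found on PATH"   # binary name may differ by install

# Method 2 (API): store the key where the tool reads it (.lfg/.env)
echo 'ANTHROPIC_API_KEY=sk-ant-your-key-here' >> .lfg/.env
```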
OpenAI GPT
Method 1: CLI
Method 2: API
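A similar sketch for OpenAI, assuming the `openai` CLI that ships with OpenAI's Python package and the conventional `OPENAI_API_KEY` variable name:

```bash
# Method 1 (CLI): verify the CLI is installed and on PATH
command -v openai || echo "openai CLI not found on PATH"

# Method 2 (API): store the key in the project's env file
echo 'OPENAI_API_KEY=sk-your-key-here' >> .lfg/.env
```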
Google Gemini
API Setup
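A minimal sketch of the API setup, assuming the key is created in Google AI Studio and read from .lfg/.env — `GEMINI_API_KEY` is Google's convention, and the tool may expect `GOOGLE_API_KEY` instead:

```bash
# Add the Gemini API key to the project's env file
echo 'GEMINI_API_KEY=your-key-here' >> .lfg/.env
```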
System AI
Uses system-provided AI tools or local models. No additional setup required.
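For the ollama/* models listed earlier, the only prerequisite is a local Ollama install with the model pulled, for example:

```bash
# Standard Ollama CLI: download a local model and smoke-test it
ollama pull llama3.1
ollama run llama3.1 "Write a one-line commit message for: fix null check in parser"
```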
Model Selection Guide
Claude Models
claude-3-sonnet-20240229
Balanced performance and cost. Best for most development workflows.
claude-3-haiku-20240307
Fastest and most cost-effective. Good for simple commits.
claude-3-opus-20240229
Highest quality analysis. Best for complex, critical commits.
OpenAI Models
gpt-4
High-quality analysis with good reasoning capabilities.
gpt-4-turbo
Faster version of GPT-4 with similar quality.
gpt-3.5-turbo
Fast and cost-effective for simple commit messages.
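To pin one of these models, you point the tool at it in config.yml. The exact keys depend on this tool's schema, so treat `ai.provider` and `ai.model` below as hypothetical illustrations rather than documented names:

```yaml
ai:
  provider: openai       # hypothetical key: anthropic | openai | gemini | system
  model: gpt-4o-mini     # any model name from the lists above
```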
Interactive Model Selection
Use the interactive model picker to explore available models for your provider:
AI Configuration Tips
Timeout Settings
Adjust AI timeout based on your network and model speed
ai.timeout: 45 for slower models or networks
Chunk Size Optimization
Larger chunks provide more context but may hit token limits
ai.chunk_size: 35000 for complex diffs, 20000 for simple changes
Fallback Configuration
Enable fallbacks so commit messages are still generated when the AI request fails
ai.fallbacks.minimal_diff: true and ai.fallbacks.stats_mode: true
Validation Settings
Enable validation to ensure AI responses meet format requirements
ai.validate_on_change: true to catch malformed responses
Cost Optimization
Use faster, cheaper models for simple commits
Switch to claude-3-haiku or gpt-3.5-turbo for routine changes
Offline Fallback
Configure offline templates for when AI is unavailable
Customize ai.offline_template.subject and ai.offline_template.bullets
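Taken together, the settings from these tips live in config.yml. The key names come from the tips and troubleshooting entries in this guide; the nesting and example values are assumptions meant to illustrate the shape, so compare against your generated config:

```yaml
ai:
  timeout: 45                 # raise for slower models or networks
  chunk_size: 35000           # more context per request; lower it for simple changes
  max_prompt_chars: 60000     # example value; reduce it if you hit token limits
  validate_on_change: true    # catch malformed AI responses early
  fallbacks:
    minimal_diff: true        # presumably sends a trimmed diff when the full one is too large
    stats_mode: true          # presumably falls back to diff stats when the AI call fails
  offline_template:           # used when AI is unavailable; structure assumed
    subject: "chore: update project files"
    bullets:
      - "See diff for details"
```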
AI Troubleshooting
Problem: API key authentication failed
Solution: Check your API key in .lfg/.env and ensure it has proper permissions. For CLI methods, re-authenticate with the provider's CLI tool.
Problem: AI request timeout
Solution: Increase ai.timeout in config.yml or reduce ai.chunk_size for faster processing.
Problem: Token limit exceeded
Solution: Reduce ai.max_prompt_chars or ai.chunk_size. Enable ai.fallbacks.minimal_diff for large diffs.
Problem: Invalid AI response format
Solution: Enable ai.validate_on_change to catch format issues. Check AI provider status and model availability.
Problem: CLI tool not found
Solution: Install the provider's CLI tool (claude-cli, openai-cli) and ensure it's in your PATH.
Problem: Rate limiting
Solution: Switch to a different model tier or implement delays between requests. Consider using offline mode for high-frequency commits.