AI Providers Guide

Complete guide to configuring and using different AI providers for intelligent commit message generation.

Supported AI Providers

Claude (Anthropic)

Available Methods:

CLI, API

Setup:

CLI: Install claude-cli and authenticate
API: Set ANTHROPIC_API_KEY in .lfg/.env

Popular Models:

claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022, claude-3-opus-20240229

OpenAI

Available Methods:

CLI, API

Setup:

CLI: Install openai CLI and authenticate
API: Set OPENAI_API_KEY in .lfg/.env

Popular Models:

gpt-4o, gpt-4o-mini, gpt-4-turbo

Google Gemini

Available Methods:

CLI, API

Setup:

CLI: Install gcloud CLI and authenticate
API: Set GEMINI_API_KEY in .lfg/.env

Popular Models:

gemini-1.5-pro, gemini-1.5-flash, gemini-1.0-pro

System/Local

Available Methods:

System

Setup:

System: Use local AI tools or custom scripts

Popular Models:

ollama/llama3.1, ollama/codellama, custom-script
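The API-key methods above all read credentials from the same dotenv file. A combined .lfg/.env might look like the following sketch (the key values are placeholders; keep real keys out of version control):

```ini
# .lfg/.env -- API keys for the providers configured above (placeholder values)
ANTHROPIC_API_KEY=your_anthropic_key_here
OPENAI_API_KEY=your_openai_key_here
GEMINI_API_KEY=your_gemini_key_here
```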

Setup Instructions

Anthropic Claude

Method 1: CLI (Recommended)

Claude CLI Setup
$ npm install -g @anthropic-ai/claude-cli
$ claude auth
$ lfg --set-provider claude --method cli
✅ Provider: claude
✅ Method: cli

Method 2: API

Claude API Setup
$ echo "ANTHROPIC_API_KEY=your_api_key_here" >> .lfg/.env
$ lfg --set-provider claude --method api
$ lfg --model claude-3-sonnet-20240229
✅ Model: claude-3-sonnet-20240229

OpenAI GPT

Method 1: CLI

OpenAI CLI Setup
$ pip install openai-cli
$ openai auth
$ lfg --set-provider openai --method cli
✅ Provider: openai

Method 2: API

OpenAI API Setup
$ echo "OPENAI_API_KEY=your_api_key_here" >> .lfg/.env
$ lfg --set-provider openai --method api --model gpt-4
✅ Provider: openai
✅ Model: gpt-4

Google Gemini

API Setup

Gemini API Setup
$ echo "GEMINI_API_KEY=your_api_key_here" >> .lfg/.env
$ lfg --set-provider gemini --method api --model gemini-pro
✅ Provider: gemini
✅ Model: gemini-pro

System AI

Uses system-provided AI tools or local models. No additional setup required.

System AI Setup
$ lfg --set-provider system --method system
✅ Provider: system
✅ Method: system
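To illustrate what a custom "system" script could do, the sketch below derives a bare-bones commit subject from a diff read on stdin. The generate_subject function and its output format are illustrative only, not part of lfg:

```shell
# Hypothetical custom script for the system provider: build a minimal
# commit subject from a diff. Nothing here is part of lfg itself.
generate_subject() {
  # Count the files touched in the diff read from stdin.
  files=$(grep -c '^diff --git ' || true)
  printf 'chore: update %s files\n' "$files"
}

# Example: feed it a two-file diff.
printf 'diff --git a/x b/x\ndiff --git a/y b/y\n' | generate_subject
# → chore: update 2 files
```

In real use the input would come from git diff --cached rather than printf.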

Model Selection Guide

Claude Models

claude-3-sonnet-20240229

Balanced performance and cost. Best for most development workflows.

claude-3-haiku-20240307

Fastest and most cost-effective. Good for simple commits.

claude-3-opus-20240229

Highest quality analysis. Best for complex, critical commits.

OpenAI Models

gpt-4

High-quality analysis with good reasoning capabilities.

gpt-4-turbo

Faster version of GPT-4 with similar quality.

gpt-3.5-turbo

Fast and cost-effective for simple commit messages.

Interactive Model Selection

Use the interactive model picker to explore available models for your provider:

Interactive Model Selection
$ lfg --pick-model
Enter model id (exact): claude-3-sonnet-20240229
✅ Model: claude-3-sonnet-20240229

AI Configuration Tips

Timeout Settings

Adjust the AI timeout to match your network speed and model latency.

ai.timeout: 45 for slower models or networks

Chunk Size Optimization

Larger chunks provide more context but may hit token limits

ai.chunk_size: 35000 for complex diffs, 20000 for simple changes
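To get a feel for how a chunk size maps onto a given diff, the hedged helper below (not an lfg feature) estimates how many chunks a diff would split into, assuming chunks are measured in bytes, which may not match lfg's internal accounting exactly:

```shell
# Hypothetical helper: estimate chunk count for a diff at a given chunk size.
# Assumes ai.chunk_size is measured in bytes of diff text.
chunks_needed() {
  chunk_size=$1
  bytes=$(wc -c)                                      # size of diff on stdin
  echo $(( (bytes + chunk_size - 1) / chunk_size ))   # ceiling division
}

# A 50,000-byte diff at chunk_size 20000 needs 3 chunks:
printf '%50000s' ' ' | chunks_needed 20000
# → 3
```

In practice you would pipe in git diff --cached instead of the synthetic input.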

Fallback Configuration

Enable fallbacks so commit generation still succeeds when the AI request fails.

ai.fallbacks.minimal_diff: true and ai.fallbacks.stats_mode: true

Validation Settings

Enable validation to ensure AI responses meet format requirements

ai.validate_on_change: true to catch malformed responses

Cost Optimization

Use faster, cheaper models for simple commits

Switch to claude-3-haiku or gpt-3.5-turbo for routine changes

Offline Fallback

Configure offline templates for when AI is unavailable

Customize ai.offline_template.subject and ai.offline_template.bullets
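Taken together, the tips above touch a handful of keys in config.yml. A sketch applying them might look like this; the nesting is inferred from the dotted key names, and the values (especially the offline_template formats) are assumptions, not documented defaults:

```yaml
# config.yml sketch -- nesting and value formats are assumed, not authoritative
ai:
  timeout: 45                # seconds; raise for slower models or networks
  chunk_size: 20000          # smaller for simple changes, larger for complex diffs
  validate_on_change: true   # catch malformed AI responses
  fallbacks:
    minimal_diff: true
    stats_mode: true
  offline_template:          # used when AI is unavailable; formats are guesses
    subject: "chore: update files"
    bullets: true
```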

AI Troubleshooting

Problem: API key authentication failed

Solution: Check your API key in .lfg/.env and ensure it has proper permissions. For CLI methods, re-authenticate with the provider's CLI tool.

Problem: AI request timeout

Solution: Increase ai.timeout in config.yml or reduce ai.chunk_size for faster processing.

Problem: Token limit exceeded

Solution: Reduce ai.max_prompt_chars or ai.chunk_size. Enable ai.fallbacks.minimal_diff for large diffs.

Problem: Invalid AI response format

Solution: Enable ai.validate_on_change to catch format issues. Check AI provider status and model availability.

Problem: CLI tool not found

Solution: Install the provider's CLI tool (claude-cli, openai-cli) and ensure it's in your PATH.

Problem: Rate limiting

Solution: Switch to a different model tier or implement delays between requests. Consider using offline mode for high-frequency commits.
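One way to implement the delay suggestion is a small retry wrapper with exponential backoff. The retry_with_backoff function below is illustrative, not an lfg feature:

```shell
# Hypothetical retry wrapper: back off exponentially when a command fails,
# e.g. a rate-limited lfg run. Not part of lfg itself.
retry_with_backoff() {
  delay=1
  for attempt in 1 2 3 4; do
    if "$@"; then return 0; fi
    sleep "$delay"
    delay=$(( delay * 2 ))   # 1s, 2s, 4s between attempts
  done
  return 1
}

# Usage against a rate-limited run would be: retry_with_backoff lfg
```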