# AI Providers

Choose and configure AI providers for LazyPR.

## Overview
LazyPR uses AI providers to analyze your commits and generate professional PR descriptions. Choose the provider that best fits your needs based on speed, cost, and availability.
## Provider Comparison
| Provider | Speed | Free Tier | Reliability | Model Options | Setup | Status |
|---|---|---|---|---|---|---|
| Groq (Default) | Fast (1-3s) | Generous | High | Multiple | Easy | ✅ Available |
| Cerebras | Ultra-fast (<1s) | Varies | High | Multiple | Easy | ✅ Available |
| OpenAI | Fast (2-4s) | Limited | High | GPT-5 Pro, Mini | Easy | 🚧 Soon |
| Anthropic (Claude) | Fast (2-4s) | Limited | High | Opus, Sonnet, Haiku | Easy | 🚧 Soon |
| Google AI (Gemini) | Fast (2-4s) | Generous | High | Gemini Pro, Flash | Easy | 🚧 Soon |
| Ollama | Variable | Free (Local) | High | Llama, Mistral, etc. | Medium | 🚧 Soon |
| LM Studio | Variable | Free (Local) | High | Various quantized | Medium | 🚧 Soon |
Want to see a specific provider added sooner? Let us know on GitHub!
## Getting Started

### Setting Up Groq (Default)

#### Get Your API Key
- Visit console.groq.com
- Sign up or log in to your account
- Navigate to API Keys section
- Create a new API key
#### Configure LazyPR

```bash
lazypr config set GROQ_API_KEY=gsk_your_api_key_here
```

Groq offers generous free tier limits, perfect for getting started.
### Setting Up Cerebras

#### Get Your API Key
- Visit the Cerebras platform
- Create an account and generate an API key
#### Configure LazyPR

```bash
# Set the API key
lazypr config set CEREBRAS_API_KEY=your_cerebras_key_here

# Switch to Cerebras provider
lazypr config set PROVIDER=cerebras
```

### Switching Providers
Change between providers anytime:
```bash
# Use Groq
lazypr config set PROVIDER=groq

# Use Cerebras
lazypr config set PROVIDER=cerebras
```

Don't forget to set the appropriate API key for your chosen provider (see Getting Started above).
## Model Selection
Configure which AI model to use:
```bash
lazypr config set MODEL=llama-3.3-70b
```

Default model: `llama-3.3-70b`
Different providers support different models. Check your provider's documentation for available options.
The default model provides an excellent balance of quality and speed for PR generation.
## Token Usage Tracking

Monitor how many tokens your requests consume with the `-u` flag:

```bash
lazypr main -u
```

Output:

```
Generated PR Content
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[PR content here]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Token Usage: 347 tokens
```

This helps you:
- Track API costs
- Optimize commit message lengths
- Stay within rate limits
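For lightweight cost tracking, the usage line can be scraped from saved output. A minimal sketch, assuming the `Token Usage: N tokens` format shown above (the `token_log.txt` file name is arbitrary; in practice you would capture the real output, e.g. with `lazypr main -u | tee pr_output.txt`):

```shell
# Sample line mirroring the output format shown above
sample="Token Usage: 347 tokens"

# Extract the first run of digits from the usage line
tokens=$(printf '%s\n' "$sample" | grep -o '[0-9][0-9]*' | head -n 1)

# Append a dated entry to a simple usage log
echo "$(date +%F) $tokens tokens" >> token_log.txt
echo "Logged $tokens tokens"
```

A few lines of logging like this is usually enough to spot unusually expensive runs before they add up.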
## Performance Considerations

### Response Time

Both available providers (Groq and Cerebras) typically respond in 1-3 seconds for standard PRs. Response time depends on:
- Number of commits
- Commit message length
- Provider load
- Network latency
### Quality vs Speed
The default model (llama-3.3-70b) balances quality and speed. If you need faster responses and don't mind slightly shorter descriptions, you might experiment with smaller models (if your provider supports them).
### Rate Limits
Each provider has rate limits:
- Free tiers: Typically sufficient for individual developers (dozens of PRs per day)
- Paid tiers: Higher limits for teams and heavy users
Check your provider's documentation for specific limits.
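When a run fails due to rate limiting, retrying with a doubling backoff is a common pattern. A sketch of the loop, with `generate_pr` as a hypothetical stand-in for `lazypr main` (stubbed here to fail twice so the retries are visible — the backoff schedule is illustrative, not documented LazyPR behavior):

```shell
# Stand-in for `lazypr main`, stubbed to fail twice (simulating rate limits)
fails=2
generate_pr() {
  if [ "$fails" -gt 0 ]; then
    fails=$((fails - 1))
    return 1          # simulate a rate-limit failure
  fi
  echo "PR generated"
}

# Retry with doubling waits: 2s, 4s, 8s, ...
attempt=0
until generate_pr; do
  attempt=$((attempt + 1))
  wait=$((1 << attempt))
  echo "retry $attempt: sleeping ${wait}s"
  sleep "$wait"
done
```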
## Cost Optimization

### Use Commit Filtering
Enable filtering to reduce token usage by excluding irrelevant commits:
```bash
lazypr config set FILTER_COMMITS=true  # Default
```

Learn more in the Commit Filtering guide.
### Write Concise Commit Messages
Shorter, clearer commit messages consume fewer tokens while maintaining quality:
Good (concise):

```
feat: add OAuth login
```

Less optimal (verbose):

```
feat: implemented a complete OAuth 2.0 authentication system with support for multiple providers including detailed error handling and logging
```

### Track Usage
Regularly use the `-u` flag to monitor consumption and adjust your usage patterns.
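To get a feel for how message length translates into cost, a rough rule of thumb (an assumption here; real tokenizers vary by model) is about 4 characters per token. Comparing the two commit messages above:

```shell
# Rough token estimate using the ~4-characters-per-token heuristic
# (an approximation only; actual token counts depend on the model)
estimate() {
  msg="$1"
  echo $(( (${#msg} + 3) / 4 ))
}

concise="feat: add OAuth login"
verbose="feat: implemented a complete OAuth 2.0 authentication system with support for multiple providers including detailed error handling and logging"

echo "concise: ~$(estimate "$concise") tokens"
echo "verbose: ~$(estimate "$verbose") tokens"
```

The verbose message costs several times as many tokens on every run without improving the generated PR.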
## Troubleshooting
**Invalid API key**: Double-check that your key is correctly copied and hasn't expired. Generate a new key from your provider's console if needed.

**Rate limiting**: Free tier accounts may have usage limits. Upgrade your plan or wait for the limit to reset.

**Slow responses**: Check your network connection. If the problem persists, try switching providers.

**Model not found**: Verify the model name is correct for your chosen provider. Reset to the default with:

```bash
lazypr config set MODEL=llama-3.3-70b
```