
AI Providers

Choose and configure AI providers for LazyPR

Overview

LazyPR uses AI providers to analyze your commits and generate professional PR descriptions. Choose the provider that best fits your needs based on speed, cost, and availability.

Provider Comparison

| Provider | Speed | Free Tier | Reliability | Model Options | Setup | Status |
|---|---|---|---|---|---|---|
| Groq (Default) | Fast (1-3s) | Generous | High | Multiple | Easy | ✅ Available |
| Cerebras | Ultra-fast (<1s) | Varies | High | Multiple | Easy | ✅ Available |
| OpenAI | Fast (2-4s) | Limited | High | GPT-5 Pro, Mini | Easy | 🚧 Soon |
| Anthropic (Claude) | Fast (2-4s) | Limited | High | Opus, Sonnet, Haiku | Easy | 🚧 Soon |
| Google AI (Gemini) | Fast (2-4s) | Generous | High | Gemini Pro, Flash | Easy | 🚧 Soon |
| Ollama | Variable | Free (Local) | High | Llama, Mistral, etc. | Medium | 🚧 Soon |
| LM Studio | Variable | Free (Local) | High | Various quantized | Medium | 🚧 Soon |

Want to see a specific provider added sooner? Let us know on GitHub!

Getting Started

Setting Up Groq (Default)

Get Your API Key

  1. Visit console.groq.com
  2. Sign up or log in to your account
  3. Navigate to API Keys section
  4. Create a new API key

Configure LazyPR

lazypr config set GROQ_API_KEY=gsk_your_api_key_here

Groq offers a generous free tier, which makes it a good choice for getting started.
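
Once the key is stored, you can generate your first PR description. A minimal example, assuming your feature branch is checked out and main is the target branch:

# Generate a PR description for your current branch against main
lazypr main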

Setting Up Cerebras

Get Your API Key

  1. Visit the Cerebras platform
  2. Create an account and generate an API key

Configure LazyPR

# Set the API key
lazypr config set CEREBRAS_API_KEY=your_cerebras_key_here

# Switch to Cerebras provider
lazypr config set PROVIDER=cerebras

Switching Providers

Switch between providers at any time:

lazypr config set PROVIDER=groq
lazypr config set PROVIDER=cerebras

Don't forget to set the appropriate API key for your chosen provider (see Getting Started section above).
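
For example, a complete switch to Cerebras (assuming you already have a Cerebras key) looks like this:

# Point LazyPR at Cerebras and supply its key
lazypr config set PROVIDER=cerebras
lazypr config set CEREBRAS_API_KEY=your_cerebras_key_here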

Model Selection

Configure which AI model to use:

lazypr config set MODEL=llama-3.3-70b

Default model: llama-3.3-70b

Different providers support different models. Check your provider's documentation for available options.

The default model provides an excellent balance of quality and speed for PR generation.

Token Usage Tracking

Monitor how many tokens your requests consume with the -u flag:

lazypr main -u

Output:

Generated PR Content
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

[PR content here]

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Token Usage: 347 tokens

This helps you:

  • Track API costs
  • Optimize commit message lengths
  • Stay within rate limits

Performance Considerations

Response Time

Both available providers (Groq and Cerebras) typically respond in 1-3 seconds for standard PRs. Response time depends on:

  • Number of commits
  • Commit message length
  • Provider load
  • Network latency

Quality vs Speed

The default model (llama-3.3-70b) balances quality and speed. If you need faster responses and don't mind slightly shorter descriptions, you might experiment with smaller models (if your provider supports them).
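
For example, if your provider offers a smaller Llama variant, you could point LazyPR at it. The model name below is illustrative; check your provider's model list for the exact identifier:

# Hypothetical smaller model; confirm the exact name with your provider
lazypr config set MODEL=llama-3.1-8b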

Rate Limits

Each provider has rate limits:

  • Free tiers: Typically sufficient for individual developers (dozens of PRs per day)
  • Paid tiers: Higher limits for teams and heavy users

Check your provider's documentation for specific limits.

Cost Optimization

Use Commit Filtering

Enable filtering to reduce token usage by excluding irrelevant commits:

lazypr config set FILTER_COMMITS=true  # Default

Learn more in the Commit Filtering guide.

Write Concise Commit Messages

Shorter, clearer commit messages consume fewer tokens while maintaining quality:

Good (concise):

feat: add OAuth login

Less optimal (verbose):

feat: implemented a complete OAuth 2.0 authentication system with support for multiple providers including detailed error handling and logging

Track Usage

Regularly use the -u flag to monitor consumption and adjust your usage patterns.

Troubleshooting

Invalid API Key Error: Double-check your key is correctly copied and hasn't expired. Generate a new key from your provider's console if needed.
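
For example, after generating a fresh Groq key, simply overwrite the stored one:

# Replace the stored key with the newly generated one
lazypr config set GROQ_API_KEY=gsk_new_api_key_here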

Rate Limiting: Free tier accounts may have usage limits. Upgrade your plan or wait for the limit to reset.

Slow Responses: Check your network connection. If the problem persists, try switching providers.

Model Not Found: Verify the model name is correct for your chosen provider. Reset to the default with:

lazypr config set MODEL=llama-3.3-70b
