Overview
AI-powered features in VIJ include:
- Error Summaries: concise, human-readable explanations of what went wrong
- Root Cause Analysis: intelligent analysis of why the error occurred
- Fix Suggestions: actionable recommendations to resolve the issue
- Pattern Detection: identifies recurring patterns across error groups
Getting Started
1. Get a Gemini API key
- Visit Google AI Studio
- Sign in with your Google account
- Click Get API Key
- Create a new API key
- Copy the key (starts with `AIza...`)
2. Configure VIJ Admin
Add the API key to your `.env.local` file:
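A minimal `.env.local` entry (the value shown is a placeholder; `GEMINI_API_KEY` is the variable name VIJ reads):

```
# .env.local
GEMINI_API_KEY=AIza...your-key-here
```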
No code changes required. VIJ automatically detects the API key and enables AI features.
3. Restart the application
AI features are now enabled! Click any error in the dashboard to see AI analysis.
4. Verify AI is working
- Navigate to any error detail page
- Look for the “AI Analysis” section
- You should see:
- AI Summary
- Possible Cause
- Suggested Fix
AI analysis may take 2-5 seconds on first load. Results are cached for performance.
AI Features in Detail
AI Summary
A concise explanation of the error in plain language. How it works:
- VIJ sends the error message, stack trace, and context to Gemini
- Gemini analyzes the error
- Gemini returns a human-readable summary
- The summary is cached in MongoDB for 7 days
Possible Cause
Root cause analysis explaining why the error occurred. The analysis typically covers:
- Technical reasons for the error
- Common scenarios that trigger this error
- Dependencies or external factors
- Code-level explanations
Suggested Fix
Actionable recommendations to resolve the error. Suggestions may include:
- Code examples
- Best practices
- Error handling improvements
- Validation techniques
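The three analysis fields can be modeled with a small type. The sketch below is illustrative: the field names and the JSON-parsing helper are assumptions, not VIJ's actual schema.

```typescript
// Hypothetical shape for one AI analysis result; VIJ's real schema may differ.
interface AiAnalysis {
  summary: string;       // AI Summary
  possibleCause: string; // Possible Cause
  suggestedFix: string;  // Suggested Fix
}

// Defensive parser for a model response expected to be JSON with those
// three keys; returns null on malformed output instead of throwing.
function parseAnalysis(raw: string): AiAnalysis | null {
  try {
    const obj = JSON.parse(raw);
    if (
      typeof obj.summary === "string" &&
      typeof obj.possibleCause === "string" &&
      typeof obj.suggestedFix === "string"
    ) {
      return {
        summary: obj.summary,
        possibleCause: obj.possibleCause,
        suggestedFix: obj.suggestedFix,
      };
    }
    return null;
  } catch {
    return null;
  }
}
```

Validating the response shape before rendering keeps a malformed model reply from breaking the error detail page.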
AI Analysis Workflow
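The end-to-end flow (check the cache, call Gemini, cache the result for 7 days, fall back gracefully on failure) can be sketched as follows. The model call is injected as a callback and an in-memory `Map` stands in for the MongoDB cache; names are illustrative, not VIJ's actual implementation.

```typescript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

interface CacheEntry { summary: string; cachedAt: number; }
const cache = new Map<string, CacheEntry>(); // stand-in for the MongoDB cache

async function analyzeError(
  fingerprint: string,
  prompt: string,
  callGemini: (prompt: string) => Promise<string>, // injected model call
): Promise<string> {
  // 1. Serve a cached analysis if it is still fresh (7-day TTL).
  const hit = cache.get(fingerprint);
  if (hit && Date.now() - hit.cachedAt < SEVEN_DAYS_MS) {
    return hit.summary;
  }
  try {
    // 2. Otherwise call the model and cache the result.
    const summary = await callGemini(prompt);
    cache.set(fingerprint, { summary, cachedAt: Date.now() });
    return summary;
  } catch {
    // 3. Graceful degradation: the dashboard still renders without AI.
    return "AI analysis unavailable";
  }
}
```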
Caching Strategy
AI analysis is cached to reduce API calls and improve performance:
- Cache duration: 7 days
- Cache key: error fingerprint + Gemini model version
- Cache invalidation: manual or on model update

Benefits:
- Faster load times for repeated errors
- Reduced API costs
- Offline access to previous analyses
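The cache key described above (error fingerprint plus model version) might be derived like this; the function name is illustrative, not VIJ's actual code:

```typescript
import { createHash } from "node:crypto";

// Cache key = error fingerprint + Gemini model version, hashed so that
// upgrading the model automatically misses old cache entries.
function analysisCacheKey(fingerprint: string, modelVersion: string): string {
  return createHash("sha256")
    .update(`${fingerprint}:${modelVersion}`)
    .digest("hex");
}
```

Folding the model version into the key means a model upgrade acts as an implicit cache invalidation, matching the "manual or on model update" policy above.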
Customizing AI Prompts
Customize how VIJ queries Gemini for better results.

Custom Prompt Template
Edit `lib/gemini.ts` to customize prompts:
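A hedged sketch of what a customized prompt builder in `lib/gemini.ts` could look like, folding in domain context and one few-shot example. The function name, field names, and the Next.js/MongoDB context string are assumptions for illustration.

```typescript
interface ErrorRecord {
  name: string;
  message: string;
  stack: string;
  environment: string;
}

// Domain-specific context (assumption: adjust to your own stack).
const DOMAIN_CONTEXT =
  "This is a Next.js application backed by MongoDB; prefer fixes idiomatic to that stack.";

// One few-shot example nudges the model toward the desired answer shape.
const FEW_SHOT =
  "Example:\nError: TypeError: Cannot read properties of undefined\n" +
  "Fix: Guard the access with optional chaining (obj?.field).";

function buildPrompt(err: ErrorRecord): string {
  return [
    "You are an expert debugging assistant.",
    DOMAIN_CONTEXT,
    FEW_SHOT,
    `Error name: ${err.name}`,
    `Message: ${err.message}`,
    `Environment: ${err.environment}`,
    `Stack trace:\n${err.stack}`,
    "Respond with a short summary, a possible cause, and a suggested fix.",
  ].join("\n\n");
}
```

The same builder covers all three customization techniques below: the base template, domain-specific context, and few-shot learning.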
Domain-Specific Prompts
Add custom context for your application.

Few-Shot Learning
Provide examples to improve AI responses.

AI Model Configuration

Model Selection
VIJ uses Gemini 1.5 Flash by default. You can configure different models in `lib/gemini.ts`:
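Model selection might be routed by error complexity, matching the comparison table below. The routing rule itself is an assumption for illustration, not VIJ's built-in behavior.

```typescript
// Map error complexity to a Gemini model name (tiers mirror the table).
type Complexity = "simple" | "standard" | "complex";

function pickModel(complexity: Complexity): string {
  switch (complexity) {
    case "complex":
      return "gemini-1.5-pro";   // excellent quality for complex errors
    case "simple":
      return "gemini-1.0-pro";   // low cost, adequate for simple errors
    default:
      return "gemini-1.5-flash"; // fast, low-cost default
  }
}

// Usage with the @google/generative-ai SDK would look roughly like:
//   const model = genAI.getGenerativeModel({ model: pickModel("complex") });
```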
| Model | Speed | Quality | Cost | Best For |
|---|---|---|---|---|
| Gemini 1.5 Flash | Very Fast | Good | Low | High-volume analysis |
| Gemini 1.5 Pro | Fast | Excellent | Medium | Complex errors |
| Gemini 1.0 Pro | Medium | Good | Low | Simple errors |
Model Parameters
Fine-tune generation parameters:
- temperature: higher = more creative, lower = more focused
  - Use 0.3-0.5 for technical analysis
  - Use 0.7-0.9 for suggestions
- maxOutputTokens: controls response length
  - Use 1024 for summaries
  - Use 2048 for detailed analysis
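These parameters map onto the SDK's `generationConfig` (as accepted by `getGenerativeModel()` in `@google/generative-ai`). The exact values below follow the guidance above but are suggestions, not VIJ defaults:

```typescript
// Focused settings for technical analysis (summary, root cause).
const summaryConfig = {
  temperature: 0.4,      // 0.3-0.5: focused, deterministic analysis
  maxOutputTokens: 1024, // enough for a concise summary
};

// Looser settings for fix suggestions, which benefit from creativity.
const suggestionConfig = {
  temperature: 0.8,      // 0.7-0.9: more creative suggestions
  maxOutputTokens: 2048, // room for detailed analysis and code examples
};
```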
Advanced AI Features
Context-Aware Analysis
Include relevant context for better analysis.

Multi-Step Analysis
Perform deeper analysis with multiple AI calls.

Batch Analysis
Analyze multiple errors together for pattern detection.

API Usage and Costs
Rate Limits
Google Gemini free-tier limits:
- Requests per minute: 60
- Requests per day: 1,500
- Tokens per minute: 32,000
Cost Estimation
Gemini 1.5 Flash (free tier):
- Input: free up to 1M tokens/day
- Output: free up to 1M tokens/day
- Per error analysis: ~1,500 tokens
- Daily budget: ~666 error analyses
- Monthly cost: $0 (within the free tier)

Gemini 1.5 Flash (paid tier):
- Input: $0.075 per 1M tokens
- Output: $0.30 per 1M tokens
- 10,000 errors/month: ~$5-10
Monitoring Usage
Track API usage in VIJ Admin.

Error Handling
Handle AI failures gracefully: if a Gemini call fails or times out, the dashboard should still render the error without the AI sections.

Privacy and Security
Data Privacy
VIJ sends only error information to Gemini:
- Error name, message, stack trace
- Application metadata (appId, environment)
- User-provided metadata

VIJ does not send:
- User PII (unless included in metadata)
- Authentication tokens
- Database contents
- Source code (unless present in the stack trace)
Opt-Out
Disable AI features without affecting core functionality by removing `GEMINI_API_KEY` from `.env.local`.

Data Retention
Control how long AI responses are cached.

Troubleshooting
AI analysis not appearing
Check:
- `GEMINI_API_KEY` is set in `.env.local`
- The API key is valid (starts with `AIza`)
- Restart the dev server after adding the key
- Check the browser console for errors
- Verify your Gemini API quota
Rate limit errors
Error: `429 Too Many Requests`
Solutions:
- Increase the cache duration to reduce API calls
- Upgrade to the paid tier for higher limits
- Implement request queuing
- Only analyze high-priority errors
Poor quality responses
Issue: AI suggestions are not helpful
Improvements:
- Add more context to prompts
- Use Gemini Pro instead of Flash
- Include code snippets in context
- Provide domain-specific information
- Use few-shot examples
Slow AI responses
Issue: Analysis takes too long
Solutions:
- Use Gemini Flash (fastest model)
- Reduce maxOutputTokens
- Enable caching
- Pre-generate analysis for common errors
Best Practices
Cache AI responses aggressively
Provide rich context
Monitor and optimize costs
- Track API usage daily
- Set up billing alerts
- Cache common errors
- Use selective analysis
Validate AI responses
- Don’t blindly trust suggestions
- Review code examples
- Test suggested fixes
- Have humans verify critical fixes