Performance Optimization: Managing Request Sizes and Batching
Understanding Request Sizes
Request size directly affects analysis speed and cost: larger requests take longer to process and, with cloud providers, cost more per request. Understanding how to manage request sizes helps you optimize performance.
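As a rough mental model of the size-to-cost relationship, the sketch below estimates tokens from character count (the common ~4 characters per token heuristic) and applies an illustrative per-token price. The function name and the price are assumptions, not values from the plugin or any specific provider.

```python
def estimate_request_cost(text: str, usd_per_1k_tokens: float = 0.01) -> tuple[int, float]:
    """Rough estimate only: ~4 characters per token is a common heuristic,
    and the price here is illustrative, not any provider's real rate."""
    tokens = max(1, len(text) // 4)
    return tokens, tokens / 1000 * usd_per_1k_tokens

# A diff of a few hundred characters costs a fraction of a cent;
# a full multi-file commit can be orders of magnitude larger.
small_diff = "- old line\n+ new line\n" * 10
tokens, cost = estimate_request_cost(small_diff)
print(tokens, round(cost, 5))
```

The exact numbers do not matter; the point is that cost scales linearly with what you send, so every size-management strategy below translates directly into faster, cheaper analysis.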
Size Management Strategies
Use STRICT Diff Scope
STRICT scope includes only changed lines, significantly reducing request size while maintaining analysis quality for most cases.
Selective Full Content
Only include full file content when necessary. Diffs alone are often sufficient and much smaller.
Limit Related Context
Configure related context caps appropriately. More context improves analysis quality but increases request size, so find the balance that fits your workflow.
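Capping related context typically means keeping the most relevant snippets until a size budget is exhausted. The sketch below shows that idea, assuming the snippets are already sorted by relevance; the function name and cap are hypothetical.

```python
def cap_related_context(snippets: list[str], max_chars: int) -> list[str]:
    """Greedily keep related snippets (assumed pre-sorted by relevance)
    until the character cap would be exceeded."""
    kept, used = [], 0
    for snippet in snippets:
        if used + len(snippet) > max_chars:
            break  # stop at the first snippet that would bust the budget
        kept.append(snippet)
        used += len(snippet)
    return kept

snippets = ["def helper(): ...", "class Service: ...", "CONFIG = {...}"]
print(cap_related_context(snippets, max_chars=40))  # keeps the first two
```

A greedy cutoff like this keeps the highest-value context while guaranteeing the request never exceeds the budget you set.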
Intelligent Batching
The plugin automatically batches large commits, but you can optimize this:
- Keep commits focused (fewer files = faster analysis)
- Trust the batching system to handle large commits
- Review results from all batches
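The batching idea above can be sketched as grouping changed files into batches that stay under a size cap. This is an assumption about how such batching might work in general, not the plugin's actual algorithm; the names and cap are illustrative.

```python
def batch_files(files: dict[str, int], max_batch_chars: int) -> list[list[str]]:
    """Group changed files into batches whose combined size stays under a cap;
    a single file larger than the cap still gets a batch of its own."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for name, size in files.items():
        if current and used + size > max_batch_chars:
            batches.append(current)  # close the current batch before it overflows
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        batches.append(current)
    return batches

files = {"a.py": 3000, "b.py": 2500, "c.py": 4000, "d.py": 500}
print(batch_files(files, max_batch_chars=6000))
# → [['a.py', 'b.py'], ['c.py', 'd.py']]
```

This is also why focused commits analyze faster: fewer files means fewer batches, and fewer batches means fewer round trips to the provider.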
Provider Selection
For Speed
Cloud providers are generally faster than local options. Use them when speed is important.
For Cost
Local providers have no API costs. Use them when cost is a concern and speed is acceptable.
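The speed/cost trade-off above reduces to a simple rule of thumb, sketched below. The provider labels are generic placeholders, not the plugin's actual configuration values.

```python
def choose_provider(priority: str) -> str:
    """Pick a provider type by what matters most; labels are illustrative."""
    if priority == "speed":
        return "cloud"  # generally faster
    if priority == "cost":
        return "local"  # no API fees, acceptable when speed is less critical
    raise ValueError(f"unknown priority: {priority}")

print(choose_provider("speed"))  # cloud
print(choose_provider("cost"))   # local
```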
Model Selection
Standard Models
Use standard models (not Think Harder) for routine work. They're faster and sufficient for most cases.
Think Harder Selectively
Reserve Think Harder mode for important changes where the extra depth is valuable.
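The same selectivity can be expressed as a default-plus-escalation rule: standard mode unless the change warrants extra depth. The mode labels below are illustrative placeholders, not the plugin's real identifiers.

```python
def choose_model(change_is_critical: bool) -> str:
    """Default to the standard model; escalate to the deeper 'Think Harder'
    mode only for high-stakes changes. Labels are illustrative."""
    return "think-harder" if change_is_critical else "standard"

print(choose_model(False))  # routine work: standard, faster
print(choose_model(True))   # important change: extra depth is worth the wait
```

Making standard mode the default keeps routine analysis fast, while the escalation path stays one decision away for the changes that matter.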
Conclusion
Performance optimization requires balancing analysis quality with speed and cost. By managing request sizes, using appropriate models, and trusting the batching system, you can get fast, efficient analysis.
Ready to optimize? Install AI Diff Review and configure settings for optimal performance.