---
description: Optimize code for performance - identify bottlenecks and suggest improvements
allowed-tools: Read(*), Grep(*), Glob(*), Bash(*)
argument-hint:
---
# Optimize Command

Analyze and optimize code for better performance.
## Technology Adaptation

**Configuration Source**: CLAUDE.md

Consult CLAUDE.md for the following (a hypothetical excerpt is shown below):

- **Performance Tools**: Profilers, benchmarking tools
- **Performance Targets**: Expected response times, throughput
- **Infrastructure**: Deployment constraints affecting performance
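The exact layout of CLAUDE.md varies by project. As a purely hypothetical illustration, the section this command would consult might look like:

```markdown
## Performance
- **Profiling**: py-spy for CPU, memray for memory allocations
- **Benchmarks**: pytest-benchmark suite under tests/perf/
- **Targets**: p95 API latency < 200 ms; nightly batch < 10 min
- **Infrastructure**: one 2-vCPU container per service, no GPU
```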
## Instructions

1. **Identify Target**
   - If $ARGUMENTS provided: Focus on that file/function
   - Otherwise: Ask user what needs optimization
2. **Analyze Performance**
   - Read CLAUDE.md for performance requirements
   - Identify performance bottlenecks:
     - Inefficient algorithms (O(n²) vs O(n))
     - Unnecessary computations
     - Database N+1 queries
     - Missing indexes
     - Excessive memory allocation
     - Blocking operations
     - Large file/data processing
3. **Propose Optimizations**
   - Suggest algorithmic improvements
   - Recommend caching strategies
   - Propose database query optimization
   - Suggest async/parallel processing
   - Recommend lazy loading
   - Propose memoization for expensive calculations (see the sketch after this list)
4. **Provide Implementation**
   - Show before/after code comparison
   - Estimate performance improvement
   - Note any trade-offs (memory vs speed, complexity vs performance)
   - Ensure changes maintain correctness
   - Add performance tests if possible
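As an illustration of the before/after comparison step, here is a minimal memoization sketch in Python; the `fib` functions are hypothetical stand-ins for any expensive, repeatable computation:

```python
from functools import lru_cache

# Before: naive recursion recomputes the same subproblems - O(2^n) calls.
def fib_slow(n: int) -> int:
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

# After: caching each result reduces the work to O(n) distinct calls.
# Trade-off: the cache keeps every computed result in memory.
@lru_cache(maxsize=None)
def fib_fast(n: int) -> int:
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

# Correctness is maintained: both versions agree on the same input.
assert fib_slow(20) == fib_fast(20) == 6765
```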
## Common Optimization Patterns

### Algorithm Optimization

- Replace nested loops with hash maps (O(n²) → O(n)) (sketched below)
- Use binary search instead of linear search (O(n) → O(log n))
- Apply dynamic programming for recursive problems
- Use efficient data structures (sets vs arrays for lookups)
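A minimal sketch of the first pattern, assuming a hypothetical `find_common` helper; the set is the hash-based lookup structure:

```python
# Before: O(n*m) - `x in b` performs a linear scan for every element of `a`.
def find_common_slow(a: list[int], b: list[int]) -> list[int]:
    return [x for x in a if x in b]

# After: O(n + m) - build a hash-based set once, then do O(1) membership checks.
# Trade-off: the set costs O(m) extra memory.
def find_common_fast(a: list[int], b: list[int]) -> list[int]:
    b_set = set(b)
    return [x for x in a if x in b_set]

assert find_common_slow([1, 2, 3], [2, 3, 4]) == find_common_fast([1, 2, 3], [2, 3, 4]) == [2, 3]
```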
### Database Optimization

- Add indexes for frequent queries
- Use eager loading to prevent N+1 queries (sketched below)
- Implement pagination for large datasets
- Use database-level aggregations
- Cache query results
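A sketch of the N+1 fix using only the standard-library `sqlite3` module; the `users`/`orders` schema is hypothetical, and an ORM's eager-loading option achieves the same effect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 5.0);
""")

# Before: N+1 queries - one for the user list, then one per user.
def totals_slow(conn: sqlite3.Connection) -> dict:
    totals = {}
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        totals[name] = row[0]
    return totals

# After: one query with a JOIN and a database-level aggregation.
def totals_fast(conn: sqlite3.Connection) -> dict:
    rows = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """)
    return dict(rows)

assert totals_slow(conn) == totals_fast(conn) == {"Ada": 35.0, "Grace": 5.0}
```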
### Resource Management

- Implement connection pooling
- Use lazy loading for large objects
- Stream data instead of loading entirely (sketched below)
- Release resources promptly
- Use async operations for I/O
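A short sketch of the streaming pattern; the log-scanning task and file path are hypothetical:

```python
# Before: loads the entire file into memory - risky for multi-GB logs.
def count_errors_slow(path: str) -> int:
    with open(path) as f:
        lines = f.readlines()
    return sum(1 for line in lines if "ERROR" in line)

# After: iterates lazily, holding only one line in memory at a time.
def count_errors_fast(path: str) -> int:
    with open(path) as f:
        return sum(1 for line in f if "ERROR" in line)
```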
## MCP Server Usage

### Serena MCP

**Code Navigation**:

- `find_symbol` - Locate performance-critical code sections
- `find_referencing_symbols` - Understand where slow code is called
- `get_symbols_overview` - Identify hot paths and complexity
- `search_for_pattern` - Find inefficient patterns across the codebase
**Persistent Memory** (stored in `.serena/memories/`):

- Use `write_memory` to store optimization findings:
  - "optimization-algorithm-[function-name]"
  - "optimization-database-[query-type]"
  - "lesson-performance-[component]"
  - "pattern-bottleneck-[issue-type]"
- Use `read_memory` to recall past performance issues and solutions
- Use `list_memories` to review optimization history
### Memory MCP (Knowledge Graph)

**Temporary Context** (in-memory, cleared after session):

- Use `create_entities` for bottlenecks being analyzed
- Use `create_relations` to map performance dependencies
- Use `add_observations` to document performance metrics

**Note**: After optimization, store successful strategies in Serena memory.
### Context7 MCP

- Use `get-library-docs` for framework-specific performance best practices
### Other MCP Servers

- **sequential-thinking**: For complex optimization reasoning
## Output Format

````markdown
## Performance Optimization Report

### Target: [File/Function]

### Current Performance
- **Complexity**: [Big O notation]
- **Estimated Time**: [for typical inputs]
- **Bottlenecks**: [Identified issues]
### Proposed Optimizations

#### Optimization 1: [Name]
**Type**: [Algorithm/Database/Caching/etc.]
**Impact**: [High/Medium/Low]
**Effort**: [High/Medium/Low]

**Current Code**:
```[language]
[current implementation]
```

**Optimized Code**:
```[language]
[optimized implementation]
```

**Expected Improvement**: [e.g., "50% faster", "O(n) instead of O(n²)"]
**Trade-offs**: [Any downsides or considerations]

#### Optimization 2: [Name]
[...]
### Performance Comparison

| Metric | Before | After | Improvement |
|---|---|---|---|
| Time Complexity | [O(...)] | [O(...)] | [%] |
| Space Complexity | [O(...)] | [O(...)] | [%] |
| Typical Runtime | [ms] | [ms] | [%] |

### Recommendations
- [Priority 1]: Implement [optimization] - [reason]
- [Priority 2]: Consider [optimization] - [reason]
- [Priority 3]: Monitor [metric] - [reason]
````
## Testing Strategy

- Benchmark with typical data sizes (see the `timeit` sketch below)
- Profile before and after
- Test edge cases (empty, large inputs)
- Verify correctness is maintained
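A minimal benchmarking sketch using the standard-library `timeit` module; `slow_version` and `fast_version` are placeholders for the code under test:

```python
import timeit

def slow_version():
    return [x for x in range(1_000) if x in list(range(500))]

def fast_version():
    lookup = set(range(500))
    return [x for x in range(1_000) if x in lookup]

# Verify correctness first, then compare best-of-5 timings.
assert slow_version() == fast_version()
for name, fn in [("before", slow_version), ("after", fast_version)]:
    best = min(timeit.repeat(fn, number=100, repeat=5))
    print(f"{name}: {best:.4f}s per 100 calls")
```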
## Next Steps

- Implement the optimization
- Add performance tests
- Benchmark the results
- Update documentation