---
description: Optimize code for performance - identify bottlenecks and suggest improvements
allowed-tools: Read(*), Grep(*), Glob(*), Bash(*)
argument-hint: file-or-function
---

# Optimize Command

Analyze and optimize code for better performance.

## Technology Adaptation

**Configuration Source**: CLAUDE.md

Consult CLAUDE.md for:

- **Performance Tools**: Profilers, benchmarking tools
- **Performance Targets**: Expected response times, throughput
- **Infrastructure**: Deployment constraints affecting performance

## Instructions

1. **Identify Target**
   - If `$ARGUMENTS` is provided: focus on that file/function
   - Otherwise: ask the user what needs optimization
2. **Analyze Performance**
   - Read CLAUDE.md for performance requirements
   - Identify performance bottlenecks:
     - Inefficient algorithms (O(n²) vs O(n))
     - Unnecessary computations
     - Database N+1 queries
     - Missing indexes
     - Excessive memory allocation
     - Blocking operations
     - Large file/data processing
3. **Propose Optimizations**
   - Suggest algorithmic improvements
   - Recommend caching strategies
   - Propose database query optimization
   - Suggest async/parallel processing
   - Recommend lazy loading
   - Propose memoization for expensive calculations (see the sketch after this list)
4. **Provide Implementation**
   - Show a before/after code comparison
   - Estimate the performance improvement
   - Note any trade-offs (memory vs speed, complexity vs performance)
   - Ensure changes maintain correctness
   - Add performance tests if possible
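
Where step 3 suggests memoization, a minimal TypeScript sketch of the idea (the `memoize` helper and `slowScore` function are hypothetical, and the cache key assumes JSON-serializable arguments):

```typescript
// Minimal memoization sketch (illustrative, not from the codebase):
// caches results of a pure, expensive function keyed by its arguments.
function memoize<T extends (...args: any[]) => any>(fn: T): T {
  const cache = new Map<string, ReturnType<T>>();
  return ((...args: Parameters<T>) => {
    const key = JSON.stringify(args); // assumes JSON-serializable arguments
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key)!;
  }) as T;
}

// Usage: wrap the hot function once, then call it as before.
const slowScore = (n: number): number => {
  let total = 0;
  for (let i = 0; i < n * 1_000_000; i++) total += Math.sqrt(i);
  return total;
};
const fastScore = memoize(slowScore); // repeat calls with the same n hit the cache
```

This trades memory for speed: the cache grows without bound, so long-lived processes may prefer an LRU eviction policy.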

## Common Optimization Patterns

### Algorithm Optimization

- Replace nested loops with hash maps (O(n²) → O(n)); see the sketch below
- Use binary search instead of linear search (O(n) → O(log n))
- Apply dynamic programming for recursive problems
- Use efficient data structures (sets vs arrays for lookups)
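
The nested-loop replacement is usually the biggest win in practice. A hedged TypeScript sketch, with hypothetical `Order`/`Customer` shapes:

```typescript
interface Order { customerId: string; total: number; }
interface Customer { id: string; name: string; }

// O(n²): for each order, linearly scan the customers for a match.
function joinSlow(orders: Order[], customers: Customer[]) {
  return orders.map(o => ({
    ...o,
    name: customers.find(c => c.id === o.customerId)?.name,
  }));
}

// O(n): build a Map once, then do constant-time lookups.
function joinFast(orders: Order[], customers: Customer[]) {
  const byId = new Map(customers.map(c => [c.id, c] as const));
  return orders.map(o => ({ ...o, name: byId.get(o.customerId)?.name }));
}
```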

### Database Optimization

- Add indexes for frequent queries
- Use eager loading to prevent N+1 queries (see the sketch below)
- Implement pagination for large datasets
- Use database-level aggregations
- Cache query results
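
To make the N+1 pattern concrete, a TypeScript sketch; `queryPosts`, `queryAuthor`, and `queryAuthorsByIds` are hypothetical stand-ins for whatever data layer the project actually uses:

```typescript
// Hypothetical query helpers; any real data layer has equivalents.
declare function queryPosts(): Promise<{ id: number; authorId: number }[]>;
declare function queryAuthor(id: number): Promise<{ id: number; name: string }>;
declare function queryAuthorsByIds(ids: number[]): Promise<{ id: number; name: string }[]>;

// N+1: one query for the posts, then one more query per post.
async function loadSlow() {
  const posts = await queryPosts();
  return Promise.all(posts.map(async p => ({
    ...p,
    author: await queryAuthor(p.authorId), // N extra round trips
  })));
}

// Eager: two queries total, joined in memory.
async function loadFast() {
  const posts = await queryPosts();
  const authors = await queryAuthorsByIds([...new Set(posts.map(p => p.authorId))]);
  const byId = new Map(authors.map(a => [a.id, a] as const));
  return posts.map(p => ({ ...p, author: byId.get(p.authorId) }));
}
```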

### Resource Management

- Implement connection pooling
- Use lazy loading for large objects
- Stream data instead of loading it entirely (see the sketch below)
- Release resources promptly
- Use async operations for I/O
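
As one concrete instance of streaming instead of loading a file whole, a Node.js sketch (assuming a line-oriented text file; `countMatches` is a hypothetical helper):

```typescript
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Streaming keeps memory flat regardless of file size,
// unlike fs.readFileSync, which loads the whole file at once.
async function countMatches(path: string, needle: string): Promise<number> {
  const rl = createInterface({
    input: createReadStream(path, { encoding: "utf8" }),
    crlfDelay: Infinity, // treat \r\n as a single line break
  });
  let count = 0;
  for await (const line of rl) {
    if (line.includes(needle)) count++;
  }
  return count; // stream is fully consumed; the file handle is released
}
```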

## MCP Server Usage

### Serena MCP

**Code Navigation**:

- `find_symbol` - Locate performance-critical code sections
- `find_referencing_symbols` - Understand where slow code is called
- `get_symbols_overview` - Identify hot paths and complexity
- `search_for_pattern` - Find inefficient patterns across the codebase

**Persistent Memory** (stored in `.serena/memories/`):

- Use `write_memory` to store optimization findings:
  - "optimization-algorithm-[function-name]"
  - "optimization-database-[query-type]"
  - "lesson-performance-[component]"
  - "pattern-bottleneck-[issue-type]"
- Use `read_memory` to recall past performance issues and solutions
- Use `list_memories` to review optimization history

### Memory MCP (Knowledge Graph)

**Temporary Context** (in-memory, cleared after session):

- Use `create_entities` for bottlenecks being analyzed
- Use `create_relations` to map performance dependencies
- Use `add_observations` to document performance metrics

**Note**: After optimization, store successful strategies in Serena memory.

### Context7 MCP

- Use `get-library-docs` for framework-specific performance best practices

### Other MCP Servers

- `sequential-thinking`: For complex optimization reasoning

## Output Format

## Performance Optimization Report

### Target: [File/Function]

### Current Performance
- **Complexity**: [Big O notation]
- **Estimated Time**: [for typical inputs]
- **Bottlenecks**: [Identified issues]

### Proposed Optimizations

#### Optimization 1: [Name]
**Type**: [Algorithm/Database/Caching/etc.]
**Impact**: [High/Medium/Low]
**Effort**: [High/Medium/Low]

**Current Code**:
```[language]
[current implementation]
```

**Optimized Code**:
```[language]
[optimized implementation]
```

**Expected Improvement**: [e.g., "50% faster", "O(n) instead of O(n²)"]
**Trade-offs**: [Any downsides or considerations]

#### Optimization 2: [Name]

[...]

### Performance Comparison

| Metric           | Before   | After    | Improvement |
|------------------|----------|----------|-------------|
| Time Complexity  | [O(...)] | [O(...)] | [%]         |
| Space Complexity | [O(...)] | [O(...)] | [%]         |
| Typical Runtime  | [ms]     | [ms]     | [%]         |

### Recommendations

1. **[Priority 1]**: Implement [optimization] - [reason]
2. **[Priority 2]**: Consider [optimization] - [reason]
3. **[Priority 3]**: Monitor [metric] - [reason]

### Testing Strategy

- Benchmark with typical data sizes
- Profile before and after (see the harness sketch at the end of this file)
- Test edge cases (empty and very large inputs)
- Verify correctness is maintained

### Next Steps

- Implement optimization
- Add performance tests
- Benchmark results
- Update documentation
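
For the "profile before and after" step in the testing strategy, a crude harness is often enough to rank two implementations. This sketch is hypothetical (`bench`, `joinSlow`, `joinFast` are illustrative names) and uses `performance.now()`, available as a global in modern Node.js and browsers:

```typescript
// Crude micro-benchmark: run a function many times and report the average.
// Fine for order-of-magnitude comparisons; use a real profiler for anything subtle.
function bench(label: string, fn: () => void, iterations = 1_000): void {
  fn(); // warm-up run so JIT compilation doesn't skew the first sample
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${(elapsed / iterations).toFixed(3)} ms/op`);
}

// Usage: compare both implementations on the same input.
// bench("joinSlow", () => joinSlow(orders, customers));
// bench("joinFast", () => joinFast(orders, customers));
```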