Initial commit: Fresh start with current state

Author: Claude Code
Date: 2025-11-06 14:04:48 +01:00
Commit: 15355c35ea
20152 changed files with 1191077 additions and 0 deletions

---
description: Brief description of what this command does (shown in /help)
allowed-tools: Bash(git status:*), Bash(git add:*), Read(*)
# Optional: Explicitly declare which tools this command needs
# Format: Tool(pattern:*) or Tool(*)
# Examples:
# - Bash(git *:*) - Allow all git commands
# - Read(*), Glob(*) - Allow file reading and searching
# - Write(*), Edit(*) - Allow file modifications
argument-hint: [optional-parameter]
# Optional: Hint text showing expected arguments
# Appears in command autocompletion and help
# Examples: [file-path], [branch-name], [message]
disable-model-invocation: false
# Optional: Set to true if this is a simple prompt that doesn't need AI processing
# Use for commands that just display text or run simple scripts
---
# Command Template
Clear, concise instructions for Claude on what to do when this command is invoked.
## Technology Adaptation
**IMPORTANT**: This command adapts to the project's technology stack.
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Before executing, consult CLAUDE.md for:
- **Package Manager**: (npm, NuGet, pip, cargo, Maven) - Use correct install/test/build commands
- **Build Tool**: (dotnet, npm scripts, make, cargo) - Use correct build commands
- **Test Framework**: (xUnit, Jest, pytest, JUnit) - Use correct test commands
- **Language**: (C#, TypeScript, Python, Rust, Java) - Follow syntax conventions
- **Project Structure**: Navigate to correct paths for src, tests, config
## MCP Server Integration
**Available MCP Servers**: Leverage configured MCP servers for enhanced capabilities.
### Serena MCP
**Code Tools**: `find_symbol`, `find_referencing_symbols`, `get_symbols_overview`, `search_for_pattern`, `rename_symbol`
**Persistent Memory** (stored in `.serena/memories/`):
- `write_memory` - Store command findings, patterns, decisions
- `read_memory` - Recall past information
- `list_memories` - Browse all memories
Use for command-specific persistent knowledge.
### Memory MCP
**Temporary Context** (in-memory, cleared after session):
- `create_entities` - Track entities during command execution
- `create_relations` - Model relationships
- `add_observations` - Add details
Use for temporary command state.
### Other MCP Servers
- **context7**: Library documentation
- **fetch**: Web content
- **playwright**: Browser automation
- **windows-mcp**: Windows automation
## Arguments
If your command accepts arguments, explain how to use them:
**$ARGUMENTS** - All arguments passed to the command as a single string
**$1** - First positional argument
**$2** - Second positional argument
**$3** - Third positional argument, etc.
### Example Usage:
```
/command-name argument1 argument2 argument3
```
In the command file:
- `$ARGUMENTS` would be: "argument1 argument2 argument3"
- `$1` would be: "argument1"
- `$2` would be: "argument2"
- `$3` would be: "argument3"
## Instructions
Provide clear, step-by-step instructions for Claude:
1. **First step**: What to do first
- Specific action or tool to use
- Expected behavior
2. **Second step**: Next action
- How to process information
- What to check or validate
3. **Final step**: Output or result
- Format for results
- What to return to the user
## Expected Output
Describe the format and structure of the command's output:
```markdown
## Section Title
- Item 1
- Item 2
**Summary**: Key findings...
```
Or for code output:
```language
// Expected code format
output_example();
```
## Advanced Features
### Bash Execution
Execute shell commands directly:
```markdown
!ls -la
!git status
!npm test
```
Prefix commands with `!` to run them immediately.
### File References
Include file contents in the prompt:
```markdown
Review this file: @path/to/file.js
Compare these files: @file1.py @file2.py
```
Use `@` prefix to automatically include file contents.
### Conditional Logic
Add conditional instructions:
```markdown
If $1 is "quick":
- Run fast analysis only
- Skip detailed checks
If $1 is "full":
- Run comprehensive analysis
- Include all checks
- Generate detailed report
```
## Examples
### Example 1: Basic Usage
```
/command-name
```
**Expected behavior**: Description of what happens
### Example 2: With Arguments
```
/command-name src/app.js detailed
```
**Expected behavior**: How arguments affect behavior
### Example 3: Advanced Usage
```
/command-name @file.js --option
```
**Expected behavior**: Combined features
## Best Practices
### When to Use This Command
- Scenario 1: When you need...
- Scenario 2: For tasks involving...
- Scenario 3: To quickly...
### Common Patterns
**Read then analyze:**
```markdown
1. Read the files in $ARGUMENTS
2. Analyze for [specific criteria]
3. Provide structured feedback
```
**Execute then report:**
```markdown
1. Run: !command $1
2. Parse the output
3. Summarize results
```
**Generate then write:**
```markdown
1. Generate [content type] based on $1
2. Write to specified location
3. Confirm completion
```
## Error Handling
Handle common issues gracefully:
```markdown
If no arguments provided:
- Show usage example
- Ask user for required input
If file not found:
- List similar files
- Suggest corrections
If operation fails:
- Explain the error
- Suggest next steps
```
## Integration with Other Features
### Works well with:
- **Hooks**: Can trigger pre/post command hooks
- **Subagents**: Can invoke specialized agents
- **Skills**: Commands can activate relevant skills
- **MCP Servers**: Can use MCP tools and resources
### Combine with tools:
```markdown
Use Read tool to load $1
Use Grep to search for $2
Use Edit to update findings
```
## Notes and Tips
- Keep commands focused on a single task
- Use descriptive names (verb-noun pattern)
- Document all arguments clearly
- Provide helpful examples
- Consider error cases
- Test with team members
---
## Template Usage Guidelines
### Naming Commands
**Good names** (verb-noun pattern):
- `/review-pr` - Review pull request
- `/generate-tests` - Generate unit tests
- `/check-types` - Check TypeScript types
- `/update-deps` - Update dependencies
**Poor names**:
- `/fix` - Too vague
- `/test` - Conflicts with built-in
- `/help` - Reserved
- `/my-command-that-does-many-things` - Too long
### Writing Descriptions
The `description` field appears in `/help` output:
**Good descriptions**:
```yaml
description: Review pull request for code quality and best practices
description: Generate unit tests for specified file or module
description: Update package dependencies and check for breaking changes
```
**Poor descriptions**:
```yaml
description: Does stuff
description: Helper command
description: Command
```
### Choosing Tool Permissions
Explicitly declare tools your command needs:
```yaml
# Read-only command
allowed-tools: Read(*), Grep(*), Glob(*)
# Git operations
allowed-tools: Bash(git status:*), Bash(git diff:*), Bash(git log:*)
# File modifications
allowed-tools: Read(*), Edit(*), Write(*), Bash(npm test:*)
# Comprehensive access
allowed-tools: Bash(*), Read(*), Write(*), Edit(*), Grep(*), Glob(*)
```
**Pattern syntax**:
- `Tool(*)` - All operations
- `Tool(pattern:*)` - Specific operation (e.g., `Bash(git status:*)`)
- `Tool(*pattern*)` - Contains pattern
### Using Arguments Effectively
**Simple single argument**:
```yaml
argument-hint: [file-path]
```
```markdown
Analyze the file: $ARGUMENTS
```
**Multiple arguments**:
```yaml
argument-hint: [source] [target]
```
```markdown
Compare $1 to $2 and identify differences.
```
**Optional arguments**:
```markdown
If $1 is provided:
- Use $1 as the target
Otherwise:
- Use current directory
```
### Command Categories
Organize related commands in subdirectories:
```
.claude/commands/
├── COMMANDS_TEMPLATE.md
├── git/
│ ├── commit.md
│ ├── review-pr.md
│ └── sync.md
├── testing/
│ ├── run-tests.md
│ ├── generate-tests.md
│ └── coverage.md
└── docs/
├── generate-readme.md
└── update-api-docs.md
```
Invoke with: `/git/commit`, `/testing/run-tests`, etc.
### Combining with Bash Execution
Execute shell commands inline:
```markdown
First, check the current branch:
!git branch --show-current
Then check for uncommitted changes:
!git status --short
If there are changes, show them:
!git diff
```
### Combining with File References
Include file contents automatically:
```markdown
Review the implementation in @$1 and suggest improvements.
Compare the old version @$1 with the new version @$2.
Analyze these related files:
@src/main.js
@src/utils.js
@tests/main.test.js
```
### Model Selection Strategy
Choose the right model for the task:
```yaml
# For complex reasoning and code generation (default)
model: claude-3-5-sonnet-20241022
# For fast, simple tasks (commit messages, formatting)
model: claude-3-5-haiku-20241022
# For most complex tasks (architecture, security reviews)
model: claude-opus-4-20250514
```
### Disabling Model Invocation
For simple text commands that don't need AI:
```yaml
---
description: Show project documentation
disable-model-invocation: true
---
# Project Documentation
Visit: https://github.com/org/repo
## Quick Links
- [Setup Guide](docs/setup.md)
- [API Reference](docs/api.md)
- [Contributing](CONTRIBUTING.md)
```
## Command vs Skill: When to Use What
### Use a **Command** when:
- ✅ User needs to explicitly trigger the action
- ✅ It's a specific workflow or routine task
- ✅ You want predictable, on-demand behavior
- ✅ Examples: `/review-pr`, `/generate-tests`, `/commit`
### Use a **Skill** when:
- ✅ Claude should automatically use it when relevant
- ✅ It's specialized knowledge or expertise
- ✅ You want Claude to discover it based on context
- ✅ Examples: PDF processing, Excel analysis, specific frameworks
### Can be **Both**:
Create a command that explicitly invokes a skill:
```markdown
---
description: Perform comprehensive code review
---
Activate the Code Review skill and analyze $ARGUMENTS for:
- Code quality
- Best practices
- Security issues
- Performance opportunities
```
---
## Testing Your Command
### 1. Basic Functionality
```
/command-name
```
Verify it executes without errors.
### 2. With Arguments
```
/command-name arg1 arg2
```
Check argument handling works correctly.
### 3. Edge Cases
```
/command-name
/command-name ""
/command-name with many arguments here
```
### 4. Tool Permissions
Verify declared tools work without extra permission prompts.
### 5. Team Testing
Have colleagues try the command and provide feedback.
---
## Quick Reference Card
| Element | Purpose | Required |
|---------|---------|----------|
| `description` | Shows in /help | ✅ Recommended |
| `allowed-tools` | Pre-approve tools | ❌ Optional |
| `argument-hint` | Show expected args | ❌ Optional |
| `model` | Specify model | ❌ Optional |
| `disable-model-invocation` | Skip AI for static text | ❌ Optional |
| Instructions | What to do | ✅ Yes |
| Examples | Usage demos | ✅ Recommended |
| Error handling | Handle failures | ✅ Recommended |
---
## Common Command Patterns
### 1. Read-Analyze-Report
```markdown
1. Read files specified in $ARGUMENTS
2. Analyze for [criteria]
3. Generate structured report
```
### 2. Execute-Parse-Summarize
```markdown
1. Run: !command $1
2. Parse the output
3. Summarize findings
```
### 3. Generate-Validate-Write
```markdown
1. Generate [content] based on $1
2. Validate against [rules]
3. Write to $2 or default location
```
### 4. Compare-Diff-Suggest
```markdown
1. Load both $1 and $2
2. Compare differences
3. Suggest improvements or migrations
```
### 5. Check-Fix-Verify
```markdown
1. Check for [issues] in $ARGUMENTS
2. Apply fixes automatically
3. Verify corrections worked
```
---
**Pro tip**: Start with simple commands and gradually add complexity. Test each feature before adding the next. Share with your team early for feedback.

.claude/commands/adr.md
---
description: Manage Architectural Decision Records (ADRs) - list, view, create, or update architectural decisions
allowed-tools: Read(*), mcp__serena__list_memories(*), mcp__serena__read_memory(*), mcp__serena__write_memory(*), Task(*)
argument-hint: [list|view|create|update] [adr-number-or-name]
---
# ADR Command
Manage Architectural Decision Records (ADRs) to document important architectural and technical decisions.
## What are ADRs?
Architectural Decision Records (ADRs) are documents that capture important architectural decisions along with their context and consequences. They help teams:
- Understand **why** decisions were made
- Maintain **consistency** across the codebase
- Onboard new team members faster
- Avoid repeating past mistakes
- Track the evolution of system architecture
## Usage
```bash
# List all ADRs
/adr list
# View a specific ADR
/adr view adr-001
/adr view adr-003-authentication-strategy
# Create a new ADR
/adr create
# Update/supersede an existing ADR
/adr update adr-002
```
## Instructions
### When user runs: `/adr list`
1. Use Serena MCP `list_memories` to find all memories starting with "adr-"
2. Parse and display them in a formatted table:
```markdown
## Architectural Decision Records
| ADR # | Status | Title | Date |
|-------|--------|-------|------|
| 001 | Accepted | Microservices Architecture | 2024-10-15 |
| 002 | Accepted | PostgreSQL Database Choice | 2024-10-18 |
| 003 | Proposed | Event-Driven Communication | 2025-10-20 |
| 004 | Deprecated | MongoDB (superseded by ADR-002) | 2024-10-10 |
**Total ADRs**: 4
**Active**: 2 | **Proposed**: 1 | **Deprecated/Superseded**: 1
```
3. Provide summary statistics
4. Offer to view specific ADRs or create new ones
### When user runs: `/adr view [number-or-name]`
1. Use Serena MCP `read_memory` with the specified ADR name
2. Display the full ADR content
3. Highlight key sections:
- Decision outcome
- Related ADRs
- Status
4. Check for related ADRs and offer to view them
5. If ADR is deprecated/superseded, show which ADR replaced it
### When user runs: `/adr create`
1. **Check existing ADRs** to determine next number:
- Use `list_memories` to find all "adr-*"
- Find highest number and increment
- If no ADRs exist, start with 001
2. **Ask clarifying questions**:
- What architectural decision needs to be made?
- What problem does this solve?
- What are the constraints?
- What options have you considered?
3. **Invoke architect agent** with Task tool:
- Pass the user's requirements
- Agent will create structured ADR
- Agent will ask to save to Serena memory
4. **Confirm ADR creation**:
- Show ADR number assigned
- Confirm it's saved to Serena memory
- Suggest reviewing related ADRs
### When user runs: `/adr update [number]`
1. **Load existing ADR**:
- Use `read_memory` to load the specified ADR
- Display current content
2. **Determine update type**:
- Ask: "What type of update?"
- Supersede (replace with new decision)
- Deprecate (mark as no longer valid)
- Amend (update details without changing decision)
3. **For Supersede**:
- Create new ADR using `/adr create` process
- Mark old ADR as "Superseded by ADR-XXX"
- Update old ADR in memory
- Link new ADR to old one
4. **For Deprecate**:
- Update status to "Deprecated"
- Add deprecation reason and date
- Save updated ADR
5. **For Amend**:
- Invoke architect agent to help with amendments
- Maintain version history
- Save updated ADR
## ADR Format
All ADRs should follow this standard format (see architect agent for full template):
```markdown
# ADR-XXX: [Decision Title]
**Status**: [Proposed | Accepted | Deprecated | Superseded by ADR-XXX]
**Date**: [YYYY-MM-DD]
## Context and Problem Statement
[What problem requires a decision?]
## Decision Drivers
- [Key factors influencing the decision]
## Considered Options
### Option 1: [Name]
- Pros: [...]
- Cons: [...]
## Decision Outcome
**Chosen option**: [Option X] because [justification]
## Consequences
- Positive: [...]
- Negative: [...]
[Additional sections as needed]
```
## Best Practices
### When to Create an ADR
Create an ADR when making decisions about:
- **Architecture**: System structure, component boundaries, communication patterns
- **Technology**: Language, framework, database, or major library choices
- **Security**: Authentication, authorization, encryption approaches
- **Infrastructure**: Deployment, hosting, scaling strategies
- **Standards**: Coding standards, testing approaches, monitoring strategies
### When NOT to Create an ADR
Don't create ADRs for:
- Trivial decisions that don't impact architecture
- Decisions easily reversible without significant cost
- Implementation details within a single component
- Personal preferences without architectural impact
### ADR Lifecycle
1. **Proposed**: Decision is being considered
2. **Accepted**: Decision is approved and should be followed
3. **Deprecated**: No longer recommended but may still exist in code
4. **Superseded**: Replaced by a newer ADR
### Integration with Other Agents
- **architect**: Creates and updates ADRs
- **code-reviewer**: Validates code aligns with ADRs
- **security-analyst**: Ensures security ADRs are followed
- **project-manager**: Loads ADRs to inform workflow planning
## Examples
### Example 1: Team wants to understand past decisions
```bash
User: /adr list
```
Agent lists all ADRs with status, allowing team to understand architectural history.
### Example 2: Reviewing specific decision
```bash
User: /adr view adr-003-authentication-strategy
```
Agent shows full ADR with rationale, alternatives, and consequences.
### Example 3: Making new architectural decision
```bash
User: /adr create
Agent: What architectural decision needs to be made?
User: We need to choose between REST and GraphQL for our API
Agent: [Asks clarifying questions, invokes architect agent]
Agent: Created ADR-005-api-architecture-graphql. Saved to Serena memory.
```
### Example 4: Superseding old decision
```bash
User: /adr update adr-002
Agent: [Shows current ADR-002: MongoDB choice]
Agent: What type of update? (supersede/deprecate/amend)
User: supersede - we're moving to PostgreSQL
Agent: [Creates ADR-006, marks ADR-002 as superseded]
```
## Notes
- ADRs are stored in Serena memory (`.serena/memories/`)
- ADRs persist across sessions
- All agents can read ADRs to inform their work
- Architect agent is responsible for creating properly formatted ADRs
- Use sequential numbering (001, 002, 003, etc.)
- Keep ADRs concise but comprehensive
- Update ADRs rather than deleting them (maintain history)

.claude/commands/analyze.md
---
description: Perform comprehensive code analysis including complexity, dependencies, and quality metrics
allowed-tools: Read(*), Grep(*), Glob(*), Bash(*)
argument-hint: [path]
---
# Analyze Command
Perform comprehensive code analysis on the specified path or current directory.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Consult CLAUDE.md for:
- **Analysis Tools**: (SonarQube, ESLint, Pylint, Roslyn Analyzers, etc.)
- **Quality Metrics**: Project-specific thresholds
- **Package Manager**: For dependency analysis
## Instructions
1. **Determine Scope**
- If $ARGUMENTS provided: Analyze that specific path
- Otherwise: Analyze entire project
2. **Load Previous Analysis Lessons** ⚠️ **IMPORTANT**
- Use Serena MCP `list_memories` to see past analysis results
- Use `read_memory` to load relevant findings:
- `"analysis-*"` - Previous analysis reports
- `"lesson-analysis-*"` - Past analysis insights
- `"pattern-*"` - Known patterns in the codebase
- Compare current state with past analysis to identify trends
- Apply lessons learned from previous analyses
3. **Gather Context**
- Read CLAUDE.md for project structure and quality standards
- Identify primary language(s) from CLAUDE.md
- Use serena MCP to get codebase overview
4. **Perform Analysis**
- **Code Complexity**: Identify complex functions/classes
- **Dependencies**: Check for outdated or vulnerable packages
- **Code Duplication**: Find repeated code patterns
- **Test Coverage**: Assess test coverage (if tests exist)
- **Code Style**: Check against CLAUDE.md standards
- **Documentation**: Assess documentation completeness
- **Compare with past analysis** to identify improvements or regressions
5. **Generate Report**
- Summarize findings by category
- Highlight top issues to address
- Provide actionable recommendations
- Reference CLAUDE.md standards
## MCP Server Usage
### Serena MCP
**Code Navigation**:
- `get_symbols_overview` - Analyze file structure and complexity
- `find_symbol` - Locate specific components for detailed analysis
- `find_referencing_symbols` - Understand dependencies and coupling
- `search_for_pattern` - Find code duplication and patterns
**Persistent Memory** (stored in `.serena/memories/`):
- Use `write_memory` to store analysis results:
- "analysis-code-complexity-[date]"
- "analysis-dependencies-[date]"
- "analysis-technical-debt-[date]"
- "pattern-complexity-hotspots"
- Use `read_memory` to compare with past analyses and track trends
- Use `list_memories` to view analysis history
### Memory MCP (Knowledge Graph)
**Temporary Context** (in-memory, cleared after session):
- Use `create_entities` for components being analyzed
- Use `create_relations` to map dependencies and relationships
- Use `add_observations` to document findings and metrics
**Note**: After analysis completes, store a summary in Serena memory.
### Context7 MCP
- Use `get-library-docs` for best practices and quality standards for the tech stack
## Output Format
```markdown
## Analysis Report
### Project: [Name]
**Analyzed**: [Path]
**Date**: [Current date]
### Summary
- **Total Files**: [count]
- **Languages**: [from CLAUDE.md]
- **Lines of Code**: [estimate]
### Quality Metrics
- **Code Complexity**: [High/Medium/Low]
- **Test Coverage**: [percentage if available]
- **Documentation**: [Good/Fair/Poor]
### Key Findings
#### 🔴 Critical Issues
1. [Issue with location and fix]
#### 🟡 Warnings
1. [Warning with recommendation]
#### 💡 Suggestions
1. [Improvement idea]
### Dependencies
- **Total Dependencies**: [count]
- **Outdated**: [list if any]
- **Vulnerabilities**: [list if any]
### Code Complexity
**Most Complex Files**:
1. [file]: [complexity score]
2. [file]: [complexity score]
### Recommendations
1. [Priority action 1]
2. [Priority action 2]
3. [Priority action 3]
### Next Steps
- [ ] Address critical issues
- [ ] Update dependencies
- [ ] Improve test coverage
- [ ] Refactor complex code
### Lessons Learned 📚
**Document key insights from this analysis:**
- What patterns or anti-patterns were most prevalent?
- What areas of technical debt need attention?
- What quality metrics should be tracked going forward?
- What process improvements could prevent similar issues?
**Save to Serena Memory?**
After completing the analysis, ask the user:
> "I've identified several lessons learned from this code analysis. Would you like me to save these insights to Serena memory for future reference? This will help track technical debt and maintain code quality over time."
If user agrees, use Serena MCP `write_memory` to store:
- `"analysis-[category]-[date]"` (e.g., "analysis-code-complexity-2025-10-20")
- `"lesson-analysis-[topic]-[date]"` (e.g., "lesson-analysis-dependency-management-2025-10-20")
- Include: What was analyzed, findings, trends, recommendations, and action items
```
## Guidelines
- Always provide actionable recommendations
- Prioritize findings by impact and effort
- Reference CLAUDE.md standards throughout
- Use MCP servers for deep analysis
- Compare current analysis with past analyses from Serena memory to track trends

.claude/commands/explain.md
---
description: Explain code in detail - how it works, patterns used, and key concepts
allowed-tools: Read(*), Grep(*), Glob(*), Bash(git log:*)
argument-hint: [file-or-selection]
---
# Explain Command
Provide detailed, educational explanations of code.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Consult CLAUDE.md for:
- **Language**: To explain syntax and language-specific features
- **Frameworks**: To identify framework patterns and conventions
- **Project Patterns**: To explain project-specific architectures
## Instructions
1. **Identify Target**
- If $ARGUMENTS provided: Explain that file/code
- If user has selection: Explain selected code
- Otherwise: Ask what needs explanation
2. **Analyze Code**
- Read CLAUDE.md to understand project context
- Use serena MCP to understand code structure
- Identify patterns, algorithms, and design choices
- Understand dependencies and relationships
3. **Provide Explanation**
Include these sections:
- **Purpose**: What this code does (high-level)
- **How It Works**: Step-by-step breakdown
- **Key Concepts**: Patterns, algorithms, principles used
- **Dependencies**: What it relies on
- **Important Details**: Edge cases, gotchas, considerations
- **In Context**: How it fits in the larger system
4. **Adapt Explanation Level**
- Use clear, educational language
- Explain technical terms when first used
- Provide examples where helpful
- Reference CLAUDE.md patterns when relevant
## MCP Server Usage
### Serena MCP
**Code Navigation**:
- `find_symbol` - Locate symbols to explain
- `find_referencing_symbols` - Understand usage and relationships
- `get_symbols_overview` - Get file structure and organization
- `search_for_pattern` - Find related patterns
**Persistent Memory** (stored in `.serena/memories/`):
- Use `write_memory` to store complex explanations for future reference:
- "explanation-algorithm-[name]"
- "explanation-pattern-[pattern-name]"
- "explanation-architecture-[component]"
- Use `read_memory` to recall past explanations of related code
- Use `list_memories` to find previous explanations
### Memory MCP (Knowledge Graph)
**Temporary Context** (in-memory, cleared after session):
- Use `create_entities` for code elements being explained
- Use `create_relations` to map relationships between components
- Use `add_observations` to document understanding
**Note**: After explanation, store reusable patterns in Serena memory.
### Context7 MCP
- Use `get-library-docs` for framework/library documentation and official explanations
## Output Format
```markdown
## Explanation: [Code/File Name]
### Purpose
[What this code accomplishes and why it exists]
### How It Works
#### Step 1: [High-level step]
[Detailed explanation]
```[language]
[Relevant code snippet]
```
#### Step 2: [Next step]
[Explanation]
### Key Concepts
#### [Concept 1]: [Name]
[Explanation of pattern/algorithm/principle]
#### [Concept 2]: [Name]
[Explanation]
### Dependencies
- **[Dependency 1]**: [What it provides and why needed]
- **[Dependency 2]**: [What it provides and why needed]
### Important Details
- **[Detail 1]**: [Edge case or consideration]
- **[Detail 2]**: [Gotcha or important note]
### In the Larger System
[How this fits into the project architecture from CLAUDE.md]
### Related Code
[Links to related files or functions]
### Further Reading
[References to documentation or patterns]
```
## Example Output Scenarios
### For a Function
- Explain algorithm and complexity
- Show input/output examples
- Highlight edge cases
- Explain why this approach was chosen
### For a Class
- Explain responsibility and role
- Show key methods and their purposes
- Explain relationships with other classes
- Highlight design patterns used
### For a Module
- Explain module's purpose in architecture
- Show public API and how to use it
- Explain internal organization
- Show integration points
## Guidelines
- Start with high-level understanding, then dive into details
- Use analogies when helpful
- Explain "why" not just "what"
- Reference CLAUDE.md patterns
- Be educational but concise
- Assume reader has basic programming knowledge
- Adapt detail level based on code complexity

---
description: Implement features or changes following best practices and project conventions
allowed-tools: Read(*), Write(*), Edit(*), Grep(*), Glob(*), Bash(*)
argument-hint: [feature-description]
---
# Implement Command
Implement requested features following project conventions and best practices.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Before implementing, consult CLAUDE.md for:
- **Technology Stack**: Languages, frameworks, libraries to use
- **Project Structure**: Where to place new code
- **Code Style**: Naming conventions, formatting rules
- **Testing Requirements**: Test coverage and patterns
- **Build Process**: How to build and test changes
## Instructions
1. **Understand Requirements**
- Parse feature description from $ARGUMENTS or ask user
- Clarify scope and acceptance criteria
- Identify impacted areas of codebase
- Check for existing similar implementations
2. **Review Project Context**
- Read CLAUDE.md for:
- Technology stack and patterns
- Code style and conventions
- Project structure
- Use serena MCP to analyze existing patterns
- Use context7 MCP for framework best practices
3. **Plan Implementation**
- Identify files to create/modify
- Determine appropriate design patterns
- Consider edge cases and error handling
- Plan for testing
- Check if architect agent needed for complex features
4. **Implement Feature**
- Follow CLAUDE.md code style and conventions
- Write clean, maintainable code
- Add appropriate error handling
- Include inline documentation
- Follow project's architectural patterns
- Use MCP servers for:
- `serena`: Finding related code, refactoring
- `context7`: Framework/library documentation
- `memory`: Storing implementation decisions
5. **Add Tests**
- Generate tests using project's test framework from CLAUDE.md
- Cover happy paths and edge cases
- Ensure tests are maintainable
- Consider using test-engineer agent for complex scenarios
6. **Verify Implementation**
- Run tests using command from CLAUDE.md
- Check code style compliance
- Verify no regressions
- Consider using code-reviewer agent for quality check
7. **Document Changes**
- Add/update inline comments where needed
- Update relevant documentation
- Note any architectural decisions
## Implementation Best Practices
### Code Quality
- Keep functions small and focused (< 50 lines typically)
- Follow Single Responsibility Principle
- Use meaningful names from CLAUDE.md conventions
- Add comments for "why", not "what"
- Handle errors gracefully
### Testing
- Write tests alongside implementation
- Aim for coverage targets from CLAUDE.md
- Test edge cases and error conditions
- Make tests readable and maintainable
### Security
- Validate all inputs
- Never hardcode secrets
- Use parameterized queries (see the sketch after this list)
- Follow least privilege principle
- Consider security-analyst agent for sensitive features
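The parameterized-query bullet above is easiest to see in code. A minimal sketch, assuming a Python project using the standard-library `sqlite3` driver and a hypothetical `users` table; substitute the database driver named in CLAUDE.md:
```python
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # Unsafe (shown only as a comment): string formatting lets a crafted
    # email value inject SQL.
    #   conn.execute(f"SELECT id, name FROM users WHERE email = '{email}'")

    # Safe: the driver binds the value as a parameter, never as SQL text.
    cur = conn.execute("SELECT id, name FROM users WHERE email = ?", (email,))
    return cur.fetchone()
```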
### Performance
- Avoid premature optimization
- Consider scalability for data operations
- Use appropriate data structures
- Consider optimize command if performance-critical
## MCP Server Usage
### Serena MCP
**Code Navigation**:
- `find_symbol` - Locate existing patterns to follow
- `find_referencing_symbols` - Understand dependencies and impact
- `get_symbols_overview` - Understand file structure before modifying
- `search_for_pattern` - Find similar implementations
- `rename_symbol` - Safely refactor across codebase
**Persistent Memory** (stored in `.serena/memories/`):
- Use `write_memory` to store implementation lessons:
- "lesson-error-handling-[feature-name]"
- "pattern-api-integration-[service]"
- "lesson-performance-optimization-[component]"
- "decision-architecture-[feature-name]"
- Use `read_memory` to recall past implementation patterns
- Use `list_memories` to browse lessons learned
### Memory MCP (Knowledge Graph)
**Temporary Context** (in-memory, cleared after session):
- Use `create_entities` for features/components being implemented
- Use `create_relations` to track dependencies during implementation
- Use `add_observations` to document implementation decisions
**Note**: After implementation completes, store key lessons in Serena memory.
### Context7 MCP
- Use `get-library-docs` for current framework/library documentation and best practices
### Other MCP Servers
- **sequential-thinking**: For complex algorithmic problems
## Agent Collaboration
For complex features, consider delegating to specialized agents:
- **architect**: For system design and architecture decisions
- **test-engineer**: For comprehensive test generation
- **security-analyst**: For security-sensitive features
- **code-reviewer**: For quality assurance before completion
## Output Format
```markdown
## Implementation Complete: [Feature Name]
### Summary
[Brief description of what was implemented]
### Files Changed
- **Created**: [list new files]
- **Modified**: [list modified files]
### Key Changes
1. **[Change 1]**: [Description and location]
2. **[Change 2]**: [Description and location]
3. **[Change 3]**: [Description and location]
### Design Decisions
- **[Decision 1]**: [Why this approach was chosen]
- **[Decision 2]**: [Trade-offs considered]
### Testing
- **Tests Added**: [Count and location]
- **Coverage**: [Percentage if known]
- **Test Command**: `[from CLAUDE.md]`
### How to Use
```[language]
[Code example showing how to use the new feature]
```
### Verification Steps
1. [Step to verify feature works]
2. [Step to run tests]
3. [Step to check integration]
### Next Steps
- [ ] Code review (use /review or code-reviewer agent)
- [ ] Update documentation
- [ ] Performance testing if needed
- [ ] Security review for sensitive features
```
## Usage Examples
```bash
# Implement a specific feature
/implement Add user authentication with JWT
# Implement with more context
/implement Create a payment processing service that integrates with Stripe API, handles webhooks, and stores transactions
# Quick implementation
/implement Add logging to the error handler
```
## Guidelines
- **Always** read CLAUDE.md before starting
- **Follow** existing project patterns
- **Test** your implementation
- **Document** non-obvious decisions
- **Ask** for clarification when requirements are unclear
- **Use** appropriate agents for specialized tasks
- **Verify** changes don't break existing functionality
- **Consider** security implications

---
description: Optimize code for performance - identify bottlenecks and suggest improvements
allowed-tools: Read(*), Grep(*), Glob(*), Bash(*)
argument-hint: [file-or-function]
---
# Optimize Command
Analyze and optimize code for better performance.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Consult CLAUDE.md for:
- **Performance Tools**: (Profilers, benchmarking tools)
- **Performance Targets**: Expected response times, throughput
- **Infrastructure**: Deployment constraints affecting performance
## Instructions
1. **Identify Target**
- If $ARGUMENTS provided: Focus on that file/function
- Otherwise: Ask user what needs optimization
2. **Analyze Performance**
- Read CLAUDE.md for performance requirements
- Identify performance bottlenecks:
- Inefficient algorithms (O(n²) vs O(n))
- Unnecessary computations
- Database N+1 queries
- Missing indexes
- Excessive memory allocation
- Blocking operations
- Large file/data processing
3. **Propose Optimizations**
- Suggest algorithmic improvements
- Recommend caching strategies
- Propose database query optimization
- Suggest async/parallel processing
- Recommend lazy loading
- Propose memoization for expensive calculations (see the sketch after these steps)
4. **Provide Implementation**
- Show before/after code comparison
- Estimate performance improvement
- Note any trade-offs (memory vs speed, complexity vs performance)
- Ensure changes maintain correctness
- Add performance tests if possible
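For the memoization bullet in step 3, a minimal Python sketch using the standard library's `functools.lru_cache`; `price_lookup` and its cost are hypothetical stand-ins:
```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def price_lookup(product_id: str) -> float:
    # Stand-in for an expensive call (database, remote API, heavy computation).
    time.sleep(0.1)
    return 9.99

price_lookup("sku-42")  # first call does the real work
price_lookup("sku-42")  # repeat call is served from the in-memory cache
```
The trade-off noted in step 4 applies here: cache only functions whose result is stable for the lifetime of the cache, and use `maxsize` to bound memory.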
## Common Optimization Patterns
### Algorithm Optimization
- Replace nested loops with hash maps (O(n²) → O(n)); see the sketch below
- Use binary search instead of linear search (O(n) → O(log n))
- Apply dynamic programming for recursive problems
- Use efficient data structures (sets vs arrays for lookups)
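A minimal sketch of the first bullet, in Python with made-up order data; the same idea applies in whatever language CLAUDE.md specifies:
```python
orders = [{"id": 1, "customer_id": 7}, {"id": 2, "customer_id": 9}]
vip_customers = [7, 11, 13]

# O(n * m): scans the whole VIP list once per order.
vip_orders_slow = [o for o in orders if o["customer_id"] in vip_customers]

# O(n + m): build a set once; each membership test is O(1) on average.
vip_set = set(vip_customers)
vip_orders_fast = [o for o in orders if o["customer_id"] in vip_set]
```
The binary-search bullet follows the same shape: keep the list sorted and use `bisect` for O(log n) lookups when a set or hash map is not an option.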
### Database Optimization
- Add indexes for frequent queries
- Use eager loading to prevent N+1 queries (see the sketch below)
- Implement pagination for large datasets
- Use database-level aggregations
- Cache query results
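For the N+1 bullet, a raw-SQL sketch with the standard-library `sqlite3` module and hypothetical `authors`/`books` tables; with an ORM, the equivalent fix is its eager-loading option:
```python
import sqlite3

conn = sqlite3.connect("library.db")  # hypothetical database file

# N+1: one query for the authors, then one extra query per author.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
books_by_author = {
    author_id: conn.execute(
        "SELECT title FROM books WHERE author_id = ?", (author_id,)
    ).fetchall()
    for author_id, _name in authors
}

# Single round trip: let the database do the join.
rows = conn.execute(
    "SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id"
).fetchall()
```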
### Resource Management
- Implement connection pooling
- Use lazy loading for large objects
- Stream data instead of loading entirely (see the sketch below)
- Release resources promptly
- Use async operations for I/O
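And for the streaming bullet, a minimal Python sketch over a hypothetical text file:
```python
# Loads the whole file into memory; struggles once the file outgrows RAM.
def count_rows_eager(path: str) -> int:
    with open(path) as f:
        return len(f.readlines())

# Streams one line at a time; memory use stays flat regardless of file size.
def count_rows_streaming(path: str) -> int:
    with open(path) as f:
        return sum(1 for _ in f)
```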
## MCP Server Usage
### Serena MCP
**Code Navigation**:
- `find_symbol` - Locate performance-critical code sections
- `find_referencing_symbols` - Understand where slow code is called
- `get_symbols_overview` - Identify hot paths and complexity
- `search_for_pattern` - Find inefficient patterns across codebase
**Persistent Memory** (stored in `.serena/memories/`):
- Use `write_memory` to store optimization findings:
- "optimization-algorithm-[function-name]"
- "optimization-database-[query-type]"
- "lesson-performance-[component]"
- "pattern-bottleneck-[issue-type]"
- Use `read_memory` to recall past performance issues and solutions
- Use `list_memories` to review optimization history
### Memory MCP (Knowledge Graph)
**Temporary Context** (in-memory, cleared after session):
- Use `create_entities` for bottlenecks being analyzed
- Use `create_relations` to map performance dependencies
- Use `add_observations` to document performance metrics
**Note**: After optimization, store successful strategies in Serena memory.
### Context7 MCP
- Use `get-library-docs` for framework-specific performance best practices
### Other MCP Servers
- **sequential-thinking**: For complex optimization reasoning
## Output Format
```markdown
## Performance Optimization Report
### Target: [File/Function]
### Current Performance
- **Complexity**: [Big O notation]
- **Estimated Time**: [for typical inputs]
- **Bottlenecks**: [Identified issues]
### Proposed Optimizations
#### Optimization 1: [Name]
**Type**: [Algorithm/Database/Caching/etc.]
**Impact**: [High/Medium/Low]
**Effort**: [High/Medium/Low]
**Current Code**:
```[language]
[current implementation]
```
**Optimized Code**:
```[language]
[optimized implementation]
```
**Expected Improvement**: [e.g., "50% faster", "O(n) instead of O(n²)"]
**Trade-offs**: [Any downsides or considerations]
#### Optimization 2: [Name]
[...]
### Performance Comparison
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Time Complexity | [O(...)] | [O(...)] | [%] |
| Space Complexity | [O(...)] | [O(...)] | [%] |
| Typical Runtime | [ms] | [ms] | [%] |
### Recommendations
1. [Priority 1]: Implement [optimization] - [reason]
2. [Priority 2]: Consider [optimization] - [reason]
3. [Priority 3]: Monitor [metric] - [reason]
### Testing Strategy
- Benchmark with typical data sizes
- Profile before and after
- Test edge cases (empty, large inputs)
- Verify correctness maintained
### Next Steps
- [ ] Implement optimization
- [ ] Add performance tests
- [ ] Benchmark results
- [ ] Update documentation
```

.claude/commands/review.md
---
description: Review code for quality, security, and best practices - delegates to code-reviewer agent
allowed-tools: Read(*), Grep(*), Glob(*), Task(*)
argument-hint: [file-or-path]
---
# Review Command
Perform comprehensive code review using the specialized code-reviewer agent.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
This command delegates to the code-reviewer agent, which automatically adapts to the project's technology stack from CLAUDE.md.
## Instructions
1. **Determine Scope**
- If $ARGUMENTS provided: Review that specific file/path
- If user has recent changes: Review uncommitted changes
- Otherwise: Ask what needs review
2. **Load Past Review Lessons**
- The code-reviewer agent will automatically load past lessons
- This ensures institutional knowledge is applied to the review
3. **Invoke Code Reviewer Agent**
- Use Task tool with `code-reviewer` subagent
- Pass the target files/path to review
- Agent will check:
- Code quality and best practices
- Potential bugs or issues
- Performance improvements
- Security vulnerabilities
- Documentation needs
- Adherence to CLAUDE.md standards
4. **Present Results**
- Display agent's findings organized by severity
- Highlight critical issues requiring immediate attention
- Provide actionable recommendations
## Why Use This Command
The `/review` command provides a quick way to invoke the code-reviewer agent for code quality checks. The agent:
- Adapts to your tech stack from CLAUDE.md
- Uses MCP servers for deep analysis (serena, context7)
- Follows OWASP and security best practices
- Provides structured, actionable feedback
## Usage Examples
```bash
# Review a specific file
/review src/services/payment-processor.ts
# Review a directory
/review src/components/
# Review current changes
/review
```
## What Gets Reviewed
The code-reviewer agent checks:
### Code Quality
- Code smells and anti-patterns
- Naming conventions (from CLAUDE.md)
- DRY principle violations
- Proper separation of concerns
- Design pattern usage
### Security
- Injection vulnerabilities
- Authentication/authorization issues
- Hardcoded secrets
- Input validation
- Secure data handling
### Performance
- Algorithm efficiency
- Database query optimization
- Unnecessary computations
- Resource management
### Maintainability
- Code complexity
- Test coverage
- Documentation completeness
- Consistency with project style
## MCP Server Usage
The code-reviewer agent automatically uses:
- **serena**: For semantic code analysis
- **context7**: For framework best practices
- **memory**: For project-specific patterns
## Output Format
The agent provides structured output:
```markdown
### Summary
[Overview of findings]
### Critical Issues 🔴
[Must fix before merge]
### Warnings 🟡
[Should address]
### Suggestions 💡
[Nice-to-have improvements]
### Positive Observations ✅
[Good practices found]
### Compliance Check
- [ ] Code style
- [ ] Security
- [ ] Tests
- [ ] Documentation
```
## Lessons Learned
The code-reviewer agent will automatically:
1. Document lessons learned from the review
2. Ask if you want to save insights to Serena memory
3. Store findings for future reference if you agree
This helps build institutional knowledge and improve code quality over time.
## Alternative: Direct Agent Invocation
You can also invoke the agent directly in conversation:
```
"Please use the code-reviewer agent to review src/auth/login.ts"
```
The `/review` command is simply a convenient shortcut.

---
description: Generate boilerplate code structure for new features (component, service, API endpoint, etc.)
allowed-tools: Read(*), Write(*), Edit(*), Grep(*), Glob(*), Bash(*)
argument-hint: [type] [name]
---
# Scaffold Command
Generate boilerplate code structure for common components.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Consult CLAUDE.md for:
- **Project Structure**: Where files should be created
- **Naming Conventions**: How to name files and components
- **Framework Patterns**: Component structure for the framework
- **Testing Setup**: Test file structure and naming
## Usage
```
/scaffold [type] [name]
```
Examples:
- `/scaffold component UserProfile`
- `/scaffold api user`
- `/scaffold service PaymentProcessor`
- `/scaffold model Product`
## Instructions
1. **Parse Arguments**
- $1 = type (component, api, service, model, test, etc.)
- $2 = name (PascalCase or camelCase as appropriate)
2. **Read Project Patterns**
- Review CLAUDE.md for:
- Project structure and conventions
- Framework in use
- Existing patterns
- Find similar existing files as templates
- Use serena MCP to analyze existing patterns
3. **Generate Structure**
- Create appropriate files per project conventions
- Follow naming from CLAUDE.md
- Include:
- Main implementation file
- Test file (if applicable)
- Interface/types (if applicable)
- Documentation comments
- Imports for common dependencies
4. **Adapt to Framework**
- Apply framework-specific patterns
- Use correct syntax from CLAUDE.md language
- Include framework boilerplate
- Follow project's organization
## Supported Types
Adapt based on CLAUDE.md technology stack:
### Frontend (React, Vue, Angular, etc.)
- `component`: UI component with props/state
- `page`: Page-level component with routing
- `hook`: Custom hook (React)
- `store`: State management slice
- `service`: Frontend service/API client
### Backend (Express, Django, Rails, etc.)
- `api`: API endpoint/route with controller
- `service`: Business logic service (see the sketch after these lists)
- `model`: Data model/entity
- `repository`: Data access layer
- `middleware`: Request middleware
### Full Stack
- `feature`: Complete feature with frontend + backend
- `module`: Self-contained module
- `test`: Test suite for existing code
### Database
- `migration`: Database migration
- `seed`: Database seed data
- `schema`: Database schema definition
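To make the output concrete, here is a minimal sketch of what `/scaffold service PaymentProcessor` might generate for a Python project; the path, naming convention, and dependency-injection style are assumptions to be replaced by whatever CLAUDE.md specifies:
```python
# src/services/payment_processor.py (assumed location)
class PaymentProcessor:
    """Business-logic service for payments. Core logic is left to implement."""

    def __init__(self, gateway) -> None:
        # Injected dependency keeps the service easy to test.
        self.gateway = gateway

    def process(self, amount: float, currency: str = "USD") -> bool:
        """Charge the given amount. TODO: implement."""
        raise NotImplementedError
```
A matching test file (for example `tests/services/test_payment_processor.py`) would be created alongside it, using the test framework and naming rules from CLAUDE.md.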
## MCP Server Usage
### Serena MCP
**Code Navigation**:
- `get_symbols_overview` - Find existing patterns to follow
- `find_symbol` - Locate similar components to use as templates
- `search_for_pattern` - Find common boilerplate patterns
**Persistent Memory** (stored in `.serena/memories/`):
- Use `write_memory` to store scaffold patterns:
- "scaffold-pattern-[type]-[framework]"
- "scaffold-convention-[component-type]"
- "lesson-boilerplate-[feature]"
- Use `read_memory` to recall project scaffolding conventions
- Use `list_memories` to review scaffold patterns
### Memory MCP (Knowledge Graph)
**Temporary Context** (in-memory, cleared after session):
- Use `create_entities` for components being scaffolded
- Use `create_relations` to map component dependencies
- Use `add_observations` to document scaffold decisions
**Note**: After scaffolding, store reusable patterns in Serena memory.
### Context7 MCP
- Use `get-library-docs` for framework scaffolding patterns and best practices
## Output Format
After scaffolding:
```markdown
## Scaffolded: [Type] - [Name]
### Files Created
- `[path/to/file1]` - [Description]
- `[path/to/file2]` - [Description]
- `[path/to/file3]` - [Description]
### Next Steps
1. Implement core logic in `[main file]`
2. Add tests in `[test file]`
3. Update imports where needed
4. Run: [test command from CLAUDE.md]
### Example Usage
```[language]
[Code example showing how to use the scaffolded code]
```
### Integration
[How this integrates with existing code]
```

---
description: Display information about this Claude Code setup - agents, commands, configuration, and capabilities
allowed-tools: Read(*), Glob(*), Bash(ls:*)
disable-model-invocation: false
---
# Setup Info Command
Display comprehensive information about your Claude Code configuration.
## Instructions
Provide a detailed overview of the Claude Code setup for this project.
1. **Scan Configuration**
- List all available agents in `.claude/agents/`
- List all available commands in `.claude/commands/`
- List all output styles in `.claude/output-styles/`
- Check for CLAUDE.md project configuration
- Identify configured MCP servers
2. **Read Project Configuration**
- Read CLAUDE.md to show technology stack
- Check `.claude/settings.json` for configuration
- Identify project structure from CLAUDE.md
3. **Generate Report**
## Output Format
```markdown
# Claude Code Setup Information
## Project Configuration
### Technology Stack
[Read from CLAUDE.md - show languages, frameworks, testing tools]
### Project Structure
[From CLAUDE.md - show directory organization]
---
## Available Agents 🤖
Specialized AI assistants for different tasks:
### [Agent Name] - [Description]
**Use when**: [Trigger scenarios]
**Capabilities**: [What it can do]
**Tools**: [Available tools]
[List all agents found in .claude/agents/]
---
## Available Commands ⚡
Slash commands for quick actions:
### /[command-name] - [Description]
**Usage**: `/command-name [arguments]`
**Purpose**: [What it does]
[List all commands found in .claude/commands/]
---
## Output Styles 🎨
Communication style options:
### [Style Name] - [Description]
**Best for**: [When to use]
**Activate**: [How to enable]
[List all output styles found in .claude/output-styles/]
---
## MCP Servers 🔌
Enhanced capabilities through Model Context Protocol:
### Configured MCP Servers
- **serena**: Semantic code navigation and refactoring
- **context7**: Up-to-date library documentation
- **memory**: Project knowledge graph
- **fetch**: Web content retrieval
- **playwright**: Browser automation
- **windows-mcp**: Windows desktop automation
- **sequential-thinking**: Complex reasoning
[Show which are actually configured based on settings.json or environment]
---
## Quick Start Guide
### For New Features
1. Use `/implement [description]` to create features
2. Use `/test [file]` to generate tests
3. Use `/review [file]` for code quality check
### For Understanding Code
1. Use `/explain [file]` for detailed explanations
2. Use `/analyze [path]` for metrics and analysis
### For Improvements
1. Use `/optimize [function]` for performance
2. Use `/scaffold [type] [name]` for boilerplate
3. Invoke agents: "Use the architect agent to design..."
### For Code Quality
1. Use `/review` before committing
2. Invoke security-analyst for security reviews
3. Use code-reviewer agent for thorough analysis
---
## Customization
### Adding New Commands
1. Create file in `.claude/commands/[name].md`
2. Use [`.COMMANDS_TEMPLATE.md`](.claude/commands/.COMMANDS_TEMPLATE.md) as guide
3. Add frontmatter with description and tools
4. Command becomes available as `/[name]`
### Adding New Agents
1. Create file in `.claude/agents/[name].md`
2. Use [`.AGENT_TEMPLATE.md`](.claude/agents/.AGENT_TEMPLATE.md) as guide
3. Define tools, model, and instructions
4. Invoke with: "Use the [name] agent to..."
### Configuring Technology Stack
Edit [CLAUDE.md](../CLAUDE.md) Technology Stack section:
- Update languages and frameworks
- Define testing tools
- Specify build commands
- All agents/commands adapt automatically
---
## Directory Structure
```
.claude/
├── agents/ # Specialized AI agents
├── commands/ # Slash commands
├── output-styles/ # Response formatting
├── settings.json # Configuration
└── [other files]
CLAUDE.md # Project tech stack config
```
---
## Helpful Resources
- **Templates**: Check `.AGENT_TEMPLATE.md` and `.COMMANDS_TEMPLATE.md`
- **Documentation**: See `.claude/IMPLEMENTATION_COMPLETE.md`
- **Analysis**: See `.claude/TEMPLATE_REVIEW_ANALYSIS.md`
- **Official Docs**: https://docs.claude.com/en/docs/claude-code/
---
## Support
### Getting Help
1. Ask Claude directly: "How do I...?"
2. Read template files for examples
3. Check CLAUDE.md for project conventions
4. Review agent/command markdown files
### Common Tasks
- **Create tests**: `/test [file]` or use test-engineer agent
- **Review code**: `/review [file]` or use code-reviewer agent
- **Add feature**: `/implement [description]`
- **Generate boilerplate**: `/scaffold [type] [name]`
- **Explain code**: `/explain [file]`
- **Analyze codebase**: `/analyze [path]`
- **Optimize performance**: `/optimize [function]`
---
**Setup Version**: 2.0.0 (Technology-Agnostic with MCP Integration)
**Last Updated**: [Current date]
```
## MCP Server Usage
### Serena MCP
**Code Navigation**:
- `list_dir` - Scan .claude directory for agents/commands
- `find_file` - Locate configuration files
- `get_symbols_overview` - Analyze configuration structure
**Persistent Memory** (stored in `.serena/memories/`):
- Use `read_memory` to include custom setup notes if stored
- Use `list_memories` to show available project memories
### Memory MCP (Knowledge Graph)
**Temporary Context**: Not needed for this informational command.
### Context7 MCP
- Not needed for this informational command
## Notes
This command provides a comprehensive overview of:
- What capabilities are available
- How to use them effectively
- How to customize and extend
- Where to find more information
The information is dynamically generated based on actual files in the `.claude/` directory and CLAUDE.md configuration.

.claude/commands/test.md
---
description: Generate and run tests for code - creates comprehensive test suites
allowed-tools: Read(*), Write(*), Grep(*), Glob(*), Bash(*)
argument-hint: [file-or-path]
---
# Test Command
Generate comprehensive tests or run existing tests.
## Technology Adaptation
**Configuration Source**: [CLAUDE.md](../../CLAUDE.md)
Consult CLAUDE.md for:
- **Test Framework**: (xUnit, Jest, pytest, JUnit, Go test, Rust test, etc.)
- **Test Command**: How to run tests
- **Test Location**: Where tests are stored
- **Coverage Tool**: Code coverage command
## Instructions
1. **Read CLAUDE.md** for test framework and patterns
2. **Determine Action**
- If code file in $ARGUMENTS: Generate tests for it
- If test file in $ARGUMENTS: Run that test
- If directory in $ARGUMENTS: Run all tests in directory
- If no argument: Run all project tests
3. **For Test Generation**
- Analyze code to identify test cases
- Generate tests covering happy paths, edge cases, errors (see the sketch after these steps)
- Follow CLAUDE.md test patterns
- Use test-engineer agent for complex scenarios
4. **For Test Execution**
- Use test command from CLAUDE.md
- Display results clearly
- Show coverage if available
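As an illustration of step 3, a minimal sketch assuming a Python project with pytest and a hypothetical `slugify` helper; substitute the framework and patterns named in CLAUDE.md:
```python
import pytest
from myproject.text import slugify  # hypothetical module under test

def test_happy_path():
    assert slugify("Hello World") == "hello-world"

def test_edge_case_empty_string():
    assert slugify("") == ""

@pytest.mark.parametrize("bad_input", [None, 42])
def test_rejects_non_strings(bad_input):
    with pytest.raises(TypeError):
        slugify(bad_input)
```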
## MCP Usage
- **serena**: `find_symbol` to analyze code structure
- **context7**: `get-library-docs` for testing best practices